Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
The task of classifying videos of natural dynamic scenes into appropriate classes has gained a lot of attention in recent years. The problem becomes especially challenging when the camera used to capture the video is dynamic. In this paper, we analyse the performance of statistical aggregation (SA) techniques on various pre-trained convolutional neural network (CNN) models to address this problem. The proposed approach works by extracting CNN activation features for a number of frames in a video and then uses an aggregation scheme to obtain a robust feature descriptor for the video. We show through results that the proposed approach performs better than the state of the art on the Maryland and YUPenn datasets. The final descriptor obtained is powerful enough to distinguish among dynamic scenes and is even capable of addressing the scenario where the camera motion is dominant and the scene dynamics are complex. Further, this paper presents an extensive study of the performance of various aggregation methods and their combinations. We compare the proposed approach with other dynamic scene classification algorithms on two publicly available datasets, Maryland and YUPenn, to demonstrate its superior performance.
In the field of single image recognition tasks, bag-of-features based methods were initially prevalent among the research community @cite_17 @cite_25 @cite_7 @cite_16 @cite_5 . These methods were based on the principle of geometric independence or orderless spatial representation. Later, these methods were enhanced by the inclusion of weak geometric information through spatial pyramid matching (SPM) @cite_26 . This method employed histogram intersection at various levels of the pyramids for matching features. However, CNN based approaches have been able to achieve even higher accuracies as observed in some of the recent works @cite_15 @cite_13 @cite_8 @cite_18 . This sparked a lot of recent research work on architectures and applications of CNNs for visual classification and recognition tasks.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_8", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_25", "@cite_17" ], "mid": [ "1849277567", "2162915993", "2012592962", "", "", "2951925341", "", "", "2104978738", "1625255723" ], "abstract": [ "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. 
The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.", "", "", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. 
In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.", "", "", "Discriminative learning is challenging when examples are sets of features, and the sets vary in cardinality and lack any sort of meaningful ordering. Kernel-based classification methods can learn complex decision boundaries, but a kernel over unordered set inputs must somehow solve for correspondences - generally a computationally expensive task that becomes impractical for large set sizes. We present a new fast kernel function which maps unordered feature sets to multi-resolution histograms and computes a weighted histogram intersection in this space. This \"pyramid match\" computation is linear in the number of features, and it implicitly finds correspondences based on the finest resolution histogram cell where a matched pair first appears. Since the kernel does not penalize the presence of extra features, it is robust to clutter. We show the kernel function is positive-definite, making it valid for use in learning algorithms whose optimal solutions are guaranteed only for Mercer kernels. We demonstrate our algorithm on object recognition tasks and show it to be accurate and dramatically faster than current approaches", "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. 
These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information." ] }
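The statistical aggregation idea in the record above (collapsing per-frame CNN activations into a single video-level descriptor) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the concrete schemes (mean, max, and their concatenation), the 30-frame video, and the 4096-D fully-connected-layer feature size are all illustrative assumptions.

```python
import numpy as np

# Hypothetical frame-level CNN activations: 30 frames, 4096-D fc-layer features.
# In practice these would come from a pre-trained CNN, not a random generator.
rng = np.random.default_rng(0)
frame_features = rng.normal(size=(30, 4096))

def aggregate(features, scheme="mean"):
    """Collapse per-frame activations (frames x dims) into one video descriptor."""
    if scheme == "mean":
        return features.mean(axis=0)
    if scheme == "max":
        return features.max(axis=0)
    if scheme == "mean+max":  # concatenating two statistics doubles the dimension
        return np.concatenate([features.mean(axis=0), features.max(axis=0)])
    raise ValueError(f"unknown scheme: {scheme}")

descriptor = aggregate(frame_features, "mean+max")
print(descriptor.shape)  # (8192,)
```

The resulting fixed-length vector can then be fed to any standard classifier (e.g., a linear SVM), regardless of how many frames the video had.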
1502.05243
1946072890
The task of classifying videos of natural dynamic scenes into appropriate classes has gained a lot of attention in recent years. The problem becomes especially challenging when the camera used to capture the video is dynamic. In this paper, we analyse the performance of statistical aggregation (SA) techniques on various pre-trained convolutional neural network (CNN) models to address this problem. The proposed approach works by extracting CNN activation features for a number of frames in a video and then uses an aggregation scheme to obtain a robust feature descriptor for the video. We show through results that the proposed approach performs better than the state of the art on the Maryland and YUPenn datasets. The final descriptor obtained is powerful enough to distinguish among dynamic scenes and is even capable of addressing the scenario where the camera motion is dominant and the scene dynamics are complex. Further, this paper presents an extensive study of the performance of various aggregation methods and their combinations. We compare the proposed approach with other dynamic scene classification algorithms on two publicly available datasets, Maryland and YUPenn, to demonstrate its superior performance.
In @cite_13 , the CNN architecture consisted of five convolutional layers, followed by two fully connected layers (4096-dimensional) and an output layer. The output from the fifth max-pooling layer was shown to still preserve global spatial information @cite_18 . Even the activation features from the fully connected layer were found to be sensitive to global distortions such as rotation, translation, and scaling @cite_15 . However, they have proven to be very powerful general feature descriptors for high-level vision tasks. Several CNN implementations such as DeCAF, Caffe and OverFeat, trained on very large datasets, are available for feature extraction to perform image classification tasks @cite_29 @cite_9 @cite_22 . These CNNs, pre-trained on large datasets such as ImageNet, have been used efficiently in scene classification and have achieved impressively high accuracies @cite_15 (for example, MOP-CNN, OverFeat, etc.). Moreover, the ImageNet-trained models of these implementations have been shown to generalize well to other datasets @cite_29 @cite_18 . CNN features obtained from object recognition datasets have also been used to obtain high accuracy in various high-level computer vision tasks @cite_6 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_9", "@cite_29", "@cite_6", "@cite_15", "@cite_13" ], "mid": [ "1849277567", "1487583988", "2950094539", "2953360861", "2953391683", "2951925341", "" ], "abstract": [ "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. 
Finally, we release a feature extractor from our best model called OverFeat.", "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU ( @math 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.", "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. 
We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.", "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. 
However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.", "" ] }
1502.05243
1946072890
The task of classifying videos of natural dynamic scenes into appropriate classes has gained a lot of attention in recent years. The problem becomes especially challenging when the camera used to capture the video is dynamic. In this paper, we analyse the performance of statistical aggregation (SA) techniques on various pre-trained convolutional neural network (CNN) models to address this problem. The proposed approach works by extracting CNN activation features for a number of frames in a video and then uses an aggregation scheme to obtain a robust feature descriptor for the video. We show through results that the proposed approach performs better than the state of the art on the Maryland and YUPenn datasets. The final descriptor obtained is powerful enough to distinguish among dynamic scenes and is even capable of addressing the scenario where the camera motion is dominant and the scene dynamics are complex. Further, this paper presents an extensive study of the performance of various aggregation methods and their combinations. We compare the proposed approach with other dynamic scene classification algorithms on two publicly available datasets, Maryland and YUPenn, to demonstrate its superior performance.
Spatio-temporal approaches were introduced with spatio-temporal oriented energies @cite_4 , the work that also introduced the YUPenn dataset. The same work concluded that even relatively simple spatio-temporal feature descriptors achieved consistently higher performance on both the YUPenn and Maryland datasets than the HOF+GIST and HOF+Chaos approaches. More details on both dynamic scene datasets are covered in Section . The current state-of-the-art approach, bags of space-time energies (BoSE), uses a bag of visual words for dynamic scene classification @cite_21 . Here, local features extracted via spatio-temporally oriented filters are employed; they are encoded using a learned dictionary and then dynamically pooled. Among all peer-reviewed studies, this technique currently holds the highest accuracy on the two mentioned datasets @cite_21 . Recently, a (not yet peer-reviewed) work proposed a novel three-dimensional CNN architecture for spatio-temporal classification problems @cite_11 . This technique shows promising results and a marginal improvement over the current state-of-the-art method. Another recent work used the Caffe framework and vectorial pooling (VLAD) to obtain better-than-state-of-the-art performance on the event detection problem @cite_1 ; off-the-shelf descriptors were used to obtain a high score on the TRECVID-MED dataset.
{ "cite_N": [ "@cite_1", "@cite_21", "@cite_4", "@cite_11" ], "mid": [ "2950076437", "2006656585", "2071524685", "1584759658" ], "abstract": [ "In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6 to 36.8 for the TRECVID MEDTest 14 dataset and from 34.0 to 44.6 for the TRECVID MEDTest 13 dataset. This work is the core part of the winning solution of our CMU-Informedia team in TRECVID MED 2014 competition.", "This paper presents a unified bag of visual word (BoW) framework for dynamic scene recognition. The approach builds on primitive features that uniformly capture spatial and temporal orientation structure of the imagery (e.g., video), as extracted via application of a bank of spatiotemporally oriented filters. 
Various feature encoding techniques are investigated to abstract the primitives to an intermediate representation that is best suited to dynamic scene representation. Further, a novel approach to adaptive pooling of the encoded features is presented that captures spatial layout of the scene even while being robust to situations where camera motion and scene dynamics are confounded. The resulting overall approach has been evaluated on two standard, publicly available dynamic scene datasets. The results show that in comparison to a representative set of alternatives, the proposed approach outperforms the previous state-of-the-art in classification accuracy by 10%.", "Natural scene classification is a fundamental challenge in computer vision. By far, the majority of studies have limited their scope to scenes from single image stills and thereby ignore potentially informative temporal cues. The current paper is concerned with determining the degree of performance gain in considering short videos for recognizing natural scenes. Towards this end, the impact of multiscale orientation measurements on scene classification is systematically investigated, as related to: (i) spatial appearance, (ii) temporal dynamics and (iii) joint spatial appearance and dynamics. These measurements in visual space, x-y, and spacetime, x-y-t, are recovered by a bank of spatiotemporal oriented energy filters. In addition, a new data set is introduced that contains 420 image sequences spanning fourteen scene categories, with temporal scene information due to objects and surfaces decoupled from camera-induced ones. This data set is used to evaluate classification performance of the various orientation-related representations, as well as state-of-the-art alternatives. 
It is shown that a notable performance increase is realized by spatiotemporal approaches in comparison to purely spatial or purely temporal methods.", "The analysis and understanding of video sequences is currently quite an active research field. Many applications such as video surveillance, optical motion capture or those of multimedia need to first be able to detect the objects moving in a scene filmed by a static camera. This requires the basic operation that consists of separating the moving objects called \"foreground\" from the static information called \"background\". Many background subtraction methods have been developed ( (2010); (2008)). A recent survey (Bouwmans (2009)) shows that subspace learning models are well suited for background subtraction. Principal Component Analysis (PCA) has been used to model the background by significantly reducing the data's dimension. To perform PCA, different Robust Principal Components Analysis (RPCA) models have been recently developed in the literature. The background sequence is then modeled by a low rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. However, authors compare their algorithm only with the PCA ( (1999)) or another RPCA model. Furthermore, the evaluation is not made with the datasets and the measures currently used in the field of background subtraction. Considering all of this, we propose to evaluate RPCA models in the field of video-surveillance. Contributions of this chapter can be summarized as follows: 1) A survey regarding robust principal component analysis and 2) An evaluation and comparison on different video surveillance datasets" ] }
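The bag-of-visual-words pipeline described in this record (local descriptors encoded against a learned dictionary, then pooled into a single histogram) can be sketched minimally as follows. The descriptor dimensionality, codebook size, random data, and the simple hard assignment with sum-pooling are illustrative assumptions; BoSE's actual encoding and adaptive pooling are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical local spatio-temporal descriptors for one video (200 of them,
# 64-D each) and a "learned" dictionary of 16 codewords (e.g., from k-means).
descriptors = rng.normal(size=(200, 64))
codebook = rng.normal(size=(16, 64))

# Hard-assignment encoding: each descriptor votes for its nearest codeword.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
assignments = dists.argmin(axis=1)          # shape (200,), values in 0..15

# Sum-pooling the votes yields a 16-bin bag-of-words histogram for the video,
# L1-normalised so videos with different numbers of descriptors are comparable.
histogram = np.bincount(assignments, minlength=len(codebook)).astype(float)
histogram /= histogram.sum()

print(histogram.shape)  # (16,)
```

Richer encodings (soft assignment, VLAD, Fisher vectors) replace the hard `argmin` step but keep the same encode-then-pool structure.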
1502.05209
2116903819
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, we have seen verified proof assistants being used to formally verify the validity of such proofs.
The proof of the four colour theorem from 1976 @cite_12 @cite_5 was not the first computer-assisted proof, but it was the first to generate broad awareness of a new area of mathematics, sometimes dubbed "experimental" or "computational" mathematics. Since then, numerous theorems in mathematics and computer science have been proved by computer-assisted and computer-generated proofs. Besides obvious philosophical debates about what constitutes a mathematical proof, concerns about the validity of such proofs have been raised ever since. In particular, proofs based on exhausting the solution space have been met with skepticism.
{ "cite_N": [ "@cite_5", "@cite_12" ], "mid": [ "1566410171", "1656795039" ], "abstract": [ "The sulphur- and oxygen-containing diaryl compounds of the formula: I in which A and B, which may be the same or different, represent O, S, SO or SO2, Alk is a C1-C4 hydrocarbon radical with a straight or branched chain, R represents COOH, an esterified COOH group, a carboxylic amide group, OH, O-SO2CH3, NH2, NHR1, NR1R2, NHZOH, NHZNR1R2, C(=NH)NH2, C(=NH)NHOH or 2- DELTA 2-imidazolinyl, Z is a C2-C4 hydrocarbon radical with a straight or branched chain, and R1 and R2 each represent a C1-C3 lower alkyl group, or together form, with the nitrogen atom to which they are linked, a N-heterocyclic group of 5 to 7 ring atoms which can be substituted and can comprise a second hetero-atom, and their addition salts with bases when R is COOH, and their addition salts with acids when R is a basic radical, are useful pharmacological agents in the treatment of circulatory complaints such as cardio-vascular illnesses.", "The isomer (R)-(-)-3- [2-(p-hydroxyphenyl)-1-methylethyl]-aminomethyl -3,4-dihydr o-2H-1,5-benzodioxepin-3-ol and its non-toxic acid addition salts is unexpectedly more potent in the reduction of elevated interocular pressure in mammals than its enantiomorph or the racemate thereof." ] }
1502.05209
2116903819
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, we have seen verified proof assistants being used to formally verify the validity of such proofs.
During the last decade, we have seen an increasing use of verified proof assistants to create formally verified computer-generated proofs. This has been a success story, and it has resulted in a plethora of formalizations of mathematical proofs, a list too long to even start mentioning particular instances. As a prominent example, consider the formal proof of the four colour theorem from 2005 @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "121484200" ], "abstract": [ "The Tale of a Brainteaser Francis Guthrie certainly did it, when he coined his innocent little coloring puzzle in 1852. He managed to embarrass successively his mathematician brother, his brother’s professor, Augustus de Morgan, and all of de Morgan’s visitors, who couldn’t solve it; the Royal Society, who only realized ten years later that Alfred Kempe’s 1879 solution was wrong; and the three following generations of mathematicians who couldn’t fix it [19]. Even Appel and Haken’s 1976 triumph [2] had a hint of defeat: they’d had a computer do the proof for them! Perhaps the mathematical controversy around the proof died down with their book [3] and with the elegant 1995 revision [13] by Robertson, Saunders, Seymour, and Thomas. However something was still amiss: both proofs combined a textual argument, which could reasonably be checked by inspection, with computer code that could not. Worse, the empirical evidence provided by running code several times with the same input is weak, as it is blind to the most common cause of “computer” error: programmer error. For some thirty years, computer science has been working out a solution to this problem: formal program proofs. The idea is to write code that describes not only what the machine should do, but also why it should be doing it—a formal proof of correctness. The validity of the proof is an objective mathematical fact that can be checked by a different program, whose own validity can be ascertained empirically because it does run on many inputs. The main technical difficulty is that formal proofs are very difficult to produce," ] }
1502.05209
2116903819
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, we have seen verified proof assistants being used to formally verify the validity of such proofs.
Outside the world of formal proofs, computer-generated proofs are flourishing, too, and growing to tremendous sizes. The proof of Erdős' discrepancy conjecture for @math from 2014 @cite_10 has been touted as one of the largest mathematical proofs and produced approx. 13 GB of proof witnesses. Such large-scale proofs are extremely challenging for formal verification. Given the current state of theorem provers and computing equipment, it is unthinkable to use the approach of @cite_3 of importing the proof witnesses into Coq, a process clearly prohibitive for proofs as large as the ones we consider.
{ "cite_N": [ "@cite_10", "@cite_3" ], "mid": [ "2964092005", "205472424" ], "abstract": [ "In 1930s Paul Erdős conjectured that for any positive integer C in any infinite ±1 sequence (x n ) there exists a subsequence x d , x 2d , x 3d ,…, x kd , for some positive integers k and d, such that ( i=1 ^k x_ id >C ). The conjecture has been referred to as one of the major open problems in combinatorial number theory and discrepancy theory. For the particular case of C = 1 a human proof of the conjecture exists; for C = 2 a bespoke computer program had generated sequences of length 1124 of discrepancy 2, but the status of the conjecture remained open even for such a small bound. We show that by encoding the problem into Boolean satisfiability and applying the state of the art SAT solver, one can obtain a discrepancy 2 sequence of length 1160 and a proof of the Erdős discrepancy conjecture for C = 2, claiming that no discrepancy 2 sequence of length 1161, or more, exists. We also present our partial results for the case of C = 3.", "Proof-by-reflection is a well-established technique that employs decision procedures to reduce the size of proof-terms. Currently, decision procedures can be written either in Type Theory--in a purely functional way that also ensures termination-- or in an effectful programming language, where they are used as oracles for the certified checker. The first option offers strong correctness guarantees, while the second one permits more efficient implementations. We propose a novel technique for proof-by-reflection that marries, in Type Theory, an effectful language with (partial) proofs of correctness. The key to our approach is to use simulable monads, where a monad is simulable if, for all terminating reduction sequences in its equivalent effectful computational model, there exists a witness from which the same reduction may be simulated a posteriori by the monad. 
We encode several examples using simulable monads and demonstrate the advantages of the technique over previous approaches." ] }
1502.05209
2116903819
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, we have seen formal proofs using verified proof assistants being used to verify the validity of such proofs.
Recent years have seen the appearance of verified proof tools that rely on untrusted oracles, e.g. for a verified compiler @cite_6 or for polyhedral analysis @cite_15 . Here, the verified proof tool is relegated to a checker of the computations of the untrusted oracle, e.g., by using hand-written untrusted code to compute a result and verified (extracted) code to check it before continuing the computation.
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "2120065196", "2023035194" ], "abstract": [ "Polyhedra form an established abstract domain for inferring runtime properties of programs using abstract interpretation. Computations on them need to be certified for the whole static analysis results to be trusted. In this work, we look at how far we can get down the road of a posteriori verification to lower the overhead of certification of the abstract domain of polyhedra. We demonstrate methods for making the cost of inclusion certificate generation negligible. From a performance point of view, our single-representation, constraints-based implementation compares with state-of-the-art implementations.", "This paper reports on the development and formal verification (proof of semantic preservation) of CompCert, a compiler from Clight (a large subset of the C programming language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of critical software and its formal verification: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well." ] }
1502.05209
2116903819
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, we have seen formal proofs using verified proof assistants being used to verify the validity of such proofs.
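The untrusted-oracle approach described in this record can be illustrated with a small sketch: unverified code produces a candidate result, and a small trusted checker validates it before the computation proceeds, so only the checker needs to be verified. The function names below are hypothetical and the example (checking a sort) is chosen only because its validity check is much simpler than the computation itself; it is not code from the cited systems.

```python
# Hypothetical sketch of the untrusted-oracle / verified-checker pattern:
# the oracle may be arbitrarily complex or buggy; only the checker is trusted.
from collections import Counter

def untrusted_sort_oracle(xs):
    # Stands in for hand-written, unverified code (could be wrong).
    return sorted(xs)

def checked_sort(xs):
    candidate = untrusted_sort_oracle(xs)
    # Trusted checks: result is ordered and is a permutation of the input.
    ordered = all(a <= b for a, b in zip(candidate, candidate[1:]))
    permutation = Counter(candidate) == Counter(xs)
    if not (ordered and permutation):
        raise ValueError("oracle produced an invalid witness")
    return candidate
```

The design point is the asymmetry: verifying the checker (a linear scan plus a multiset comparison) is far cheaper than verifying the oracle, which mirrors how the verified proof tool only re-checks the oracle's witnesses.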
The termination proof certification projects IsaFoR CeTA @cite_9 , based on Isabelle HOL, and A3PAT @cite_1 , based on Coq, go one step further and use an untrusted oracle approach, where different termination analyzers provide proof witnesses, which are stored and later checked. However, a typical termination proof has @math - @math proof witnesses and totals a few KB to a few MB of data, and recent work @cite_13 mentions that problems were encountered when dealing with proofs using "several hundred megabytes" of oracle data. In contrast, our target proof of size-optimality of sorting networks with @math inputs requires dealing with nearly @math million proof witnesses, totalling @math GB of oracle data.
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_1" ], "mid": [ "2003992820", "1840129892", "1537398137" ], "abstract": [ "We provide an overview of CPF, the certification problem format, and explain some design decisions. Whereas CPF was originally invented to combine three different formats for termination proofs into a single one, in the meanwhile proofs for several other properties of term rewrite systems are also expressible: like confluence, complexity, and completion. As a consequence, the format is already supported by several tools and certifiers. Its acceptance is also demonstrated in international competitions: the certified tracks of both the termination and the confluence competition utilized CPF as exchange format between automated tools and trusted certifiers.", "Bounded increase is a termination technique where it is tried to find an argument x of a recursive function that is increased repeatedly until it reaches a bound b, which might be ensured by a condition x<b. Since the predicates like < may be arbitrary user-defined recursive functions, an induction calculus is utilized to prove conditional constraints. In this paper, we present a full formalization of bounded increase in the theorem prover Isabelle HOL. It fills one large gap in the pen-and-paper proof, and it includes generalized inference rules for the induction calculus as well as variants of the Babylonian algorithm to compute square roots. These algorithms were required to write executable functions which can certify untrusted termination proofs from termination tools that make use of bounded increase. And indeed, the resulting certifier was already useful: it detected an implementation error that remained undetected since 2007.", "We present the rewriting toolkit CiME3. 
Amongst other original features, this version enjoys two kinds of engines: to handle and discover proofs of various properties of rewriting systems, and to generate Coq scripts from proof traces given in certification problem format in order to certify them with a skeptical proof assistant like Coq. Thus, these features open the way for using CiME3 to add automation to proofs of termination or confluence in a formal development in the Coq proof assistant." ] }
1502.05137
2162454958
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
Other works investigated means to recognise more general aspects of user behaviour. The authors of @cite_11 investigated the recognition of everyday office activities from visual behaviour, such as reading, taking hand-written notes, or browsing the web. Based on long-term eye movement recordings, they later showed that high-level contextual cues, such as social interactions or being mentally active, could also be inferred from visual behaviour @cite_36 . They further showed that cognitive processes, such as visual memory recall or cognitive load, could be inferred from gaze information as well @cite_41 @cite_6 ; the former finding was recently confirmed by @cite_7 .
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_41", "@cite_6", "@cite_11" ], "mid": [ "2094539281", "2137022750", "2108410782", "1587950995", "" ], "abstract": [ "In human vision, acuity and color sensitivity are greatest at the center of fixation and fall off rapidly as visual eccentricity increases. Humans exploit the high resolution of central vision by actively moving their eyes three to four times each second. Here we demonstrate that it is possible to classify the task that a person is engaged in from their eye movements using multivariate pattern classification. The results have important theoretical implications for computational and neural models of eye movement control. They also have important practical implications for using passively recorded eye movements to infer the cognitive state of a viewer, information that can be used as input for intelligent human-computer interfaces and related applications.", "In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings and a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and opens up new venues for research on eye-based behavioural monitoring and life logging.", "Physical activity, location, as well as a person's psychophysiological and affective state are common dimensions for developing context-aware systems in ubiquitous computing. 
An important yet missing contextual dimension is the cognitive context that comprises all aspects related to mental information processing, such as perception, memory, knowledge, or learning. In this work we investigate the feasibility of recognising visual memory recall. We use a recognition methodology that combines minimum redundancy maximum relevance feature selection (mRMR) with a support vector machine (SVM) classifier. We validate the methodology in a dual user study with a total of fourteen participants looking at familiar and unfamiliar pictures from four picture categories: abstract, landscapes, faces, and buildings. Using person-independent training, we are able to discriminate between familiar and unfamiliar abstract pictures with a top recognition rate of 84.3 (89.3 recall, 21.0 false positive rate) over all participants. We show that eye movement analysis is a promising approach to infer the cognitive context of a person and discuss the key challenges for the real-world implementation of eye-based cognition-aware systems.", "Hearing instruments (HIs) have emerged as true pervasive computers as they continuously adapt the hearing program to the user's context. However, current HIs are not able to distinguish different hearing needs in the same acoustic environment. In this work, we explore how information derived from body and eye movements can be used to improve the recognition of such hearing needs. We conduct an experiment to provoke an acoustic environment in which different hearing needs arise: active conversation and working while colleagues are having a conversation in a noisy office environment. We record body movements on nine body locations, eye movements using electrooculography (EOG), and sound using commercial HIs for eleven participants. Using a support vector machine (SVM) classifier and person-independent training we improve the accuracy of 77 based on sound to an accuracy of 92 using body movements. 
With a view to a future implementation into a HI we then perform a detailed analysis of the sensors attached to the head. We achieve the best accuracy of 86 using eye movements compared to 84 for head movements. Our work demonstrates the potential of additional sensor modalities for future HIs and motivates to investigate the wider applicability of this approach on further hearing situations and needs.", "" ] }
1502.05137
2162454958
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
Several previous works investigated the use of gaze information as an implicit measure of relevance in image retrieval tasks. For example, Oyekoya and Stentiford compared similarity measures based on a visual saliency model as well as on real human gaze patterns, indicating better performance for gaze @cite_18 . In later works, the same and other authors showed that gaze information yielded significantly better performance than random selection or than using saliency information @cite_39 @cite_12 . Coddington presented a similar system but used two separate screens for the task @cite_21 , while the authors of @cite_20 focused on implicit cues obtained from gaze in real-time interfaces. With the goal of making implicit relevance feedback richer, Klami proposed to infer from gaze which parts of the image the user found most relevant @cite_26 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_21", "@cite_39", "@cite_12", "@cite_20" ], "mid": [ "2012885596", "2065570765", "1985622654", "2104251414", "1988055532", "2172091789" ], "abstract": [ "Different modes of human-computer interaction will play a major part in making computing increasingly pervasive. More natural methods of interaction are in demand to replace devices such as the keyboard and the mouse, and it is becoming more important to develop the next generation of human-computer interfaces that can anticipate the user's intended actions. Human behaviour depends on highly developed abilities to perceive and interpret visual information and provides a medium for the next generation of image retrieval interfaces. If the computer can correctly interpret the user's eye gaze behaviour, it will be able to anticipate the user's objectives and retrieve images and video extremely rapidly and with a minimum of thought and manual involvement.", "A number of studies have recently used eye movements of a user inspecting the content as implicit relevance feedback for proactive retrieval systems. Typically binary feedback for images or text paragraphs is inferred from the gaze pattern. We seek to make such feedback richer for image retrieval, by inferring which parts of the image the user found relevant. For this purpose, we present a novel Bayesian mixture model for inferring possible target regions directly from gaze data alone, and show how the relevance of those regions can then be inferred using a simple classifier that is independent of the content or the task", "In this paper we present a novel gaze-based image retrieval application. The application is designed to be run on a dual monitor setup with a separate eye tracking device dedicated to each monitor. A source image is displayed on one monitor and the retrieved images are displayed on the second monitor. The system is completely gaze controlled. 
The user selects one or more objects or regions in the source image by fixating on them. The system then retrieves images containing similar objects from an image database. These are displayed in a grid on the second monitor. The user can then fixate on one of these images to select it as the new source image and the process can be repeated until a satisfactory image is found.", "This paper explores the feasibility of using an eye tracker as an image retrieval interface. A database of 1000 Corel images is used in the study and results are analysed using ANOVA. Results from participants performing image search tasks show that eye tracking data can be used to reach target images in fewer steps than by random selection. The effects of the intrinsic difficulty of finding images and the time allowed for successive selections were also considered. The results indicated evidence of the use of pre-attentive vision during visual search.", "Given the growing amount of very large image databases, content based image retrieval (CBIR) is becoming more and more important. One of the major challenges in CBIR is the semantic gap - commonly used feature-based algorithms are not able to identify what really draws human attention in an image. This problem is more crucial for localized CBIR, where certain regions parts of the image are what the user is really interested in. This paper explores how human gaze can be utilized to extract regions of interest (ROIs) of an image to perform attention based image retrieval. Using eye tracking data and knowledge about foveal and peripheral vision of humans, we present a foveal fixation clustering algorithm that automatically generates ROIs in an image while a person is viewing it. To objectively set different parameters of the algorithm, a small user study was conducted. The method was evaluated for use in a localized CBIR system. Image retrieval results using the publicly available SIVAL dataset were scored using mean average precision (MAP). 
Comparison to a saliency-based visual attention algorithm as well as to manually labeled regions showed that the retrieval results of the developed algorithm are nearly two times better than the saliency-based visual attention algorithm and very close to the results using hand-labeled regions.", "We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and when the user zooms in, the images predicted to be the most relevant are brought out. The key novelty is that the relevance feedback is inferred from implicit cues obtained in real-time from the gaze pattern, using an estimator learned during a separate training phase. The natural zooming interface can be connected to any content-based information retrieval engine operating on user feedback. We show with experiments on one engine that there is sufficient amount of information in the gaze patterns to make the estimated relevance feedback a viable choice to complement or even replace explicit feedback by pointing-and-clicking." ] }
1502.05137
2162454958
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
Only a few previous works focused on visual search and the problem of predicting search targets from gaze. The authors of @cite_22 aimed to predict subjects' gaze patterns during categorical search tasks. They designed a series of experiments in which participants had to find two categorical search targets (teddy bear and butterfly) among four visually similar distractors. They predicted the number of fixations made prior to search judgements as well as the percentage of first eye movements landing on the search target. In another work, they showed how to predict the categorical search targets themselves from eye fixations @cite_5 . The authors of @cite_9 focused on predicting search targets from fixations. In three experiments, participants had to find a binary pattern and 3-level luminance patterns out of a set of other patterns, as well as one of 15 objects in 11 synthetic natural scenes. They showed that binary patterns with higher similarity to the search target were viewed more often by participants. Additionally, they found that when the complexity of the search target increased, participants were guided more by sub-patterns than by the whole pattern.
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_22" ], "mid": [ "2052423441", "2312708194", "2124153422" ], "abstract": [ "Is it possible to infer a person's goal by decoding their fixations on objects? Two groups of participants categorically searched for either a teddy bear or butterfly among random category distractors, each rated as high, medium, or low in similarity to the target classes. Target-similar objects were preferentially fixated in both search tasks, demonstrating information about target category in looking behavior. Different participants then viewed the searchers' scanpaths, superimposed over the target-absent displays, and attempted to decode the target category (bear butterfly). Bear searchers were classified perfectly; butterfly searchers were classified at 77 . Bear and butterfly Support Vector Machine (SVM) classifiers were also used to decode the same preferentially fixated objects and found to yield highly comparable classification rates. We conclude that information about a person's search goal exists in fixation behavior, and that this information can be behaviorally decoded to reveal a search target—essentially reading a person's mind by analyzing their fixations.", "We address the question of inferring the search target from fixation behavior in visual search. Such inference is possible since during search, our attention and gaze are guided toward visual features similar to those in the search target. We strive to answer two fundamental questions: what are the most powerful algorithmic principles for this task, and how does their performance depend on the amount of available eye movement data and the complexity of the target objects? In the first two experiments, we choose a random-dot search paradigm to eliminate contextual influences on search. We present an algorithm that correctly infers the target pattern up to 50 times as often as a previously employed method and promises sufficient power and robustness for interface control. 
Moreover, the current data suggest a principal limitation of target inference that is crucial for interface design: if the target pattern exceeds a certain spatial complexity level, only a subpattern tends to guide the observers' eye movements, which drastically impairs target inference. In the third experiment, we show that it is possible to predict search targets in natural scenes using pattern classifiers and classic computer vision features significantly above chance. The availability of compelling inferential algorithms could initiate a new generation of smart, gaze-controlled interfaces and wearable visual technologies that deduce from their users' eye movements the visual information for which they are looking. In a broader perspective, our study shows directions for efficient intent decoding from eye movements. HighlightsProviding a unified theoretical framework for intent decoding using eye movements.Proposing two new algorithms for search target inference from fixations.Studying the impact of target complexity in search performance and target inference.Sharing a large collection of code and data to promote future research in this area.", "We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from an support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6 13 20) present absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. 
The model not only captured target-present absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search." ] }
1502.05137
2162454958
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
The works of @cite_5 and @cite_9 are most closely related to ours. However, both works only considered simplified visual stimuli or synthesised natural scenes in a closed-world setting, in which all potential search targets were part of the training set and fixations for all of these targets had been observed. In contrast, our work is the first to address the open-world setting, in which we no longer assume that fixation data is available to train for these targets, and to present a new problem formulation for search target recognition in this open-world setting.
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2052423441", "2312708194" ], "abstract": [ "Is it possible to infer a person's goal by decoding their fixations on objects? Two groups of participants categorically searched for either a teddy bear or butterfly among random category distractors, each rated as high, medium, or low in similarity to the target classes. Target-similar objects were preferentially fixated in both search tasks, demonstrating information about target category in looking behavior. Different participants then viewed the searchers' scanpaths, superimposed over the target-absent displays, and attempted to decode the target category (bear butterfly). Bear searchers were classified perfectly; butterfly searchers were classified at 77 . Bear and butterfly Support Vector Machine (SVM) classifiers were also used to decode the same preferentially fixated objects and found to yield highly comparable classification rates. We conclude that information about a person's search goal exists in fixation behavior, and that this information can be behaviorally decoded to reveal a search target—essentially reading a person's mind by analyzing their fixations.", "We address the question of inferring the search target from fixation behavior in visual search. Such inference is possible since during search, our attention and gaze are guided toward visual features similar to those in the search target. We strive to answer two fundamental questions: what are the most powerful algorithmic principles for this task, and how does their performance depend on the amount of available eye movement data and the complexity of the target objects? In the first two experiments, we choose a random-dot search paradigm to eliminate contextual influences on search. We present an algorithm that correctly infers the target pattern up to 50 times as often as a previously employed method and promises sufficient power and robustness for interface control. 
Moreover, the current data suggest a principal limitation of target inference that is crucial for interface design: if the target pattern exceeds a certain spatial complexity level, only a subpattern tends to guide the observers' eye movements, which drastically impairs target inference. In the third experiment, we show that it is possible to predict search targets in natural scenes using pattern classifiers and classic computer vision features significantly above chance. The availability of compelling inferential algorithms could initiate a new generation of smart, gaze-controlled interfaces and wearable visual technologies that deduce from their users' eye movements the visual information for which they are looking. In a broader perspective, our study shows directions for efficient intent decoding from eye movements. HighlightsProviding a unified theoretical framework for intent decoding using eye movements.Proposing two new algorithms for search target inference from fixations.Studying the impact of target complexity in search performance and target inference.Sharing a large collection of code and data to promote future research in this area." ] }
1502.05491
1919365417
We address the problem of quantification, a supervised learning task whose goal is, given a class, to estimate the relative frequency (or prevalence) of the class in a dataset of unlabeled items. Quantification has several applications in data and text mining, such as estimating the prevalence of positive reviews in a set of reviews of a given product or estimating the prevalence of a given support issue in a dataset of transcripts of phone calls to tech support. So far, quantification has been addressed by learning a general-purpose classifier, counting the unlabeled items that have been assigned the class, and tuning the obtained counts according to some heuristics. In this article, we depart from the tradition of using general-purpose classifiers and use instead a supervised learning model for structured prediction, capable of generating classifiers directly optimized for the (multivariate and nonlinear) function used for evaluating quantification accuracy. The experiments that we have run on 5,500 binary high-dimensional datasets (averaging more than 14,000 documents each) show that this method is more accurate, more stable, and more efficient than existing state-of-the-art quantification methods.
The authors of @cite_7 compare many of the methods discussed above, and find that CC @math PCC @math ACC @math PACC (where @math means "underperforms"). The authors of @cite_17 also experimentally compare several of the methods discussed above, and find that CC @math PCC @math ACC @math PACC @math MS. They additionally propose a method (specific to linked data) that does not require the classification of individual items, but they find that it underperforms a robust classification-based quantification method such as MS. However, the experimental comparisons of @cite_7 and @cite_17 are both framed in terms of absolute error, which seems a sub-standard evaluation measure for this task (see ); additionally, the datasets they test on do not exhibit the severe imbalance typical of many binary text classification tasks, so it is not surprising that their results concerning MS are not confirmed by our experiments.
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "2158243283", "1967622058" ], "abstract": [ "Quantification is the name given to a novel machine learning task which deals with correctly estimating the number of elements of one class in a set of examples. The output of a quantifier is a real value, since training instances are the same as a classification problem, a natural approach is to train a classifier and to derive a quantifier from it. Some previous works have shown that just classifying the instances and counting the examples belonging to the class of interest classify count typically yields bad quantifiers, especially when the class distribution may vary between training and test. Hence, adjusted versions of classify count have been developed by using modified thresholds. However, previous works have explicitly discarded (without a deep analysis) any possible approach based on the probability estimations of the classifier. In this paper, we present a method based on averaging the probability estimations of a classifier with a very simple scaling that does perform reasonably well, showing that probability estimators for quantification capture a richer view of the problem than methods based on a threshold.", "The increasing availability of participatory web and social media presents enormous opportunities to study human relations and collective behaviors. Many applications involving decision making want to obtain certain generalized properties about the population in a network, such as the proportion of actors given a category, instead of the category of individuals. While data mining and machine learning researchers have developed many methods for link-based classification or relational learning, most are optimized to classify individual nodes in a network. In order to accurately estimate the prevalence of one class in a network, some quantification method has to be used. 
In this work, two kinds of approaches are presented: quantification based on classification or quantification based on link analysis. Extensive experiments are conducted on several representative network data, with interesting findings reported concerning efficacy and robustness of different quantification methods, providing insights to further quantify the ebb and flow of online collective behaviors at macro-level." ] }
1502.05491
1919365417
We address the problem of quantification, a supervised learning task whose goal is, given a class, to estimate the relative frequency (or prevalence) of the class in a dataset of unlabeled items. Quantification has several applications in data and text mining, such as estimating the prevalence of positive reviews in a set of reviews of a given product or estimating the prevalence of a given support issue in a dataset of transcripts of phone calls to tech support. So far, quantification has been addressed by learning a general-purpose classifier, counting the unlabeled items that have been assigned the class, and tuning the obtained counts according to some heuristics. In this article, we depart from the tradition of using general-purpose classifiers and use instead a supervised learning model for structured prediction, capable of generating classifiers directly optimized for the (multivariate and nonlinear) function used for evaluating quantification accuracy. The experiments that we have run on 5,500 binary high-dimensional datasets (averaging more than 14,000 documents each) show that this method is more accurate, more stable, and more efficient than existing state-of-the-art quantification methods.
The idea of using a learner that directly optimizes a loss function specific to quantification was first proposed, although not implemented, in @cite_41 , which indeed proposes using SVM @math to directly optimize KLD; the present paper is thus the direct realization of that proposal. The first published work that implements and tests the idea of directly optimizing a quantification-specific loss function is @cite_6 , whose authors propose variants of decision trees and decision forests that directly optimize a loss combining classification accuracy and quantification accuracy. At the time of going to print we have become aware of a related paper @cite_32 whose authors, following @cite_41 , use @math to perform quantification; differently from the present paper, and similarly to @cite_6 , they use an evaluation function that combines classification accuracy and quantification accuracy.
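For reference, the KLD objective mentioned above — the divergence between the true and predicted prevalence of the positive class — can be computed as follows. This is a sketch: the smoothing constant is an assumption for numerical safety, not a value taken from the cited work.

```python
import math

def kld(p_true, p_pred, eps=1e-9):
    """Binary Kullback-Leibler divergence between the true prevalence
    p_true and the predicted prevalence p_pred, smoothed away from the
    endpoints to avoid log(0)."""
    p = min(max(p_true, eps), 1 - eps)
    q = min(max(p_pred, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))
```

KLD is zero iff the two prevalences coincide and grows as the estimate drifts away, which is what makes it a natural quantification loss for a structured-prediction learner to target directly.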
{ "cite_N": [ "@cite_41", "@cite_32", "@cite_6" ], "mid": [ "2095734915", "2067977748", "" ], "abstract": [ "We examine the response to the recent natural disaster Hurricane Irene on Twitter.com. We collect over 65,000 Twitter messages relating to Hurricane Irene from August 18th to August 31st, 2011, and group them by location and gender. We train a sentiment classifier to categorize messages based on level of concern, and then use this classifier to investigate demographic differences. We report three principal findings: (1) the number of Twitter messages related to Hurricane Irene in directly affected regions peaks around the time the hurricane hits that region; (2) the level of concern in the days leading up to the hurricane's arrival is dependent on region; and (3) the level of concern is dependent on gender, with females being more likely to express concern than males. Qualitative linguistic variations further support these differences. We conclude that social media analysis provides a viable, real-time complement to traditional survey methods for understanding public perception towards an impending disaster.", "Real-world applications demand effective methods to estimate the class distribution of a sample. In many domains, this is more productive than seeking individual predictions. At a first glance, the straightforward conclusion could be that this task, recently identified as quantification, is as simple as counting the predictions of a classifier. However, due to natural distribution changes occurring in real-world problems, this solution is unsatisfactory. Moreover, current quantification models based on classifiers present the drawback of being trained with loss functions aimed at classification rather than quantification. Other recent attempts to address this issue suffer certain limitations regarding reliability, measured in terms of classification abilities. 
This paper presents a learning method that optimizes an alternative metric that combines simultaneously quantification and classification performance. Our proposal offers a new framework that allows the construction of binary quantifiers that are able to accurately estimate the proportion of positives, based on models with reliable classification abilities. HighlightsThis paper studies the first quantification-oriented learning approach.We implement the first learning method that optimizes a quantification metric.We propose a new metric that combines quantification and classification.We compare our proposal with current quantifiers on benchmark datasets.Our method is theoretically well-founded and offers competitive performance.", "" ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
Many previous systems providing data integrity rely on trusted components. Distributed file systems with cryptographic protection provide stronger notions of integrity and consistency; there are many examples of this, from early research prototypes like FARSITE @cite_10 or SiRiUS @cite_15 to production file systems today (e.g., IBM Spectrum Scale, http://www-03.ibm.com/systems/storage/spectrum/scale). However, they rely on trusted directory services for freshness. Such a trusted coordinator is often missing or considered impractical. Iris @cite_13 relies on a trusted gateway appliance, which mediates all requests between the clients and the untrusted cloud storage. Several recent systems ensure data integrity with the help of trusted hardware, such as CATS @cite_32 , which offers accountability based on an immutable public publishing medium, or A2M @cite_11 , which assumes an append-only memory. They all require some form of global synchronization, usually done by the trusted component, for critical metadata to ensure linearizability. In the absence of such communication, as assumed here, they cannot protect consistency and prevent replay attacks.
{ "cite_N": [ "@cite_10", "@cite_32", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2121133177", "2197404543", "100863554", "", "2121510533" ], "abstract": [ "Farsite is a secure, scalable file system that logically functions as a centralized file server but is physically distributed among a set of untrusted computers. Farsite provides file availability and reliability through randomized replicated storage; it ensures the secrecy of file contents with cryptographic techniques; it maintains the integrity of file and directory data with a Byzantine-fault-tolerant protocol; it is designed to be scalable by using a distributed hint mechanism and delegation certificates for pathname translations; and it achieves good performance by locally caching file data, lazily propagating file updates, and varying the duration and granularity of content leases. We report on the design of Farsite and the lessons we have learned by implementing much of that design.", "This article presents the design, implementation, and evaluation of CATS, a network storage service with strong accountability properties. CATS offers a simple web services interface that allows clients to read and write opaque objects of variable size. This interface is similar to the one offered by existing commercial Internet storage services. CATS extends the functionality of commercial Internet storage services by offering support for strong accountability. A CATS server annotates read and write responses with evidence of correct execution, and offers audit and challenge interfaces that enable clients to verify that the server is faithful. A faulty server cannot conceal its misbehavior, and evidence of misbehavior is independently verifiable by any participant. CATS clients are also accountable for their actions on the service. A client cannot deny its actions, and the server can prove the impact of those actions on the state views it presented to other clients. 
Experiments with a CATS prototype evaluate the cost of accountability under a range of conditions and expose the primary factors influencing the level of assurance and the performance of a strongly accountable storage server. The results show that strong accountability is practical for network storage systems in settings with strong identity and modest degrees of write-sharing. We discuss how the accountability concepts and techniques used in CATS generalize to other classes of network services.", "This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file level sharing. Key management and revocation is simple with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.", "", "Researchers have made great strides in improving the fault tolerance of both centralized and replicated systems against arbitrary (Byzantine) faults. However, there are hard limits to how much can be done with entirely untrusted components; for example, replicated state machines cannot tolerate more than a third of their replica population being Byzantine. In this paper, we investigate how minimal trusted abstractions can push through these hard limits in practical ways. We propose Attested Append-Only Memory (A2M), a trusted system facility that is small, easy to implement and easy to verify formally. 
A2M provides the programming abstraction of a trusted log, which leads to protocol designs immune to equivocation -- the ability of a faulty host to lie in different ways to different clients or servers -- which is a common source of Byzantine headaches. Using A2M, we improve upon the state of the art in Byzantine-fault tolerant replicated state machines, producing A2M-enabled protocols (variants of Castro and Liskov's PBFT) that remain correct (linearizable) and keep making progress (live) even when half the replicas are faulty, in contrast to the previous upper bound. We also present an A2M-enabled single-server shared storage protocol that guarantees linearizability despite server faults. We implement A2M and our protocols, evaluate them experimentally through micro- and macro-benchmarks, and argue that the improved fault tolerance is cost-effective for a broad range of uses, opening up new avenues for practical, more reliable services." ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
In CloudProof @cite_14 , an object-storage protection system with accountable and proof-based data integrity and consistency support, clients may verify the freshness of returned objects with the help of the data owner. Its auditing operation works in epochs and verifies operations on one object only with a certain probability and only at the end of an epoch. Moreover, the clients need to communicate directly with the owner of an object for establishing integrity and consistency.
{ "cite_N": [ "@cite_14" ], "mid": [ "1503242180" ], "abstract": [ "Several cloud storage systems exist today, but none of them provide security guarantees in their Service Level Agreements (SLAs). This lack of security support has been a major hurdle for the adoption of cloud services, especially for enterprises and cautious consumers. To fix this issue, we present CloudProof, a secure storage system specifically designed for the cloud. In CloudProof, customers can not only detect violations of integrity, write-serializability, and freshness, they can also prove the occurrence of these violations to a third party. This proof-based system is critical to enabling security guarantees in SLAs, wherein clients pay for a desired level of security and are assured they will receive a certain compensation in the event of cloud misbehavior. Furthermore, since CloudProof aims to scale to the size of large enterprises, we delegate as much work as possible to the cloud and use cryptographic tools to allow customers to detect and prove cloud misbehavior. Our evaluation of CloudProof indicates that its security mechanisms have a reasonable cost: they incur a latency overhead of only ∼15 on reads and writes, and reduce throughput by around 10 . We also achieve highly scalable access control, with membership management (addition and removal of members' permissions) for a large proprietary software with more than 5000 developers taking only a few seconds per month." ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
Cryptographic integrity guarantees are of increasing interest in many diverse domains: Verena @cite_27 , for example, is a recent enhancement for web applications that involve database queries and updates by multiple clients. It targets a patient database holding diagnostic data and treatment information. In contrast to VICOS, however, it relies on a trusted server that supplies hash values of data objects to clients during every operation.
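Authenticated data structures of the kind these systems build on typically reduce an integrity check to verifying a short proof against a single trusted digest. The following generic Merkle-tree sketch illustrates the idea; it is not the actual structure used by VICOS or Verena:

```python
import hashlib

def h(data):
    """SHA-256 digest used for all tree nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree over the leaf values, duplicating the last
    node when a level has odd length."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(leaf, proof, root):
    """Check a leaf against the trusted root using a proof given as
    (sibling_digest, sibling_is_left) pairs from leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

A client holding only the root can thus detect any tampering with an individual object, at proof size logarithmic in the number of objects.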
{ "cite_N": [ "@cite_27" ], "mid": [ "2518442919" ], "abstract": [ "Web applications rely on web servers to protect the integrity of sensitive information. However, an attacker gaining access to web servers can tamper with the data and query computation results, and thus serve corrupted web pages to the user. Violating the integrity of the web page can have serious consequences, affecting application functionality and decision-making processes. Worse yet, data integrity violation may affect physical safety, as in the case of medical web applications which enable physicians to assign treatment to patients based on diagnostic information stored at the web server. This paper presents Verena, a web application platform that provides end-to-end integrity guarantees against attackers that have full access to the web and database servers. In Verena, a client's browser can verify the integrity of a web page by verifying the results of queries on data stored at the server. Verena provides strong integrity properties such as freshness, completeness, and correctness for a common set of database queries, by relying on a small trusted computing base. In a setting where there can be many users with different write permissions, Verena allows a developer to specify an integrity policy for query results based on our notion of trust contexts, and then enforces this policy efficiently. We implemented and evaluated Verena on top of the Meteor framework. Our results show that Verena can support real applications with modest overhead." ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
In the multi-client model, Mazières and Shasha @cite_22 @cite_21 introduced the notion of fork-linearizability and implemented SUNDR, the first system to guarantee fork-linearizable views to all clients. It detects integrity and consistency violations among all clients that become aware of each other's operations. The SUNDR system uses messages of size @math for @math clients @cite_9 , which might be expensive. The description of the SUNDR prototype @cite_21 also claims to handle multiple files and directory trees; however, the protocol description and guarantees are stated only informally, so it remains unclear whether it achieves fork-linearizability under all circumstances.
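The core idea behind fork-linearizable protocols — a faulty server can fork clients' views, but can never rejoin them unnoticed — can be illustrated with a hash chain over the operation history. This is a toy sketch, far simpler than SUNDR's signed version structures:

```python
import hashlib

def chain(ops):
    """Running hash chain over an ordered operation history: each entry
    commits to the entire prefix before it."""
    h = b"\x00" * 32
    hashes = []
    for op in ops:
        h = hashlib.sha256(h + op.encode()).digest()
        hashes.append(h)
    return hashes

def consistent_prefix(seen, current):
    """A correct server only extends the history, so the chain a client
    saw earlier must be a prefix of the chain it sees now. Once the
    server equivocates, the two chains diverge at every later position."""
    return current[:len(seen)] == seen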
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_22" ], "mid": [ "2147500445", "2114349788", "" ], "abstract": [ "When data is stored on a faulty server that is accessed concurrently by multiple clients, the server may present inconsistent data to different clients. For example, the server might complete a write operation of one client, but respond with stale data to another client. Mazieres and Shasha (PODC 2002) introduced the notion of fork-consistency, also called fork-linearizability, which ensures that the operations seen by every client are linearizable and guarantees that if the server causes the views of two clients to differ in a single operation, they may never again see each other's updates after that without the server being exposed as faulty. In this paper, we improve the communication complexity of their fork-linearizable storage access protocol with n clients from Ω(n2) to O(n). We also prove that in every such protocol, a reader must wait for a concurrent writer. This explains a seeming limitation of their and of our improved protocol. Furthermore, we give novel characterizations of fork-linearizability and prove that it is neither stronger nor weaker than sequential consistency.", "SUNDR is a network file system designed to store data securely on untrusted servers. SUNDR lets clients detect any attempts at unauthorized file modification by malicious server operators or users. SUNDR's protocol achieves a property called fork consistency, which guarantees that clients can detect any integrity or consistency failures as long as they see each other's file modifications. An implementation is described that performs comparably with NFS (sometimes better and sometimes worse), while offering significantly stronger security.", "" ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
The SPORC system @cite_33 is a groupware collaboration service whose operations may conflict with each other, but can be made to commute by applying a technique called "operational transformation." Through this mechanism, different execution orders still converge to the same state; however, SPORC achieves only the weaker fork-* consistency.
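Operational transformation can be illustrated with the classic insert-insert case: when two clients insert concurrently, one operation's position is shifted so that applying the pair in either order converges. This is a toy sketch; SPORC's actual transformation functions are richer.

```python
def transform_insert(op_a, op_b):
    """Transform insert op_a = (pos, char, site) against a concurrently
    applied insert op_b, breaking position ties by site identifier."""
    pos_a, ch_a, site_a = op_a
    pos_b, ch_b, site_b = op_b
    if pos_b < pos_a or (pos_b == pos_a and site_b < site_a):
        return (pos_a + 1, ch_a, site_a)   # op_b landed before us: shift right
    return op_a

def apply_insert(doc, op):
    """Apply an insert operation to a string document."""
    pos, ch, _ = op
    return doc[:pos] + ch + doc[pos:]
```

Both application orders of two concurrent inserts yield the same document, which is exactly the convergence property the paragraph above describes.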
{ "cite_N": [ "@cite_33" ], "mid": [ "15308973" ], "abstract": [ "Cloud-based services are an attractive deployment model for user-facing applications like word processing and calendaring. Unlike desktop applications, cloud services allow multiple users to edit shared state concurrently and in real-time, while being scalable, highly available, and globally accessible. Unfortunately, these benefits come at the cost of fully trusting cloud providers with potentially sensitive and important data. To overcome this strict tradeoff, we present SPORC, a generic framework for building a wide variety of collaborative applications with untrusted servers. In SPORC, a server observes only encrypted data and cannot deviate from correct execution without being detected. SPORC allows concurrent, low-latency editing of shared state, permits disconnected operation, and supports dynamic access control even in the presence of concurrency. We demonstrate SPORC's flexibility through two prototype applications: a causally-consistent key-value store and a browser-based collaborative text editor. Conceptually, SPORC illustrates the complementary benefits of operational transformation (OT) and fork* consistency. The former allows SPORC clients to execute concurrent operations without locking and to resolve any resulting conflicts automatically. The latter prevents a misbehaving server from equivocating about the order of operations unless it is willing to fork clients into disjoint sets. Notably, unlike previous systems, SPORC can automatically recover from such malicious forks by leveraging OT's conflict resolution mechanism." ] }
1502.04496
2951444567
Cloud services have turned remote computation into a commodity and enable convenient online collaboration. However, they require that clients fully trust the service provider in terms of confidentiality, integrity, and availability. Towards reducing this dependency, this paper introduces a protocol for verification of integrity and consistency for cloud object storage (VICOS), which enables a group of mutually trusting clients to detect data-integrity and consistency violations for a cloud object-storage service. It aims at services where multiple clients cooperate on data stored remotely on a potentially misbehaving service. VICOS enforces the consistency notion of fork-linearizability, supports wait-free client semantics for most operations, and reduces the computation and communication overhead compared to previous protocols. VICOS is based in a generic way on any authenticated data structure. Moreover, its operations cover the hierarchical name space of a cloud object store, supporting a real-world interface and not only a simplistic abstraction. A prototype of VICOS that works with the key-value store interface of commodity cloud storage services has been implemented, and an evaluation demonstrates its advantage compared to existing systems.
The BST protocol @cite_24 supports an encrypted remote database hosted by an untrusted server that is accessed by multiple clients. Its consistency-checking algorithm allows some commuting client operations to proceed concurrently; COP and ACOP @cite_34 extend BST and also guarantee fork-linearizability for arbitrary services run by a Byzantine server, going beyond data storage services, and support wait-freedom for commuting operations. VICOS builds directly on COP, but improves efficiency by avoiding the local state copies at clients and by reducing the computation and communication overhead. The main advantage is that clients can remain offline between executing operations without stalling the protocol.
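Whether two operations commute — the property these protocols exploit to let clients proceed wait-free — can be decided syntactically for a simple key-value interface. This is a simplified sketch under assumed semantics (reads return values, writes do not); the actual protocols use a more general notion of commutativity:

```python
def commute(op1, op2):
    """Two key-value operations commute if applying them in either order
    yields the same state and the same return values: reads always
    commute with each other, and any pair touching different keys
    commutes; a read/write or write/write pair on the same key does not."""
    kind1, key1 = op1
    kind2, key2 = op2
    if kind1 == "read" and kind2 == "read":
        return True
    return key1 != key2
```

Under this test, commuting operations can be acknowledged immediately, while conflicting ones must wait for the consistency check.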
{ "cite_N": [ "@cite_24", "@cite_34" ], "mid": [ "2106797866", "2951479557" ], "abstract": [ "We introduce a new paradigm for outsourcing the durability property of a multi-client transactional database to an untrusted service provider. Specifically, we enable untrusted service providers to support transaction serialization, backup and recovery for clients, with full data confidentiality and correctness. Moreover, providers learn nothing about transactions (except their size and timing), thus achieving read and write access pattern privacy. We build a proof-of-concept implementation of this protocol for the MySQL database management system, achieving tens of transactions per second in a two-client scenario with full transaction privacy and guaranteed correctness. This shows the method is ready for production use, creating a novel class of secure database outsourcing models.", "A group of mutually trusting clients outsources a computation service to a remote server, which they do not fully trust and that may be subject to attacks. The clients do not communicate with each other and would like to verify the correctness of the remote computation and the consistency of the server's responses. This paper first presents the Commutative-Operation verification Protocol (COP) that ensures linearizability when the server is correct and preserves fork-linearizability in any other case. All clients that observe each other's operations are consistent, in the sense that their own operations and those operations of other clients that they see are linearizable. Second, this work extends COP through authenticated data structures to Authenticated COP, which allows consistency verification of outsourced services whose state is kept only remotely, by the server. 
This yields the first fork-linearizable consistency verification protocol for generic outsourced services that (1) relieves clients from storing the state, (2) supports wait-free client operations, and (3) handles sequences of arbitrary commutative operations." ] }
1502.04585
2951865872
The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of re-submission. In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition. We introduce a natural algorithm called "the Ladder" and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever.
A highly relevant recent work @cite_3 , which inspired us, studies a more general question: given a sequence of bounded functions @math over a domain @math , estimate the expectations of these functions @math over an unknown distribution @math given @math samples from this distribution. If we think of each function as expressing the loss of one classifier submitted to the leaderboard, then such an algorithm could in principle be used in our setting. The main result of @cite_3 is an algorithm that achieves maximum error [ O\left(\min\left\{\frac{(\log k)^{3/7}\,(\log |X|)^{1/7}}{n^{2/7}},\ \left(\frac{\log |X|\,\log(k)}{n}\right)^{1/4}\right\}\right) ] This bound readily implies a corresponding result for leaderboard accuracy, albeit worse than the one we show. One issue is that this algorithm requires the entire test set to be withheld, and not just the labels as is required in the Kaggle application. The bigger obstacle is that the algorithm is unfortunately not computationally efficient, and this is inherent: no computationally efficient algorithm can give non-trivial error on @math adaptively chosen functions, as was shown recently @cite_0 @cite_1 under a standard computational hardness assumption.
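The Ladder's release rule described in the abstract — publish a new leaderboard score only when a submission improves on the best so far by more than a step size, and round released values to the step grid — can be sketched as follows. The fixed `step` parameter is an assumption of this sketch; the parameter-free variant derives it from the data.

```python
def ladder(losses, step=0.01):
    """Ladder mechanism (sketch): given the holdout losses of successive
    submissions, release an updated score only on an improvement larger
    than `step`, rounded to the step grid; otherwise repeat the previous
    released value."""
    best = float("inf")
    released = []
    for loss in losses:
        if loss < best - step:
            best = round(loss / step) * step
        released.append(best)
    return released
```

Because tiny fluctuations below the step size never change the released score, a participant cannot extract fine-grained information about the holdout set by resubmitting near-identical classifiers.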
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_3" ], "mid": [ "2952550721", "2598809989", "2950541101" ], "abstract": [ "We show that, under a standard hardness assumption, there is no computationally efficient algorithm that given @math samples from an unknown distribution can give valid answers to @math adaptively chosen statistical queries. A statistical query asks for the expectation of a predicate over the underlying distribution, and an answer to a statistical query is valid if it is \"close\" to the correct expectation over the distribution. Our result stands in stark contrast to the well known fact that exponentially many statistical queries can be answered validly and efficiently if the queries are chosen non-adaptively (no query may depend on the answers to previous queries). Moreover, a recent work by shows how to accurately answer exponentially many adaptively chosen statistical queries via a computationally inefficient algorithm; and how to answer a quadratic number of adaptive queries via a computationally efficient algorithm. The latter result implies that our result is tight up to a linear factor in @math Conceptually, our result demonstrates that achieving statistical validity alone can be a source of computational intractability in adaptive settings. For example, in the modern large collaborative research environment, data analysts typically choose a particular approach based on previous findings. False discovery occurs if a research finding is supported by the data but not by the underlying distribution. While the study of preventing false discovery in Statistics is decades old, to the best of our knowledge our result is the first to demonstrate a computational barrier. 
In particular, our result suggests that the perceived difficulty of preventing false discovery in today's collaborative research environment may be inherent.", "", "A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of @math adaptively chosen functions on an unknown distribution given @math random samples. We show that, surprisingly, there is a way to estimate an exponential in @math number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question." ] }
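The adaptive-overfitting problem that motivates this line of work is easy to reproduce numerically. The sketch below is a toy illustration under our own assumptions (holdout size `n`, number of submissions `k`, and random-guess submissions are all illustrative choices, not taken from the cited papers): even submissions whose true accuracy is exactly 1/2 inflate the best score a naive leaderboard reports.

```python
import numpy as np

# Toy illustration of overfitting to a fixed holdout.
rng = np.random.default_rng(0)
n, k = 100, 1000
labels = rng.integers(0, 2, n)                 # holdout labels
# every submission guesses at random, so its true accuracy is 0.5
scores = [(rng.integers(0, 2, n) == labels).mean() for _ in range(k)]
best = max(scores)
# a naive leaderboard reports the best empirical score, which
# overestimates the true accuracy of every submission
print(best)  # noticeably above 0.5
```

The average empirical score stays near 1/2, but the maximum over many submissions drifts far above it, which is exactly the estimation error a reliable leaderboard must control.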
1502.04585
2951865872
The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of re-submission. In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition. We introduce a natural algorithm called "the Ladder" and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever.
Matching this hardness result, there is a computationally efficient algorithm in @cite_3 that achieves an error bound of @math , which implies a bound on leaderboard accuracy that is worse than ours for all @math . They also give an algorithm (called EffectiveRounds) with accuracy @math when the number of "rounds of adaptivity" is at most @math , while we do not have a bound on @math in our setting better than @math . The parameter @math corresponds to the depth of the adaptive tree we define in the proof of ub . While we bound the size of the tree, the depth could be as large as @math . Their proof technique relies on sample splitting, and a similar argument could be used to prove our upper bound. However, our argument does not require sample splitting, and this is very important for the practical applicability of the algorithm.
{ "cite_N": [ "@cite_3" ], "mid": [ "2950541101" ], "abstract": [ "A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of @math adaptively chosen functions on an unknown distribution given @math random samples. We show that, surprisingly, there is a way to estimate an exponential in @math number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question." ] }
1502.04585
2951865872
The organizer of a machine learning competition faces the problem of maintaining an accurate leaderboard that faithfully represents the quality of the best submission of each competing team. What makes this estimation problem particularly challenging is its sequential and adaptive nature. As participants are allowed to repeatedly evaluate their submissions on the leaderboard, they may begin to overfit to the holdout data that supports the leaderboard. Few theoretical results give actionable advice on how to design a reliable leaderboard. Existing approaches therefore often resort to poorly understood heuristics such as limiting the bit precision of answers and the rate of re-submission. In this work, we introduce a notion of "leaderboard accuracy" tailored to the format of a competition. We introduce a natural algorithm called "the Ladder" and demonstrate that it simultaneously supports strong theoretical guarantees in a fully adaptive model of estimation, withstands practical adversarial attacks, and achieves high utility on real submission files from an actual competition hosted by Kaggle. Notably, we are able to sidestep a powerful recent hardness result for adaptive risk estimation that rules out algorithms such as ours under a seemingly very similar notion of accuracy. On a practical note, we provide a completely parameter-free variant of our algorithm that can be deployed in a real competition with no tuning required whatsoever.
We sidestep the hardness result by going to a more specialized notion of accuracy that is surprisingly still sufficient for the leaderboard application. However, it does not resolve the more general question raised in @cite_3 . In particular, we do not always provide a loss estimate for each submitted classifier, but only for those that made a significant improvement over the previous best. This seemingly innocuous change is enough to circumvent the aforementioned hardness results.
{ "cite_N": [ "@cite_3" ], "mid": [ "2950541101" ], "abstract": [ "A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries, from the use of sophisticated validation techniques, to deep statistical methods for controlling the false discovery rate in multiple hypothesis testing. However, there is a fundamental disconnect between the theoretical results and the practice of data analysis: the theory of statistical inference assumes a fixed collection of hypotheses to be tested, or learning algorithms to be applied, selected non-adaptively before the data are gathered, whereas in practice data is shared and reused with hypotheses and new analyses being generated on the basis of data exploration and the outcomes of previous analyses. In this work we initiate a principled study of how to guarantee the validity of statistical inference in adaptive data analysis. As an instance of this problem, we propose and investigate the question of estimating the expectations of @math adaptively chosen functions on an unknown distribution given @math random samples. We show that, surprisingly, there is a way to estimate an exponential in @math number of expectations accurately even if the functions are chosen adaptively. This gives an exponential improvement over standard empirical estimators that are limited to a linear number of estimates. Our result follows from a general technique that counter-intuitively involves actively perturbing and coordinating the estimates, using techniques developed for privacy preservation. We give additional applications of this technique to our question." ] }
1502.04548
2167697140
This paper presents a novel method for controlling teams of unmanned aerial vehicles using Stochastic Optimal Control (SOC) theory. The approach consists of a centralized high-level planner that computes optimal state trajectories as velocity sequences, and a platform-specific low-level controller which ensures that these velocity sequences are met. The planning task is expressed as a centralized path-integral control problem, for which optimal control computation corresponds to a probabilistic inference problem that can be solved by efficient sampling methods. Through simulation we show that our SOC approach (a) has significant benefits compared to deterministic control and other SOC methods in multimodal problems with noise-dependent optimal solutions, (b) is capable of controlling a large number of platforms in real-time, and (c) yields collective emergent behaviour in the form of flight formations. Finally, we show that our approach works for real platforms, by controlling a team of three quadrotors in outdoor conditions.
Stochastic optimal control is mostly used for UAV control in its simplest form, assuming a linear model perturbed by additive Gaussian noise and subject to quadratic costs (LQG), e.g. @cite_20 . While LQG can successfully perform simple actions like hovering, executing more complex actions requires considering additional corrections for aerodynamic effects such as induced power or blade flapping @cite_15 . These approaches are mainly designed for accurate trajectory control and assume a given desired state trajectory that the controller transforms into motor commands.
{ "cite_N": [ "@cite_15", "@cite_20" ], "mid": [ "2165771902", "2138158112" ], "abstract": [ "Abstract Quadrotor helicopters continue to grow in popularity for unmanned aerial vehicle applications. However, accurate dynamic models for deriving controllers for moderate to high speeds have been lacking. This work presents theoretical models of quadrotor aerodynamics with non-zero free-stream velocities based on helicopter momentum and blade element theory, validated with static tests and flight data. Controllers are derived using these models and implemented on the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), demonstrating significant improvements over existing methods. The design of the STARMAC platform is described, and flight results are presented demonstrating improved accuracy over commercially available quadrotors.", "To investigate and develop unmanned vehicle systems technologies for autonomous multiagent mission platforms, we are using an indoor multivehicle testbed called real-time indoor autonomous vehicle test environment (RAVEN) to study long-duration multivehicle missions in a controlled environment. Normally, demonstrations of multivehicle coordination and control technologies require that multiple human operators simultaneously manage flight hardware, navigation, control, and vehicle tasking. However, RAVEN simplifies all of these issues to allow researchers to focus, if desired, on the algorithms associated with high-level tasks. Alternatively, RAVEN provides a facility for testing low-level control algorithms on both fixed- and rotary-wing aerial platforms. RAVEN is also being used to analyze and implement techniques for embedding the fleet and vehicle health state (for instance, vehicle failures, refueling, and maintenance) into UAV mission planning. These characteristics facilitate the rapid prototyping of new vehicle configurations and algorithms without requiring a redesign of the vehicle hardware. 
This article describes the main components and architecture of RAVEN and presents recent flight test results illustrating the applications discussed above." ] }
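As a concrete illustration of the LQG/LQR setting mentioned above, the sketch below stabilizes one axis of a hovering vehicle modeled as a discrete-time double integrator. The model, cost weights, and horizon are our own illustrative choices, not taken from the cited controllers, and the noise-free case reduces LQG to plain LQR.

```python
import numpy as np

# Double-integrator model of one axis of a hovering vehicle:
# state x = [position, velocity], control u = commanded acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([10.0, 1.0])  # penalize position and velocity error
R = np.array([[0.1]])     # penalize control effort

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon LQR gains via the backward Riccati recursion."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains in forward-time order

gains = lqr_gains(A, B, Q, R, horizon=100)
x = np.array([[1.0], [0.0]])  # start 1 m away from the hover point
for K in gains:
    u = -K @ x                 # linear state feedback
    x = A @ x + B @ u
print(abs(x[0, 0]))            # position error driven near zero
```

This is the "simplest form" the paragraph refers to: a linear model with quadratic cost; the cited corrections for induced power and blade flapping would enter as refinements of `A` and `B`.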
1502.04548
2167697140
This paper presents a novel method for controlling teams of unmanned aerial vehicles using Stochastic Optimal Control (SOC) theory. The approach consists of a centralized high-level planner that computes optimal state trajectories as velocity sequences, and a platform-specific low-level controller which ensures that these velocity sequences are met. The planning task is expressed as a centralized path-integral control problem, for which optimal control computation corresponds to a probabilistic inference problem that can be solved by efficient sampling methods. Through simulation we show that our SOC approach (a) has significant benefits compared to deterministic control and other SOC methods in multimodal problems with noise-dependent optimal solutions, (b) is capable of controlling a large number of platforms in real-time, and (c) yields collective emergent behaviour in the form of flight formations. Finally, we show that our approach works for real platforms, by controlling a team of three quadrotors in outdoor conditions.
In outdoor conditions, motion capture is difficult and the Global Positioning System (GPS) is used instead. Existing control approaches are typically based either on Reynolds flocking @cite_13 @cite_11 @cite_10 @cite_1 or on flight formation @cite_8 @cite_0 . In Reynolds flocking, each agent is considered a point mass that obeys simple, distributed rules: separate from neighbors, align with the average heading of neighbors, and steer towards the neighborhood centroid to keep cohesion. Flight formation control is typically modeled using graphs, where every node is an agent that can exchange information with all or several other agents. Velocity and/or position coordination is usually achieved using consensus algorithms.
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_1", "@cite_0", "@cite_10", "@cite_11" ], "mid": [ "2038650746", "2946345173", "2150312211", "1946833474", "2171541671", "" ], "abstract": [ "Micro Unmanned Aerial Vehicles (UAVs) such as quadrocopters have gained great popularity over the last years, both as a research platform and in various application fields. However, some complex application scenarios call for the formation of swarms consisting of multiple drones. In this paper a platform for the creation of such swarms is presented. It is based on commercially available quadrocopters enhanced with on-board processing and communication units enabling full autonomy of individual drones. Furthermore, a generic ground control station is presented that serves as integration platform. It allows the seamless coordination of different kinds of sensor platforms.", "In the last decade the development and control of Unmanned Aerial Vehicles (UAVs) has attracted a lot of interest. Both researchers and companies have a growing interest in improving this type of vehicle given their many civilian and military applications. This book presents the state of the art in the area of UAV Flight Formation. The coordination and robust consensus approaches are presented in detail as well as formation flight control strategies which are validated in experimental platforms. It aims at helping students and academics alike to better understand what coordination and flight formation control can make possible. Several novel methods are presented: - controllability and observability of multi-agent systems; - robust consensus; - flight formation control; - stability of formations over noisy networks; which generate solutions of guaranteed performance for UAV Flight Formation.", "The aggregate motion of a flock of birds, a herd of land animals, or a school of fish is a beautiful and familiar part of the natural world. But this type of complex motion is rarely seen in computer animation. 
This paper explores an approach based on simulation as an alternative to scripting the paths of each bird individually. The simulated flock is an elaboration of a particle systems, with the simulated birds being the particles. The aggregate motion of the simulated flock is created by a distributed behavioral model much like that at work in a natural flock; the birds choose their own course. Each simulated bird is implemented as an independent actor that navigates according to its local perception of the dynamic environment, the laws of simulated physics that rule its motion, and a set of behaviors programmed into it by the \"animator.\" The aggregate motion of the simulated flock is the result of the dense interaction of the relatively simple behaviors of the individual simulated birds.", "Formation control problems for quadrotor swarm systems are investigated. Firstly, for the formation control, the quadrotor is regarded as a point-mass system, and the dynamics of each quadrotor is modeled by double integrator. Then a consensus based formation protocol is presented for quadrotor swarm systems to achieve the desired time-varying formation. Formation control problems for swarm systems are transformed into consensus problems. Necessary and sufficient conditions for quadrotor swarm systems to achieve time-varying formations are proposed. Moreover, a quadrotor formation platform which consists of three quadrotors is introduced. Finally, simulation and experimental examples are presented for the quadrotor swarm system to achieve a time-varying formation respectively.", "We address formation control for a team of quadrotors in which the robots follow a specified group trajectory while safely changing the shape of the formation according to specifications. 
The formation is prescribed by shape vectors which dictate the relative separations and bearings between the robots, while the group trajectory is specified as the desired trajectory of a leader or a virtual robot in the group. Each robot plans its trajectory independently based on its local information of neighboring robots which includes both the neighbor's planned trajectory and an estimate of its state. We show that the decentralized trajectory planners (a) result in consensus on the planned trajectory for predefined shapes and (b) achieve safe reconfiguration when changing shapes.", "" ] }
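The three Reynolds rules described above can be sketched as a single update step. All weights, radii, and initial conditions below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def flocking_step(pos, vel, dt=0.1, r_sep=1.0,
                  w_sep=1.0, w_align=0.5, w_coh=0.2):
    """One step of the three Reynolds rules for N 2-D point-mass agents:
    separation, alignment, cohesion."""
    n = len(pos)
    acc = np.zeros_like(pos)
    centroid = pos.mean(axis=0)
    mean_vel = vel.mean(axis=0)
    for i in range(n):
        diff = pos[i] - pos                    # vectors away from others
        dist = np.linalg.norm(diff, axis=1)
        close = (dist > 0) & (dist < r_sep)
        if close.any():                        # separation
            acc[i] += w_sep * (diff[close] / dist[close, None] ** 2).sum(axis=0)
        acc[i] += w_align * (mean_vel - vel[i])  # alignment
        acc[i] += w_coh * (centroid - pos[i])    # cohesion
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(10, 2))
vel = np.zeros((10, 2))
start_spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
for _ in range(300):
    pos, vel = flocking_step(pos, vel)
end_spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
print(start_spread, end_spread)  # the flock contracts into a cohesive group
```

Cohesion and alignment pull the scattered agents into a compact group, while separation keeps them from collapsing onto one point, which is the qualitative emergent behaviour the paragraph describes.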
1502.04748
1769744536
A complete set of filters @math for the optimal-depth @math -input sorting network problem is such that if there exists an @math -input sorting network of depth @math then there exists one of the form @math for some @math . Previous work on the topic presents a method for finding complete set of filters @math and @math that consists only of networks of depths one and two respectively, whose outputs are minimal and representative up to permutation and reflection. We present a novel practical approach for finding a complete set of filters @math containing only networks of depth three whose outputs are minimal and representative up to permutation and reflection. In previous work, we have developed a highly efficient algorithm for finding extremal sets ( i.e. outputs of comparator networks; itemsets; ) up to permutation. In this paper we present a modification to this algorithm that identifies the representative itemsets up to permutation and reflection. Hence, the here presented practical approach is the successful combination of known theory and practice that we apply to the domain of sorting networks. For all @math , we empirically compute the complete set of filters @math of the representative minimal up to permutation and reflection @math -input networks of depth three.
Parberry @cite_1 presented a computer-assisted proof of the minimal depth of nine- and ten-input sorting networks. He significantly reduced the number of candidate networks for the first two levels, compared to the naive approach, by exploiting symmetries of the networks (referred to as the first and second normal forms @cite_1 ).
{ "cite_N": [ "@cite_1" ], "mid": [ "2067081749" ], "abstract": [ "It is demonstrated that there is no nine-input sorting network of depth six. The proof was obtained by executing on a supercomputer a branch-and-bound algorithm which constructs and tests a critical subset of all possible candidates. Such proofs can be classified as experimental science, rather than mathematics. In keeping with the paradigms of experimental science, a high-level description of the experiment and analysis of the result are given." ] }
1502.04748
1769744536
A complete set of filters @math for the optimal-depth @math -input sorting network problem is such that if there exists an @math -input sorting network of depth @math then there exists one of the form @math for some @math . Previous work on the topic presents a method for finding complete set of filters @math and @math that consists only of networks of depths one and two respectively, whose outputs are minimal and representative up to permutation and reflection. We present a novel practical approach for finding a complete set of filters @math containing only networks of depth three whose outputs are minimal and representative up to permutation and reflection. In previous work, we have developed a highly efficient algorithm for finding extremal sets ( i.e. outputs of comparator networks; itemsets; ) up to permutation. In this paper we present a modification to this algorithm that identifies the representative itemsets up to permutation and reflection. Hence, the here presented practical approach is the successful combination of known theory and practice that we apply to the domain of sorting networks. For all @math , we empirically compute the complete set of filters @math of the representative minimal up to permutation and reflection @math -input networks of depth three.
The importance of finding the @math and @math sets of filters is easily seen from Bundala's algorithm for finding sorting networks of optimal depth, and is explicitly stated in the future-work section of @cite_6 . Bundala's algorithm can easily be adapted to use prefixes of exactly three layers as the entry point to the SAT encoding of the problem, given that the algorithm as presented @cite_6 uses exactly two layers as the entry point. Such a reduction of the search space would thus result in a faster SAT-solver-based algorithm for finding sorting networks of optimal depth.
{ "cite_N": [ "@cite_6" ], "mid": [ "1726079515" ], "abstract": [ "We prove depth optimality of sorting networks from \"The Art of Computer Programming\".Sorting networks posses symmetry that can be used to generate a few representatives.These representatives can be efficiently encoded using regular expressions.We construct SAT formulas whose unsatisfiability is sufficient to show optimality.Resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ź 16 inputs, quoting optimality for n ź 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ź n ź 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ź n ź 16 inputs. Exploiting symmetry, we construct a small set R n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R n . For each network in R n , we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ź 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
1502.04748
1769744536
A complete set of filters @math for the optimal-depth @math -input sorting network problem is such that if there exists an @math -input sorting network of depth @math then there exists one of the form @math for some @math . Previous work on the topic presents a method for finding complete set of filters @math and @math that consists only of networks of depths one and two respectively, whose outputs are minimal and representative up to permutation and reflection. We present a novel practical approach for finding a complete set of filters @math containing only networks of depth three whose outputs are minimal and representative up to permutation and reflection. In previous work, we have developed a highly efficient algorithm for finding extremal sets ( i.e. outputs of comparator networks; itemsets; ) up to permutation. In this paper we present a modification to this algorithm that identifies the representative itemsets up to permutation and reflection. Hence, the here presented practical approach is the successful combination of known theory and practice that we apply to the domain of sorting networks. For all @math , we empirically compute the complete set of filters @math of the representative minimal up to permutation and reflection @math -input networks of depth three.
--- we took an existing algorithm for finding the itemsets within a dataset (a collection of itemsets) that are minimal up to permutation. We present a modification, linear in time and space in the number of itemsets, that finds the ones which are minimal up to permutation and reflection. --- this is a direct improvement of Bundala's technique of not considering all of the @math inputs to determine that no @math -input sorting network of depth @math exists. We go one step further and find the itemsets that are minimal up to permutation and reflection after applying the input-set reduction (described in Experiment 3 of @cite_6 ). --- we experimentally evaluated the modified algorithm to find the three-layered @math -input comparator networks whose outputs (itemsets) are minimal up to permutation and reflection. The set @math is generated by applying all network levels to @math and then keeping the itemsets that are minimal up to permutation and reflection, whereas @math is derived by applying all levels to all itemsets in the set @math and then reducing in the same way.
{ "cite_N": [ "@cite_6" ], "mid": [ "1726079515" ], "abstract": [ "We prove depth optimality of sorting networks from \"The Art of Computer Programming\".Sorting networks posses symmetry that can be used to generate a few representatives.These representatives can be efficiently encoded using regular expressions.We construct SAT formulas whose unsatisfiability is sufficient to show optimality.Resulting algorithm is orders of magnitude faster than prior work on small instances. We solve a 40-year-old open problem on depth optimality of sorting networks. In 1973, Donald E. Knuth detailed sorting networks of the smallest depth known for n ź 16 inputs, quoting optimality for n ź 8 (Volume 3 of \"The Art of Computer Programming\"). In 1989, Parberry proved optimality of networks with 9 ź n ź 10 inputs. We present a general technique for obtaining such results, proving optimality of the remaining open cases of 11 ź n ź 16 inputs. Exploiting symmetry, we construct a small set R n of two-layer networks such that: if there is a depth-k sorting network on n inputs, then there is one whose first layers are in R n . For each network in R n , we construct a propositional formula whose satisfiability is necessary for the existence of a depth-k sorting network. Using an off-the-shelf SAT solver we prove optimality of the sorting networks listed by Knuth. For n ź 10 inputs, our algorithm is orders of magnitude faster than prior ones." ] }
1502.04744
1489980550
Head Mounted Displays (HMDs) allow users to experience virtual reality with a great level of immersion. However, even simple physical tasks like drinking a beverage can be difficult and awkward while in a virtual reality experience. We explore mixed reality renderings that selectively incorporate the physical world into the virtual world for interactions with physical objects. We conducted a user study comparing four rendering techniques that balances immersion in a virtual world with ease of interaction with the physical world. Finally, we discuss the pros and cons of each approach, suggesting guidelines for future rendering techniques that bring physical objects into virtual reality.
Since its inception, a large body of VR work has explored incorporating virtual models of physical objects into virtual experiences. For instance, a low-fidelity model of a user's hands could be captured using data input gloves @cite_8 . Previous work has also explored approaches for directly merging the virtual world with the physical world @cite_0 . Augmented Reality (AR) overlays virtual objects into the physical world, and has a rich history of use in mobile phones and HMDs (e.g., @cite_6 ). In contrast, our work is aligned with Augmented Virtuality (AV), where virtual reality is enhanced with parts of the physical world, grounding the experience in the virtual world. Previous work in AV has focused on collaborative applications including displaying real world video on virtual office windows @cite_9 or displaying group communication around a virtual table @cite_5 . More recent work has explored physical depth based renderings of a user's hands in VR for productivity applications @cite_4 . In contrast, we focus on peripheral physical interactions, exploring a design space of rendering techniques that selectively show aspects of the physical world, reinforcing immersion while minimizing distraction.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_6", "@cite_0", "@cite_5" ], "mid": [ "", "2142859885", "1544132830", "2072903758", "2126514505", "" ], "abstract": [ "", "Manipulation in immersive virtual environments is difficult partly because users must do without the haptic contact with real objects they rely on in the real world to orient themselves and their manipulanda. To compensate for this lack, we propose exploiting the one real object every user has in a virtual environment, his body. We present a unified framework for virtual-environment interaction based on proprioception, a person's sense of the position and orientation of his body and limbs. We describe three forms of body-relative interaction: • Direct manipulation—ways to use body sense to help control manipulation • Physical mnemonics—ways to store recall information relative to the body • Gestural actions—ways to use body-relative actions to issue commands Automatic scaling is a way to bring objects instantly within reach so that users can manipulate them using proprioceptive cues. Several novel virtual interaction techniques based upon automatic scaling and our proposed framework of proprioception allow a user to interact with a virtual world intuitively, efficiently, precisely, and lazily. We report the results of both informal user trials and formal user studies of the usability of the body-relative interaction techniques presented.", "This work presents a system for creating augmented virtuality. We present a tool and methodology that can be used to create virtual worlds that are augmented by video textures taken of real world objects. These textures are images automatically extracted from images of real world scenes. The idea is to construct and update, in real time, a representation of the salient and relevant features of the real world. 
This idea has the advantage of constructing a virtual world that has the relevant data of the real world, but maintaining the flexibility of a virtual world. One advantage of the virtual-real world representation is that it is not dependent on physical location and can be manipulated in a way not subject to the temporal, spatial, and physical constraints found in the real world.", "We describe a design approach, Tangible Augmented Reality, for developing face-to-face collaborative Augmented Reality (AR) interfaces. Tangible Augmented Reality combines Augmented Reality techniques with Tangible User Interface elements to create interfaces in which users can interact with spatial data as easily as real objects. Tangible AR interfaces remove the separation between the real and virtual worlds, and so enhance natural face-to-face communication. We present several examples of Tangible AR interfaces and results from a user study that compares communication in a collaborative AR interface to more traditional approaches. We find that in a collaborative AR interface people use behaviours that are more similar to unmediated face-to-face collaboration than in a projection screen interface.", "In this paper we discuss augmented reality (AR) displays in a general sense, within the context of a reality-virtuality (RV) continuum, encompassing a large class of 'mixed reality' (MR) displays, which also includes augmented virtuality (AV). MR displays are defined by means of seven examples of existing display concepts in which real objects and virtual objects are juxtaposed. Essential factors which distinguish different MR display systems from each other are presented, first by means of a table in which the nature of the underlying scene, how it is viewed, and the observer's reference to it are compared, and then by means of a three dimensional taxonomic framework comprising: extent of world knowledge, reproduction fidelity, and extent of presence metaphor. 
A principal objective of the taxonomy is to clarify terminology issues and to provide a framework for classifying research across different disciplines.", "" ] }
1502.04390
2951037516
Parameter-specific adaptive learning rate methods are computationally efficient ways to reduce the ill-conditioning problems encountered when training large deep networks. Following recent work that strongly suggests that most of the critical points encountered when training such networks are saddle points, we find how considering the presence of negative eigenvalues of the Hessian could help us design better suited adaptive learning rate schemes. We show that the popular Jacobi preconditioner has undesirable behavior in the presence of both positive and negative curvature, and present theoretical and empirical evidence that the so-called equilibration preconditioner is comparatively better suited to non-convex problems. We introduce a novel adaptive learning rate scheme, called ESGD, based on the equilibration preconditioner. Our experiments show that ESGD performs as well or better than RMSProp in terms of convergence speed, always clearly improving over plain stochastic gradient descent.
A recent revival of interest in adaptive learning rates was started by AdaGrad. AdaGrad collects information from the gradients across several parameter updates to tune the learning rate. This gives us the diagonal preconditioning matrix @math , which relies on the sum of gradients @math at each timestep @math . @cite_5 relies strongly on convexity to justify this method, which makes its application to neural networks difficult from a theoretical perspective. RMSProp and AdaDelta were follow-up methods, introduced as practical adaptive learning rate methods for training large neural networks. Although RMSProp has been shown to work very well, there is little understanding of its success in practice. Preconditioning might be a good framework for gaining a better understanding of such adaptive learning rate methods.
{ "cite_N": [ "@cite_5" ], "mid": [ "2146502635" ], "abstract": [ "We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms." ] }
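The AdaGrad mechanism summarized in the related-work passage above — a diagonal preconditioner built from the running sum of squared gradients — can be sketched on a single scalar parameter as follows. This is a minimal illustrative sketch, not code from the cited work; the function name, learning rate, and epsilon are assumptions.

```python
import math

def adagrad_step(theta, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update on a scalar parameter: accumulate the squared
    gradient, then scale the step by the inverse root of the accumulator."""
    accum = accum + grad * grad  # running sum of g_t^2 (the diagonal of G_t)
    theta = theta - lr * grad / (math.sqrt(accum) + eps)
    return theta, accum

# Minimize f(theta) = 0.5 * theta^2 (whose gradient is theta), starting at 5.0.
theta, accum = 5.0, 0.0
for _ in range(500):
    theta, accum = adagrad_step(theta, theta, accum)
print(0.0 < theta < 5.0)  # True: the iterate decreases monotonically toward 0
```

Because the accumulator only grows, the per-parameter step size shrinks over time; this is exactly the behavior the follow-up methods mentioned above (RMSProp, AdaDelta) replace with an exponential moving average.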
1502.04275
2951341166
In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1% in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4% over the current state-of-the-art, demonstrating the power of our approach.
In the past years, a variety of segmentation algorithms that exploit object detections as a top-down cue have been explored. The standard approach has been to use detection features as unary potentials in an MRF @cite_30 , or as candidate bounding boxes for holistic MRFs @cite_39 @cite_10 . In @cite_29 , segmentation within the detection boxes has been performed using a GrabCut method. In @cite_25 , object segmentations are found by aligning the masks obtained from Poselets @cite_25 @cite_34 .
{ "cite_N": [ "@cite_30", "@cite_29", "@cite_39", "@cite_34", "@cite_10", "@cite_25" ], "mid": [ "2161236525", "2039507552", "2137881638", "2127251585", "1610707153", "1864464506" ], "abstract": [ "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "Template-based object detectors such as the deformable parts model of [11] achieve state-of-the-art performance for a variety of object categories, but are still outperformed by simpler bag-of-words models for highly flexible objects such as cats and dogs. In these cases we propose to use the template-based model to detect a distinctive part for the class, followed by detecting the rest of the object via segmentation on image specific information learnt from that part. This approach is motivated by two ob- servations: (i) many object classes contain distinctive parts that can be detected very reliably by template-based detec- tors, whilst the entire object cannot; (ii) many classes (e.g. animals) have fairly homogeneous coloring and texture that can be used to segment the object once a sample is provided in an image. 
We show quantitatively that our method substantially outperforms whole-body template-based detectors for these highly deformable object categories, and indeed achieves accuracy comparable to the state-of-the-art on the PASCAL VOC competition, which includes other models such as bag-of-words.", "In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks.", "We present a new framework in which image segmentation, figure ground organization, and object detection all appear as the result of solving a single grouping problem. This framework serves as a perceptual organization stage that integrates information from low-level image cues with that of high-level part detectors. Pixels and parts each appear as nodes in a graph whose edges encode both affinity and ordering relationships. We derive a generalized eigen-problem from this graph and read off an interpretation of the image from the solution eigenvectors. 
Combining an off-the-shelf top-down part-based person detector with our low-level cues and grouping formulation, we demonstrate improvements to object detection and segmentation.", "Computer vision algorithms for individual tasks such as object recognition, detection and segmentation have shown impressive results in the recent past. The next challenge is to integrate all these algorithms and address the problem of scene understanding. This paper is a step towards this goal. We present a probabilistic framework for reasoning about regions, objects, and their attributes such as object class, location, and spatial extent. Our model is a Conditional Random Field defined on pixels, segments and objects. We define a global energy function for the model, which combines results from sliding window detectors, and low-level pixel-based unary and pairwise relations. One of our primary contributions is to show that this energy function can be solved efficiently. Experimental results show that our model achieves significant improvement over the baseline methods on CamVid and PASCAL VOC datasets.", "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. 
The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009." ] }
1502.04275
2951341166
In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1% in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4% over the current state-of-the-art, demonstrating the power of our approach.
There have been a few approaches to use segmentation to improve object detection. @cite_24 casts votes for the object's location by using a Hough transform with a set of regions. @cite_32 uses DPM to find a rough object location and refines it according to color information and occlusion boundaries. In @cite_9 , segmentation is used to mask out the background inside the detection, resulting in improved performance. Segmentation and detection have also been addressed in a joint formulation in @cite_16 by combining shape information obtained via DPM parts as well as color and boundary cues.
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_16", "@cite_32" ], "mid": [ "2103897297", "", "2083542343", "2056933870" ], "abstract": [ "This paper presents a unified framework for object detection, segmentation, and classification using regions. Region features are appealing in this context because: (1) they encode shape and scale information of objects naturally; (2) they are only mildly affected by background clutter. Regions have not been popular as features due to their sensitivity to segmentation errors. In this paper, we start by producing a robust bag of overlaid regions for each image using , CVPR 2009. Each region is represented by a rich set of image cues (shape, color and texture). We then learn region weights using a max-margin framework. In detection and segmentation, we apply a generalized Hough voting scheme to generate hypotheses of object locations, scales and support, followed by a verification classifier and a constrained segmenter on each hypothesis. The proposed approach significantly outperforms the state of the art on the ETHZ shape database(87.1 average detection rate compared to 's 67.2 ), and achieves competitive performance on the Caltech 101 database.", "", "We formulate a layered model for object detection and image segmentation. We describe a generative probabilistic model that composites the output of a bank of object detectors in order to define shape masks and explain the appearance, depth ordering, and labels of all pixels in an image. Notably, our system estimates both class labels and object instance labels. Building on previous benchmark criteria for object detection and image segmentation, we define a novel score that evaluates both class and instance segmentation. 
We evaluate our system on the PASCAL 2009 and 2010 segmentation challenge data sets and show good test results with state-of-the-art performance in several categories, including segmenting humans.", "In this paper, we propose an approach to accurately localize detected objects. The goal is to predict which features pertain to the object and define the object extent with segmentation or bounding box. Our initial detector is a slight modification of the DPM detector by , which often reduces confusion with background and other objects but does not cover the full object. We then describe and evaluate several color models and edge cues for local predictions, and we propose two approaches for localization: learned graph cut segmentation and structural bounding box prediction. Our experiments on the PASCAL VOC 2010 dataset show that our approach leads to accurate pixel assignment and large improvement in bounding box overlap, sometimes leading to large overall improvement in detection accuracy." ] }
1502.04275
2951341166
In this paper, we propose an approach that exploits object segmentation in order to improve the accuracy of object detection. We frame the problem as inference in a Markov Random Field, in which each detection hypothesis scores object appearance as well as contextual information using Convolutional Neural Networks, and allows the hypothesis to choose and score a segment out of a large pool of accurate object segmentation proposals. This enables the detector to incorporate additional evidence when it is available and thus results in more accurate detections. Our experiments show an improvement of 4.1% in mAP over the R-CNN baseline on PASCAL VOC 2010, and 3.4% over the current state-of-the-art, demonstrating the power of our approach.
Our work is inspired by the success of segDPM @cite_0 . By augmenting the DPM detector @cite_5 with very simple segmentation features that can be computed in constant time, segDPM improved the detection performance by 8% AP. The approach used segments computed from the final segmentation output of CPMC @cite_33 in order to place accurate boxes in parts of the image where segmentation for the object class of interest was available. This idea was subsequently exploited in @cite_13 by augmenting the DPM with an additional set of "deformable context parts" which scored contextual segmentation features around the object. In @cite_35 , the segDPM detector @cite_0 was augmented with part visibility reasoning, achieving state-of-the-art results for detection of articulated classes. In @cite_37 , the authors extended segDPM to incorporate segmentation compatibility also at the part level.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_33", "@cite_0", "@cite_5", "@cite_13" ], "mid": [ "2104408738", "", "78159342", "1964005749", "2168356304", "2125215748" ], "abstract": [ "Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1 AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.", "", "Feature extraction, coding and pooling, are important components on many contemporary object recognition paradigms. In this paper we explore novel pooling techniques that encode the second-order statistics of local descriptors inside a region. To achieve this effect, we introduce multiplicative second-order analogues of average and max-pooling that together with appropriate non-linearities lead to state-of-the-art performance on free-form region recognition, without any type of feature coding. 
Instead of coding, we found that enriching local descriptors with additional image information leads to large performance gains, especially in conjunction with the proposed pooling methodology. We show that second-order pooling over free-form regions produces results superior to those of the winning systems in the Pascal VOC 2011 semantic segmentation challenge, with models that are 20,000 times faster.", "In this paper we are interested in how semantic segmentation can help object detection. Towards this goal, we propose a novel deformable part-based model which exploits region-based segmentation algorithms that compute candidate object regions by bottom-up clustering followed by ranking of those regions. Our approach allows every detection hypothesis to select a segment (including void), and scores each box in the image using both the traditional HOG filters as well as a set of novel segmentation features. Thus our model \"blends\" between the detector and segmentation models. Since our features can be computed very efficiently given the segments, we maintain the same complexity as the original DPM. We demonstrate the effectiveness of our approach in PASCAL VOC 2010, and show that when employing only a root filter our approach outperforms Dalal & Triggs detector on all classes, achieving 13% higher average AP. When employing the parts, we outperform the original DPM in @math out of @math classes, achieving an improvement of 8% AP. Furthermore, we outperform the previous state-of-the-art on VOC 2010 test by 4%.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. 
Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of existing contextual models for detection are rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales." ] }
1502.03634
1713770575
In transport modeling and prediction, trip purposes play an important role since mobility choices (e.g. modes, routes, departure times) are made in order to carry out specific activities. Activity based models, which have been gaining popularity in recent years, are built from a large number of observed trips and their purposes. However, data acquired through traditional interview-based travel surveys lack the accuracy and quantity required by such models. Smartphones and interactive web interfaces have emerged as an attractive alternative to conventional travel surveys. A smartphone-based travel survey, Future Mobility Survey (FMS), was developed and field-tested in Singapore and collected travel data from more than 1000 participants for multiple days. To provide a more intelligent interface, inferring the activities of a user at a certain location is a crucial challenge. This paper presents a learning model that infers the most likely activity associated to a certain visited place. The data collected in FMS contain errors or noise due to various reasons, so a robust approach via ensemble learning is used to improve generalization performance. Our model takes advantage of cross-user historical data as well as user-specific information, including socio-demographics. Our empirical results using FMS data demonstrate that the proposed method contributes significantly to our travel survey application.
Most of the algorithms used to derive activities in GPS travel surveys are rule-based and rely heavily on GIS information, such as Point Of Interest (POI) and land use information @cite_14 @cite_8 @cite_26 . An early car-based study in America by @cite_14 inferred trip purposes from GPS data and an extensive GIS land use database. In more recent work, a POI's attractiveness is defined as a function of time of day to indicate the potential for activities @cite_4 , and @cite_20 proposed to infer an activity based on the distance between a POI and the stop location. Another option is to use individual characteristics as input for activity recognition algorithms. @cite_6 developed a rule-based approach to identify activities in Switzerland based on users' home and work locations and POI land use information. Similar information and rules were used in the GPS survey in the Netherlands @cite_12 . Reference @cite_15 described a more elaborate heuristic rule-based method which collects users' workplace or school, their two most frequently used grocery stores, and their occupation beforehand, and uses this information to derive trip characteristics.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_26", "@cite_8", "@cite_6", "@cite_15", "@cite_12", "@cite_20" ], "mid": [ "2025573206", "2146299864", "", "", "2114898256", "1970788804", "1983447652", "1998451808" ], "abstract": [ "Several recent pilot studies combined Global Positioning System (GPS) technology with travel survey data collection to evaluate opportunities for improving the quantity and accuracy of travel data. These studies used GPS to supplement traditional data elements collected in paper or electronic travel diaries. Although many traditional trip elements can be obtained from the GPS data, trip purpose has remained an important element, requiring the use of a diary to continue. Presented are the results of a proof-of-concept study conducted at the Georgia Institute of Technology that examined the feasibility of using GPS data loggers to completely replace, rather than supplement, traditional travel diaries. In this approach, all GPS data collected must be processed so that all essential trip data elements, including trip purpose, are derived. If this processing is done correctly and quickly, then the computer-assisted telephone interview retrieval call could be shortened significantly, reducing both respondent bu...", "GPS (Global Positioning System) trajectory data provide a new way for city travel analysis other than traditional travel diary data. But generally raw GPS traces do not include information on trip purposes or activities. Earlier studies addressed this issue through a combination of manual and computer-assisted data processing steps. Nevertheless, geographic context databases provide the possibility for automatic activity identification based on GPS trajectories since each activity is uniquely defined by a set of features such as location and duration. 
In contrast to most existing methods, which use two-dimensional factors, this paper presents a novel approach using spatial-temporal attractiveness of POIs (Points of Interest) to identify activity-locations as well as durations from raw GPS trajectory. We also introduce an algorithm to figure out how the intersections of trajectories and spatial-temporal attractiveness prisms indicate the potential possibilities for activities. Finally, experiments using real world GPS tracking data, road networks and POIs are conducted for evaluations of the proposed approach.", "", "", "The recent Swedish Intelligent Speed Adaptation (ISA) study included a component that involved the installation of units based on the Global Positioning System (GPS) in hundreds of cars in three Swedish cities, Borlange, Lund, and Lidkoping; these vehicles were observed for up to 2 years. In Borlange, the speed and location data of each vehicle were transmitted at regular intervals to a central server and stored for later analysis. This data set contains a wealth of travel behavior information that had not been available before. However, a data set of this magnitude introduces a major need for automated processes that can glean travel behavior details from the trip summary and collected GPS point files. A summary is presented of characteristics of and issues with the Borlange GPS data set, which included 186 personal vehicles with at least 30 days of travel data and corresponding household sociodemographic data. (These 186 vehicles recorded 49,667 vehicle days of travel and 240,435 trips inside the study ...
The early wearable devices were heavy and ungainly, and success in having people use the devices was limited. In 2005, the Institute of Transport and Logistics Studies (ITLS) and NeveITS pioneered the use of a much smaller device with its own internal battery, similar in weight and dimensions to a mobile telephone. Subsequent to the initial deployment of this device, there have been further advances in the sensitivity of the antenna receiver and we have developed with NeveITS a number of improvements to software. Most recently, another device called a Starnav, has been developed for ITLS in Taiwan, and offers further sophistication and user friendliness than the Neve devices. This paper describes these GPS devices and demonstrates the capability of these devices to provide detailed and accurate data on travel movements. We provide a brief description of the software we have developed and continue to improve for analysing the resulting data. The latest technologies for GPS devices indicate the potential to replace many conventional methods of data collection that are flawed because of known errors and inaccuracies.", "In the past few decades, travel patterns have become more complex and policy makers demand more detailed information. As a result, conventional data collection methods seem no longer adequate to satisfy all data needs. Travel researchers around the world are currently experimenting with different Global Positioning System (GPS)-based data collection methods. An overview of the literature shows the potential of these methods, especially when algorithms that include spatial data are used to derive trip characteristics from the GPS logs. This article presents an innovative method that combines GPS logs, Geographic Information System (GIS) technology and an interactive web-based validation application. 
In particular, this approach concentrates on the issue of deriving and validating trip purposes and travel modes, as well as allowing for reliable multi-day data collection. In 2007, this method was used in practice in a large-scale study conducted in the Netherlands. In total, 1104 respondents successfully participated in the one-week survey. The project demonstrated that GPS-based methods now provide reliable multi-day data. In comparison with data from the Dutch Travel Survey, travel mode and trip purpose shares were almost equal while more trips per tour were recorded, which indicates the ability of collecting trips that are missed by paper diary methods.", "The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works." ] }
1502.03634
1713770575
In transport modeling and prediction, trip purposes play an important role since mobility choices (e.g. modes, routes, departure times) are made in order to carry out specific activities. Activity based models, which have been gaining popularity in recent years, are built from a large number of observed trips and their purposes. However, data acquired through traditional interview-based travel surveys lack the accuracy and quantity required by such models. Smartphones and interactive web interfaces have emerged as an attractive alternative to conventional travel surveys. A smartphone-based travel survey, Future Mobility Survey (FMS), was developed and field-tested in Singapore and collected travel data from more than 1000 participants for multiple days. To provide a more intelligent interface, inferring the activities of a user at a certain location is a crucial challenge. This paper presents a learning model that infers the most likely activity associated to a certain visited place. The data collected in FMS contain errors or noise due to various reasons, so a robust approach via ensemble learning is used to improve generalization performance. Our model takes advantage of cross-user historical data as well as user-specific information, including socio-demographics. Our empirical results using FMS data demonstrate that the proposed method contributes significantly to our travel survey application.
More elaborate algorithms have been proposed that take a machine learning approach. Deng and Li @cite_7 used attributes such as land use and sociodemographic information about the respondents to construct decision trees, with an adaptive boosting technique applied to improve the classification results. @cite_11 proposed a location-based activity recognition system using Relational Markov Networks. Both works were evaluated on small samples of experimental data.
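As a rough illustration of the boosted decision-tree idea (a minimal sketch, not Deng and Li's actual implementation; the trip features and labels below are toy values invented here), decision stumps over trip attributes can be combined with AdaBoost:

```python
# Minimal AdaBoost over decision stumps for a toy trip-purpose task.
# Features: [departure hour, land-use code (0=residential, 1=commercial)].
# Labels: +1 = work/shopping trip, -1 = return-home trip (invented data).
import numpy as np

def fit_stump(X, y, w):
    """Find the single-feature threshold split minimising weighted error."""
    best = (0, 0.0, 1, np.inf)              # (feature, threshold, polarity, err)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                 # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        f, t, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)               # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, f] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)      # up-weight misclassified trips
        w /= w.sum()
        ensemble.append((f, t, pol, alpha))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, a in ensemble)
    return np.sign(score)

X = np.array([[8, 1], [9, 1], [18, 0], [19, 0], [12, 1], [22, 0]])
y = np.array([1, 1, -1, -1, 1, -1])
model = adaboost(X, y, rounds=5)
print((predict(model, X) == y).mean())      # 1.0 on this separable toy set
```

The weight update concentrates subsequent rounds on trips the current ensemble misclassifies, which is the mechanism the boosted-tree approach relies on.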
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2002313219", "2138460198" ], "abstract": [ "Passive global positioning system (GPS) travel survey has proven an innovative alternative to the traditional paper-and-pencil method, as it can record exact time and location of travel activities without incurring much burden on survey respondents. Lying in the core of this technique is how to derive accurate travel information (e.g., trip purpose) from GPS data. Previous studies rely on simple rules that are manually extracted from the inherent structure of GPS data streams. These methods, however, are difficult to generalize for other applications and lack helpful hints for new research. This paper presents a machine learning approach to deriving trip purpose from GPS track data coupled with other relevant data sources. This approach employs a number of attributes (i.e., time stamp and land-use type of trip ends, a set of spatiotemporal indices of travel, and demographic and socioeconomic characteristics of respondents) to construct a decision tree for purpose of classification. Each attribute provides partial evidence to the depiction of a given purpose, but this depiction may or may not be monotonic, and none of them can work alone toward the goal. A reasoning procedure using the adaptive boosting technique was designed to explore how these attributes could work together to achieve trip purpose derivation. This technique generated multiple decision trees to improve the classification results through a mechanism of voting from these trees. Each tree was constructed in the depth-first fashion with the root node and the split of subsequent nodes being determined on the basis of the gain-ratio computed for the relevant attributes. This procedure was implemented in the C5.0 machine learning environment with 226 GPS trip records collected from 36 respondents. 
The experimental results seemed rather promising: using 10 iterations for adaptive boosting, an overall classification accuracy of 87.6 was achieved.", "In this paper we define a general framework for activity recognition by building upon and extending Relational Markov Networks. Using the example of activity recognition from location data, we show that our model can represent a variety of features including temporal information such as time of day, spatial information extracted from geographic databases, and global constraints such as the number of homes or workplaces of a person. We develop an efficient inference and learning technique based on MCMC. Using GPS location data collected by multiple people we show that the technique can accurately label a person's activity locations. Furthermore, we show that it is possible to learn good models from less data by using priors extracted from other people's data." ] }
1502.03634
1713770575
In transport modeling and prediction, trip purposes play an important role since mobility choices (e.g. modes, routes, departure times) are made in order to carry out specific activities. Activity based models, which have been gaining popularity in recent years, are built from a large number of observed trips and their purposes. However, data acquired through traditional interview-based travel surveys lack the accuracy and quantity required by such models. Smartphones and interactive web interfaces have emerged as an attractive alternative to conventional travel surveys. A smartphone-based travel survey, Future Mobility Survey (FMS), was developed and field-tested in Singapore and collected travel data from more than 1000 participants for multiple days. To provide a more intelligent interface, inferring the activities of a user at a certain location is a crucial challenge. This paper presents a learning model that infers the most likely activity associated to a certain visited place. The data collected in FMS contain errors or noise due to various reasons, so a robust approach via ensemble learning is used to improve generalization performance. Our model takes advantage of cross-user historical data as well as user-specific information, including socio-demographics. Our empirical results using FMS data demonstrate that the proposed method contributes significantly to our travel survey application.
Little work exists on activity detection in smartphone-based travel surveys. @cite_21 converted GPS trajectories collected by smartphones into lists of activities by first finding businesses around a user's stop, and then employing reverse Latent Semantic Analysis (LSA) to look up the most relevant terms associated with those businesses.
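The latent-semantic lookup step can be sketched as follows (a minimal illustration of the idea, not the iDiary system itself; the vocabulary, place names, and co-occurrence counts are invented for this example):

```python
# Rank terms by similarity to a visited place via a truncated SVD of a
# toy term-by-place co-occurrence matrix (LSA-style latent embedding).
import numpy as np

terms = ["books", "coffee", "groceries", "novel"]
places = ["bookstore", "cafe", "supermarket"]
A = np.array([[5, 0, 0],        # rows = terms, cols = places
              [1, 6, 0],
              [0, 0, 7],
              [4, 0, 0]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                            # truncation rank
term_vecs = U[:, :k] * s[:k]     # terms embedded in the latent space
place_vecs = Vt[:k].T            # places embedded in the latent space

def top_terms(place, n=2):
    """Terms most associated with a place, by cosine in the latent space."""
    v = place_vecs[places.index(place)]
    sims = term_vecs @ v / (np.linalg.norm(term_vecs, axis=1)
                            * np.linalg.norm(v) + 1e-12)
    return [terms[i] for i in np.argsort(-sims)[:n]]

print(top_terms("bookstore"))    # terms most associated with the bookstore stop
```

Given a stop matched to a business, ranking terms this way yields a textual description of the likely activity, which is the essence of the reverse-LSA lookup.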
{ "cite_N": [ "@cite_21" ], "mid": [ "2149880269" ], "abstract": [ "This paper describes a system that takes as input GPS data streams generated by users' phones and creates a searchable database of locations and activities. The system is called iDiary and turns large GPS signals collected from smartphones into textual descriptions of the trajectories. The system features a user interface similar to Google Search that allows users to type text queries on their activities (e.g., \"Where did I buy books?\") and receive textual answers based on their GPS signals. iDiary uses novel algorithms for semantic compression (known as coresets) and trajectory clustering of massive GPS signals in parallel to compute the critical locations of a user. Using an external database, we then map these locations to textual descriptions and activities so that we can apply text mining techniques on the resulting data (e.g. LSA or transportation mode recognition). We provide experimental results for both the system and algorithms and compare them to existing commercial and academic state-of-the-art. This is the first GPS system that enables text-searchable activities from GPS data." ] }
1502.03504
2267618974
Emerging hybrid accelerator architectures for high performance computing are often suited for the use of a data-parallel programming model. Unfortunately, programmers of these architectures face a steep learning curve that frequently requires learning a new language (e.g., OpenCL). Furthermore, the distributed (and frequently multi-level) nature of the memory organization of clusters of these machines provides an additional level of complexity. This paper presents preliminary work examining how programming with a local orientation can be employed to provide simpler access to accelerator architectures. A locally-oriented programming model is especially useful for the solution of algorithms requiring the application of a stencil or convolution kernel. In this programming model, a programmer codes the algorithm by modifying only a single array element (called the local element), but has read-only access to a small sub-array surrounding the local element. We demonstrate how a locally-oriented programming model can be adopted as a language extension using source-to-source program transformations.
LOPe builds upon prior work studying how to map Fortran to accelerator programming models like OpenCL. In the ForOpenCL project @cite_2 we exploited Fortran's pure and elemental functions to express data-parallel kernels of an OpenCL-based program. In practice, array calculations for a given index @math will require read-only access to a local neighborhood of size @math around @math . LOPe extends this work by introducing a mechanism for representing these neighborhoods as array declaration type annotations.
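The locally-oriented access pattern can be sketched in plain Python (an illustration invented here, not code from the LOPe or ForOpenCL papers): the kernel writes only the local element but may read a small halo around it, which is exactly the access pattern LOPe's array-declaration annotations are meant to declare.

```python
# A locally-oriented stencil: each kernel call sees only a read-only
# (2*HALO+1)^2 neighbourhood and writes a single output element.
import numpy as np

HALO = 1  # neighbourhood radius, analogous to a LOPe halo annotation

def kernel(local):
    """Four-point average; `local` is the (2*HALO+1)^2 sub-array."""
    return 0.25 * (local[0, 1] + local[2, 1] + local[1, 0] + local[1, 2])

def apply_stencil(u):
    out = u.copy()
    for i in range(HALO, u.shape[0] - HALO):
        for j in range(HALO, u.shape[1] - HALO):
            halo = u[i - HALO:i + HALO + 1, j - HALO:j + HALO + 1]
            out[i, j] = kernel(halo)   # only the local element is written
    return out

u = np.zeros((5, 5))
u[2, 2] = 4.0
v = apply_stencil(u)
print(v[1, 2], v[2, 2])   # 1.0 0.0 -- the value spreads to the four neighbours
```

Because the kernel never writes outside the local element, every iteration of the double loop is independent, which is what makes the pattern straightforward to map onto data-parallel OpenCL work-items.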
{ "cite_N": [ "@cite_2" ], "mid": [ "2079427028" ], "abstract": [ "Emerging GPU architectures for high performance computing are well suited to a data-parallel programming model. This paper presents preliminary work examining a programming methodology that provides Fortran programmers with access to these emerging systems. We use array constructs in Fortran to show how this infrequently exploited, standardised language feature is easily transformed to lower-level accelerator code. The transformations in ForOpenCL are based on a simple mapping from Fortran to OpenCL. We demonstrate, using a stencil code solving the shallow-water fluid equations, that the performance of the ForOpenCL compiler-generated transformations is comparable with that of hand-optimised OpenCL code." ] }
1502.03630
1762643624
Topic modeling of textual corpora is an important and challenging problem. In most previous work, the "bag-of-words" assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.
In the past decade, a great variety of topic models have been proposed that can automatically extract interesting topics, in the form of multinomial distributions over words, from texts @cite_6 @cite_0 @cite_18 @cite_20 @cite_9 . Among these approaches, LDA @cite_6 and its variants are the most popular models for topic modeling. In the LDA model, the mixture of topics for each document is generated from a Dirichlet prior shared by all documents in the corpus. Different extensions of the LDA model have been proposed. For example, teh2006hierarchical assumes that the number of mixture components is unknown a priori and is to be inferred from the data. mcauliffe2008supervised develops a supervised latent Dirichlet allocation model (sLDA) for document-response pairs. Recent work incorporates context information into topic modeling, such as time @cite_28 , geographic location @cite_29 , authorship @cite_12 , and sentiment @cite_3 @cite_11 , to make the learned topics better match expectations.
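The generative process just described can be made concrete with a short sketch (the two-topic vocabulary and word distributions below are toy values invented for illustration):

```python
# LDA's generative story: draw a per-document topic mixture from a shared
# Dirichlet prior, then for each word draw a topic, then a word from it.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["goal", "match", "vote", "party"]
phi = np.array([[0.45, 0.45, 0.05, 0.05],    # "sports" topic over vocab
                [0.05, 0.05, 0.45, 0.45]])   # "politics" topic over vocab
alpha = np.array([0.5, 0.5])                 # Dirichlet prior shared by all docs

def generate_document(n_words):
    theta = rng.dirichlet(alpha)             # per-document topic mixture
    words = []
    for _ in range(n_words):
        z = rng.choice(len(alpha), p=theta)  # topic assignment for this word
        w = rng.choice(len(vocab), p=phi[z]) # word drawn from that topic
        words.append(vocab[w])
    return theta, words

theta, doc = generate_document(8)
print(theta.round(2), doc)
```

Inference in LDA and its variants amounts to inverting this process: recovering the per-document mixtures `theta` and topic-word tables `phi` from the observed words alone.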
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_9", "@cite_29", "@cite_6", "@cite_3", "@cite_0", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2138107145", "2171343266", "2100002341", "2107610218", "", "", "", "2109154616", "1498269992", "" ], "abstract": [ "Probabilistic topic modeling provides a suite of tools for the unsupervised analysis of large collections of documents. Topic modeling algorithms can uncover the underlying themes of a collection and decompose its documents according to those themes. This analysis can be used for corpus exploration, document search, and a variety of prediction problems. In this tutorial, I will review the state-of-the-art in probabilistic topic models. I will describe the three components of topic modeling: (1) Topic modeling assumptions (2) Algorithms for computing with topic models (3) Applications of topic models In (1), I will describe latent Dirichlet allocation (LDA), which is one of the simplest topic models, and then describe a variety of ways that we can build on it. These include dynamic topic models, correlated topic models, supervised topic models, author-topic models, bursty topic models, Bayesian nonparametric topic models, and others. I will also discuss some of the fundamental statistical ideas that are used in building topic models, such as distributions on the simplex, hierarchical Bayesian modeling, and models of mixed-membership. In (2), I will review how we compute with topic models. I will describe approximate posterior inference for directed graphical models using both sampling and variational inference, and I will discuss the practical issues and pitfalls in developing these algorithms for topic models. Finally, I will describe some of our most recent work on building algorithms that can scale to millions of documents and documents arriving in a stream. In (3), I will discuss applications of topic models. 
These include applications to images, music, social networks, and other data in which we hope to uncover hidden patterns. I will describe some of our recent work on adapting topic modeling algorithms to collaborative filtering, legislative modeling, and bibliometrics without citations. Finally, I will discuss some future directions and open research problems in topic models.", "This paper presents an LDA-style topic model that captures not only the low-dimensional structure of data, but also how the structure changes over time. Unlike other recent work that relies on Markov assumptions or discretization of time, here each topic is associated with a continuous distribution over timestamps, and for each generated document, the mixture distribution over topics is influenced by both word co-occurrences and the document's timestamp. Thus, the meaning of a particular topic can be relied upon as constant, but the topics' occurrence and correlations change significantly over time. We present results on nine months of personal email, 17 years of NIPS research papers and over 200 years of presidential state-of-the-union addresses, showing improved topics, better timestamp prediction, and interpretable trends.", "We introduce a two-layer undirected graphical model, called a \"Replicated Softmax\", that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. We present efficient learning and inference algorithms for this model, and show how a Monte-Carlo based method, Annealed Importance Sampling, can be used to produce an accurate estimate of the log-probability the model assigns to test data. 
This allows us to demonstrate that the proposed model is able to generalize much better compared to Latent Dirichlet Allocation in terms of both the log-probability of held-out documents and the retrieval accuracy.", "Mining subtopics from weblogs and analyzing their spatiotemporal patterns have applications in multiple domains. In this paper, we define the novel problem of mining spatiotemporal theme patterns from weblogs and propose a novel probabilistic approach to model the subtopic themes and spatiotemporal theme patterns simultaneously. The proposed model discovers spatiotemporal theme patterns by (1) extracting common themes from weblogs; (2) generating theme life cycles for each given location; and (3) generating theme snapshots for each given time period. Evolution of patterns can be discovered by comparative analysis of theme life cycles and theme snapshots. Experiments on three different data sets show that the proposed approach can discover interesting spatiotemporal theme patterns effectively. The proposed probabilistic model is general and can be used for spatiotemporal text mining on any domain with time and location information.", "", "", "", "We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm. We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. 
We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer.", "Algorithms such as Latent Dirichlet Allocation (LDA) have achieved significant progress in modeling word document relationships. These algorithms assume each word in the document was generated by a hidden topic and explicitly model the word distribution of each topic as well as the prior distribution over topics in the document. Given these parameters, the topics of all words in the same document are assumed to be independent. In this paper, we propose modeling the topics of words in the document as a Markov chain. Specifically, we assume that all words in the same sentence have the same topic, and successive sentences are more likely to have the same topics. Since the topics are hidden, this leads to using the well-known tools of Hidden Markov Models for learning and inference. We show that incorporating this dependency allows us to learn better topics and to disambiguate words that can belong to different topics. Quantitatively, we show that we obtain better perplexity in modeling documents with only a modest increase in learning and inference complexity.", "" ] }
1502.03630
1762643624
Topic modeling of textual corpora is an important and challenging problem. In most previous work, the "bag-of-words" assumption is usually made which ignores the ordering of words. This assumption simplifies the computation, but it unrealistically loses the ordering information and the semantic of words in the context. In this paper, we present a Gaussian Mixture Neural Topic Model (GMNTM) which incorporates both the ordering of words and the semantic meaning of sentences into topic modeling. Specifically, we represent each topic as a cluster of multi-dimensional vectors and embed the corpus into a collection of vectors generated by the Gaussian mixture model. Each word is affected not only by its topic, but also by the embedding vector of its surrounding words and the context. The Gaussian mixture components and the topic of documents, sentences and words can be learnt jointly. Extensive experiments show that our model can learn better topics and more accurate word distributions for each topic. Quantitatively, comparing to state-of-the-art topic modeling approaches, GMNTM obtains significantly better performance in terms of perplexity, retrieval accuracy and classification accuracy.
Recently, several undirected graphical models have been proposed that typically outperform LDA. mcauliffe2008supervised present a two-layer undirected graphical model, called ``Replicated Softmax'', that can be used to model and automatically extract low-dimensional latent semantic representations from a large unstructured collection of documents. hinton2009replicated extend ``Replicated Softmax'' by adding another layer of hidden units on top of the first with bipartite undirected connections. Neural-network-based approaches, such as the Neural Autoregressive Density Estimator (DocNADE) @cite_2 and the hybrid neural network-latent topic model @cite_27 , have also been shown to outperform the LDA model.
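The autoregressive idea behind DocNADE can be sketched as follows (a minimal illustration with random weights and invented dimensions, not the model of @cite_2): the probability of each next word is computed from a hidden representation that accumulates the words observed so far.

```python
# Autoregressive word prediction: p(w_i | w_<i) from a hidden state built
# by summing the weight columns of the previously observed words.
import numpy as np

rng = np.random.default_rng(1)
V, H = 6, 4                          # vocabulary size, hidden units (toy)
W = rng.normal(0, 0.1, (H, V))       # word-to-hidden weights
U = rng.normal(0, 0.1, (V, H))       # hidden-to-output weights
b, c = np.zeros(V), np.zeros(H)      # output and hidden biases

def next_word_probs(observed):
    """Distribution over the next word given the list of words seen so far."""
    h = np.tanh(c + W[:, observed].sum(axis=1)) if observed else np.tanh(c)
    logits = b + U @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()

def doc_log_likelihood(doc):
    """Chain-rule log-likelihood of a document (a list of word indices)."""
    return sum(np.log(next_word_probs(doc[:i])[w])
               for i, w in enumerate(doc))

p = next_word_probs([0, 3])
print(p.sum())                       # a valid distribution over the vocabulary
```

Chaining these conditionals gives the document likelihood directly, which is why such models can be evaluated by perplexity without the expensive partition-function estimates an undirected model requires.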
{ "cite_N": [ "@cite_27", "@cite_2" ], "mid": [ "2295103979", "2157006255" ], "abstract": [ "This paper introduces a hybrid model that combines a neural network with a latent topic model. The neural network provides a lowdimensional embedding for the input data, whose subsequent distribution is captured by the topic model. The neural network thus acts as a trainable feature extractor while the topic model captures the group structure of the data. Following an initial pretraining phase to separately initialize each part of the model, a unified training scheme is introduced that allows for discriminative training of the entire model. The approach is evaluated on visual data in scene classification task, where the hybrid model is shown to outperform models based solely on neural networks or topic models, as well as other baseline methods.", "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm." ] }
1502.03179
1550569702
We study asymptotics for solutions of Maxwell's equations, in fact of the Hodge-de Rham equation @math without restriction on the form degree, on a geometric class of stationary spacetimes with a warped product type structure (without any symmetry assumptions), which in particular include Schwarzschild-de Sitter spaces of all spacetime dimensions @math . We prove that solutions decay exponentially to @math or to stationary states in every form degree, and give an interpretation of the stationary states in terms of cohomological information of the spacetime. We also study the wave equation on differential forms and in particular prove analogous results on Schwarzschild-de Sitter spacetimes. We demonstrate the stability of our analysis and deduce asymptotics and decay for solutions of Maxwell's equations, the Hodge-de Rham equation and the wave equation on differential forms on Kerr-de Sitter spacetimes with small angular momentum.
Vasy's proof of the meromorphy of the (modified) resolvent of the Laplacian on differential forms on asymptotically hyperbolic spaces @cite_10 makes use of the same microlocal framework as the present paper, and it also shows how to link the ``intrinsic'' structure of the asymptotically hyperbolic space and the form of the Hodge-Laplacian with a ``non-degenerately extended'' space and operator. For Kerr-de Sitter spacetimes, Dyatlov @cite_1 defined quasinormal modes or resonances in the same way as they are used here, and obtained exponential decay to constants away from the event horizon for scalar waves. This followed work of Melrose, Sá Barreto and Vasy @cite_29 , where this was shown up to the event horizon of a Schwarzschild-de Sitter black hole, and of Dafermos and Rodnianski @cite_37 who proved polynomial decay in this setting. Dyatlov proved exponential decay up to the event horizon for Kerr-de Sitter in @cite_40 , and significantly strengthened this in @cite_6 , obtaining a full resonance expansion for scalar waves, improving on the result of Bony and Häfner @cite_46 in the Schwarzschild-de Sitter setting, which in turn followed Sá Barreto and Zworski @cite_31 .
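Schematically (a standard form, stated here for orientation rather than taken from any single cited paper), the resonance expansions discussed above express a wave as a superposition of quasinormal modes:

```latex
u(t, x) \sim \sum_{j} e^{-i \sigma_j t} u_j(x), \qquad \operatorname{Im} \sigma_j \le 0,
```

where the \(\sigma_j\) are the quasinormal frequencies (resonances) and the \(u_j\) the associated mode profiles; exponential decay to a constant corresponds to a simple resonance at \(\sigma = 0\) with the remaining resonances confined to a half-plane \(\operatorname{Im} \sigma \le -\varepsilon < 0\).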
{ "cite_N": [ "@cite_37", "@cite_31", "@cite_29", "@cite_1", "@cite_6", "@cite_40", "@cite_46", "@cite_10" ], "mid": [ "", "", "2949406468", "2018414839", "2083456890", "2963880204", "1965595223", "1861554096" ], "abstract": [ "", "", "Solutions to the wave equation on de Sitter-Schwarzschild space with smooth initial data on a Cauchy surface are shown to decay exponentially to a constant at temporal infinity, with corresponding uniform decay on the appropriately compactified space.", "We provide a rigorous definition of quasi-normal modes for a rotating black hole. They are given by the poles of a certain meromorphic family of operators and agree with the heuristic definition in the physics literature. If the black hole rotates slowly enough, we show that these poles form a discrete subset of ( C ) . As an application we prove that the local energy of linear waves in that background decays exponentially once orthogonality to the zero resonance is imposed.", "We establish a Bohr–Sommerfeld type condition for quasi-normal modes of a slowly rotating Kerr–de Sitter black hole, providing their full asymptotic description in any strip of fixed width. In particular, we observe a Zeeman-like splitting of the high multiplicity modes at a = 0 (Schwarzschild–de Sitter), once spherical symmetry is broken. The numerical results presented in Appendix B show that the asymptotics are in fact accurate at very low energies and agree with the numerical results established by other methods in the physics literature. We also prove that solutions of the wave equation can be asymptotically expanded in terms of quasi-normal modes; this confirms the validity of the interpretation of their real parts as frequencies of oscillations, and imaginary parts as decay rates of gravitational waves.", "", "We describe an expansion of the solution of the wave equation on the De Sitter–Schwarzschild metric in terms of resonances. The principal term in the expansion is due to a resonance at 0. 
The error term decays polynomially if we permit a logarithmic derivative loss in the angular directions and exponentially if we permit an @math derivative loss in the angular directions.", "We show the analytic continuation of the resolvent of the Laplacian on asymptotically hyperbolic spaces on differential forms, including high energy estimates in strips. This is achieved by placing the spectral family of the Laplacian within the framework developed, and applied to scalar problems, by the author recently, roughly by extending the problem across the boundary of the compactification of the asymptotically hyperbolic space in a suitable manner. The main novelty is that the non-scalar nature of the operator is dealt with by relating it to a problem on an asymptotically Minkowski space to motivate the choice of the extension across the conformal boundary." ] }
1502.03179
1550569702
We study asymptotics for solutions of Maxwell's equations, in fact of the Hodge-de Rham equation @math without restriction on the form degree, on a geometric class of stationary spacetimes with a warped product type structure (without any symmetry assumptions), which in particular include Schwarzschild-de Sitter spaces of all spacetime dimensions @math . We prove that solutions decay exponentially to @math or to stationary states in every form degree, and give an interpretation of the stationary states in terms of cohomological information of the spacetime. We also study the wave equation on differential forms and in particular prove analogous results on Schwarzschild-de Sitter spacetimes. We demonstrate the stability of our analysis and deduce asymptotics and decay for solutions of Maxwell's equations, the Hodge-de Rham equation and the wave equation on differential forms on Kerr-de Sitter spacetimes with small angular momentum.
In the scalar setting too, the wave equation on asymptotically flat spacetimes has received more attention. Dafermos, Rodnianski and Shlapentokh-Rothman @cite_9 , building on @cite_51 @cite_17 @cite_47 , established the decay of scalar waves on all non-extremal Kerr spacetimes, following pioneering work by Kay and Wald @cite_41 @cite_0 in the Schwarzschild setting. Tataru and Tohaneanu @cite_48 @cite_38 proved decay and Price's law for slowly rotating Kerr using local energy decay estimates, and Strichartz estimates were proved by Marzuola, Metcalfe, Tataru and Tohaneanu @cite_49 .
{ "cite_N": [ "@cite_38", "@cite_41", "@cite_48", "@cite_9", "@cite_0", "@cite_49", "@cite_47", "@cite_51", "@cite_17" ], "mid": [ "", "2076260317", "2048123872", "1697896460", "2050566505", "", "1988465365", "2962947744", "1666285156" ], "abstract": [ "", "The authors prove boundedness on an exterior Schwarzschild wedge for C^∞ solutions of the covariant Klein-Gordon equation which have compact support on Cauchy surfaces in Kruskal spacetime. Previously used methods enable such boundedness to be proven only for solutions whose initial data satisfy the additional restriction of vanishing at the bifurcation 2-sphere of the horizon. By employing a rarely considered discrete isometry of Kruskal spacetime and the causal propagation property of the equation, they remove this restriction. This also enables them to prove boundedness exterior to the horizon of a spacetime representing the collapse to a black hole of a spherically symmetric compact star for solutions of the same equation having C^∞ initial data on a Cauchy surface drawn prior to the collapse.", "Author(s): Tataru, D | Abstract: In this article we study the pointwise decay properties of solutions to the wave equation on a class of stationary asymptotically flat backgrounds in three space dimensions. Under the assumption that uniform energy bounds and a weak form of local energy decay hold forward in time we establish a @math local uniform decay rate for linear waves. This work was motivated by open problems concerning decay rates for linear waves on Schwarzschild and Kerr backgrounds, where such a decay rate has been conjectured by R. Price. Our results apply to both of these cases.", "This paper concludes the series begun in [M. Dafermos and I.
Rodnianski, Decay for solutions of the wave equation on Kerr exterior spacetimes I-II: the cases |a| << M or axisymmetry, arXiv:1010.5132], providing the complete proof of definitive boundedness and decay results for the scalar wave equation on Kerr backgrounds in the general subextremal |a| < M case without symmetry assumptions. The essential ideas of the proof (together with explicit constructions of the most difficult multiplier currents) have been announced in our survey [M. Dafermos and I. Rodnianski, The black hole stability problem for linear scalar perturbations, in Proceedings of the 12th Marcel Grossmann Meeting on General Relativity, T. (ed.), World Scientific, Singapore, 2011, pp. 132-189, arXiv:1010.5137]. Our proof appeals also to the quantitative mode-stability proven in [Y. Shlapentokh-Rothman, Quantitative Mode Stability for the Wave Equation on the Kerr Spacetime, arXiv:1302.6902, to appear, Ann. Henri Poincare], together with a streamlined continuity argument in the parameter a, appearing here for the first time. While serving as Part III of a series, this paper repeats all necessary notations so that it can be read independently of previous work.", "It is shown that the standard arguments for the stability of the Schwarzschild metric can be made into a rigorous proof that the numerical values of linear perturbations of Schwarzschild must remain uniformly bounded for all time.", "", "We give a quantitative refinement and simple proofs of mode stability type statements for the wave equation on Kerr backgrounds in the full sub-extremal range (|a| < M). 
As an application, we are able to quantitatively control the energy flux along the horizon and null infinity and establish integrated local energy decay for solutions to the wave equation in any bounded-frequency regime.", "We review our recent work on linear stability for scalar perturbations of Kerr spacetimes, that is to say, boundedness and decay properties for solutions of the scalar wave equation □_g ψ = 0 on Kerr exterior backgrounds (M, g_{a,M}). We begin with the very slowly rotating case |a| ≪ M, where first boundedness and then decay has been shown in rapid developments over the last two years, following earlier progress in the Schwarzschild case a = 0. We then turn to the general subextremal range |a| < M, where we give here for the first time the essential elements of a proof of definitive decay bounds for solutions ψ. These developments give hope that the problem of the non-linear stability of the Kerr family of black holes might soon be addressed. This paper accompanies a talk by one of the authors (I.R.) at the 12th Marcel Grossmann Meeting, Paris, June 2009.", "This paper contains the first two parts (I-II) of a three-part series concerning the scalar wave equation □_g ψ = 0 on a fixed Kerr background. We here restrict to two cases: (II1) |a| ≪ M, general or (II2) |a| < M, axisymmetric. In either case, we prove a version of 'integrated local energy decay', specifically, that the 4-integral of an energy-type density (degenerating in a neighborhood of the Schwarzschild photon sphere and at infinity), integrated over the domain of dependence of a spacelike hypersurface Σ connecting the future event horizon with spacelike infinity or a sphere on null infinity, is bounded by a natural (non-degenerate) energy flux of ψ through Σ. (The case (II1) has in fact been treated previously in our Clay Lecture notes: Lectures on black holes and linear waves, arXiv:0811.0354.) In our forthcoming Part III, the restriction to axisymmetry for the general |a| < M case is removed.
The complete proof is surveyed in our companion paper The black hole stability problem for linear scalar perturbations, which includes the essential details of our forthcoming Part III. Together with previous work (see our: A new physical-space approach to decay for the wave equation with applications to black hole spacetimes, in XVIth International Congress on Mathematical Physics, Pavel Exner ed., Prague 2009 pp. 421-433, 2009, arxiv:0910.4957), this result leads, under suitable assumptions on initial data of ψ, to polynomial decay bounds for the energy flux of ψ through the foliation of the black hole exterior defined by the time translates of a spacelike hypersurface terminating on null infinity, as well as to pointwise decay estimates, of a definitive form useful for nonlinear applications." ] }
1502.03179
1550569702
We study asymptotics for solutions of Maxwell's equations, in fact of the Hodge-de Rham equation @math without restriction on the form degree, on a geometric class of stationary spacetimes with a warped product type structure (without any symmetry assumptions), which in particular include Schwarzschild-de Sitter spaces of all spacetime dimensions @math . We prove that solutions decay exponentially to @math or to stationary states in every form degree, and give an interpretation of the stationary states in terms of cohomological information of the spacetime. We also study the wave equation on differential forms and in particular prove analogous results on Schwarzschild-de Sitter spacetimes. We demonstrate the stability of our analysis and deduce asymptotics and decay for solutions of Maxwell's equations, the Hodge-de Rham equation and the wave equation on differential forms on Kerr-de Sitter spacetimes with small angular momentum.
Non-linear results for wave equations on black hole spacetimes include @cite_34 , see also the references therein, Luk's work @cite_28 on semilinear forward problems on Kerr, and the scattering construction of dynamical black holes by Dafermos, Holzegel and Rodnianski @cite_4 . Fully general stability results for Einstein's equations specifically are available for de Sitter space by the works of Friedrich @cite_11 , Anderson @cite_12 , Rodnianski and Speck @cite_14 and Ringström @cite_24 , and for Minkowski space by the work of Christodoulou and Klainerman @cite_15 , partially simplified and extended by Lindblad and Rodnianski @cite_35 @cite_36 , Bieri and Zipser @cite_50 and Speck @cite_3 .
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_28", "@cite_36", "@cite_3", "@cite_24", "@cite_50", "@cite_15", "@cite_34", "@cite_12", "@cite_11" ], "mid": [ "2055247681", "1622075441", "1921678059", "2962830193", "1972312481", "1776952983", "", "2103138450", "", "1659437492", "2152174805", "2035054911" ], "abstract": [ "We prove global stability of Minkowski space for the Einstein vacuum equations in harmonic (wave) coordinate gauge for the set of restricted data coinciding with the Schwarzschild solution in the neighborhood of space-like infinity. The result contradicts previous beliefs that wave coordinates are “unstable in the large” and provides an alternative approach to the stability problem originally solved (for unrestricted data, in a different gauge and with a precise description of the asymptotic behavior at null infinity) by D. Christodoulou and S. Klainerman.", "In this article, we study small perturbations of the family of Friedmann-Lemaître-Robertson-Walker cosmological background solutions to the Euler-Einstein system with a positive cosmological constant in 1 + 3 dimensions. The background solutions describe an initially uniform quiet fluid of positive energy density evolving in a spacetime undergoing accelerated expansion. We show that under the equation of state p = c_s^2*(energy density), 0 < c_s^2 < 1/3, the background solutions are globally future asymptotically stable under small irrotational perturbations. In particular, we prove that the perturbed spacetimes, which have the topological structure [0,infinity) x T^3, are future causally geodesically complete.", "We construct a large class of dynamical vacuum black hole spacetimes whose exterior geometry asymptotically settles down to a fixed Schwarzschild or Kerr metric. The construction proceeds by solving a backwards scattering problem for the Einstein vacuum equations with characteristic data prescribed on the event horizon and (in the limit) at null infinity.
The class admits the full \"functional\" degrees of freedom for the vacuum equations, and thus our solutions will in general possess no geometric or algebraic symmetries. It is essential, however, for the construction that the scattering data (and the resulting solution spacetime) converge to stationarity exponentially fast, in advanced and retarded time, their rate of decay intimately related to the surface gravity of the event horizon. This can be traced back to the celebrated redshift effect, which in the context of backwards evolution is seen as a blueshift.", "We study a semilinear equation with derivatives satisfying a null condition on slowly rotating Kerr spacetimes. We prove that given sufficiently small initial data, the solution exists globally in time and decays with a quantitative rate to the trivial solution. The proof uses the robust vector field method. It makes use of the decay properties of the linear wave equation on Kerr spacetime, in particular the …", "We give a new proof of the global stability of Minkowski space originally established in the vacuum case by Christodoulou and Klainerman. The new approach, which relies on the classical harmonic gauge, shows that the Einstein-vacuum and the Einstein-scalar field equations with asymptotically flat initial data satisfying a global smallness condition produce global (causally geodesically complete) solutions asymptotically convergent to the Minkowski space-time.", "In this article, we study the coupling of the Einstein field equations of general relativity to a family of models of nonlinear electromagnetic fields. The family comprises all covariant electromagnetic models that satisfy the following criteria: they are derivable from a sufficiently regular Lagrangian, they reduce to the linear Maxwell model in the weak-field limit, and their corresponding energy-momentum tensors satisfy the dominant energy condition.
Our main result is a proof of the global nonlinear stability of the 1 + 3-dimensional Minkowski spacetime solution to the coupled system for any member of the family, which includes the linear Maxwell model. This stability result is a consequence of a small-data global existence result for a reduced system of equations that is equivalent to the original system in our wave coordinate gauge. Our analysis of the spacetime metric components is based on a framework recently developed by Lindblad and Rodnianski, which allows us to derive suitable estimates for tensorial systems of quasilinear wave equations with nonlinearities that satisfy the weak null condition. Our analysis of the electromagnetic fields, which satisfy quasilinear first-order equations, is based on an extension of a geometric energy-method framework developed by Christodoulou, together with a collection of pointwise decay estimates for the Faraday tensor developed in the article. We work directly with the electromagnetic fields, and thus avoid the use of electromagnetic potentials.", "", "This book consists of two independent works: Part I is 'Solutions of the Einstein Vacuum Equations', by Lydia Bieri. Part II is 'Solutions of the Einstein-Maxwell Equations', by Nina Zipser. A famous result of Christodoulou and Klainerman is the global nonlinear stability of Minkowski spacetime. In this book, Bieri and Zipser provide two extensions to this result. In the first part, Bieri solves the Cauchy problem for the Einstein vacuum equations with more general, asymptotically flat initial data, and describes precisely the asymptotic behavior. In particular, she assumes less decay in the power of @math and one less derivative than in the Christodoulou-Klainerman result. She proves that in this case, too, the initial data, being globally close to the trivial data, yields a solution which is a complete spacetime, tending to the Minkowski spacetime at infinity along any geodesic. 
In contrast to the original situation, certain estimates in this proof are borderline in view of decay, indicating that the conditions in the main theorem on the decay at infinity on the initial data are sharp. In the second part, Zipser proves the existence of smooth, global solutions to the Einstein-Maxwell equations. A nontrivial solution of these equations is a curved spacetime with an electromagnetic field. To prove the existence of solutions to the Einstein-Maxwell equations, Zipser follows the argument and methodology introduced by Christodoulou and Klainerman. To generalize the original results, she needs to contend with the additional curvature terms that arise due to the presence of the electromagnetic field @math ; in her case the Ricci curvature of the spacetime is not identically zero but rather represented by a quadratic in the components of @math . In particular the Ricci curvature is a constant multiple of the stress-energy tensor for @math . Furthermore, the traceless part of the Riemann curvature tensor no longer satisfies the homogeneous Bianchi equations but rather inhomogeneous equations including components of the spacetime Ricci curvature. Therefore, the second part of this book focuses primarily on the derivation of estimates for the new terms that arise due to the presence of the electromagnetic field.", "", "We consider quasilinear wave equations on manifolds for which infinity has a structure generalizing that of Kerr-de Sitter space; in particular the trapped geodesics form a normally hyperbolic invariant manifold. We prove the global existence and decay, to constants for the actual wave equation, of solutions. 
The key new ingredient compared to earlier work by the authors in the semilinear case [33] and by the first author in the non-trapping quasilinear case [30] is the use of the Nash-Moser iteration in our framework.", "A new proof of Friedrich’s theorem on the existence and stability of asymptotically de Sitter spaces in 3 + 1 dimensions is given, which extends to all even dimensions. In addition we characterize the possible limits of spaces which are globally asymptotically de Sitter, to the past and future.", "It is demonstrated that initial data sufficiently close to De-Sitter data develop into solutions of Einstein's equations Ric[g]=Λg with positive cosmological constant Λ, which are asymptotically simple in the past as well as in the future, whence null geodesically complete. Furthermore it is shown that hyperboloidal initial data (describing hypersurfaces which intersect future null infinity in a space-like two-sphere), which are sufficiently close to Minkowskian hyperboloidal data, develop into future asymptotically simple whence null geodesically future complete solutions of Einstein's equations Ric[g]=0, for which future null infinity forms a regular cone with vertexi+ that represents future time-like infinity." ] }
1502.03288
2254542549
In this paper, we consider the problem of efficiently representing a set @math of @math items out of a universe @math while supporting a number of operations on it. Let @math be the gap stream associated with @math , @math its bit-size when encoded with , and @math its empirical zero-order entropy. We prove that (1) @math if @math is highly compressible, and (2) @math . Let @math be the number of gap lengths between elements in @math . We firstly propose a new space-efficient zero-order compressed representation of @math taking @math bits of space. Then, we describe a fully-indexable dictionary that supports and queries in @math time while requiring asymptotically the same space as the proposed compressed representation of @math .
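The gap-stream idea in the abstract above can be illustrated with a minimal sketch. The function names and the plain-list encoding are mine for illustration; this is not the paper's compressed bit-level representation, only the underlying transform that makes small gaps (and hence dense sets) compressible:

```python
def gaps(sorted_set):
    """Gap stream of a strictly increasing integer sequence:
    the first element, then differences between consecutive elements."""
    out, prev = [], 0
    for x in sorted_set:
        out.append(x - prev)
        prev = x
    return out

def ungaps(gap_stream):
    """Invert the gap stream back to the original sorted set."""
    out, acc = [], 0
    for g in gap_stream:
        acc += g
        out.append(acc)
    return out

s = [3, 4, 7, 15, 16]
g = gaps(s)              # [3, 1, 3, 8, 1] -- small gaps code cheaply
assert ungaps(g) == s    # lossless round trip
```

A variable-length integer code applied to the gap stream then yields the kind of entropy-bounded space the abstract discusses.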
The on asks to maintain a (space-efficient) data structure called over a set @math , @math , supporting efficiently a range of queries on @math . In this problem, @math is an ordered set and is called . As shown by Jacobson in his doctoral thesis @cite_1 , a set of just two operations, and , is sufficient to derive other fundamental functionalities desired from such a structure: , , and . @math , with @math , is the number of elements in @math that are smaller than or equal to @math . @math , where @math , is the @math -th smallest element in @math . In this paper, we focus on , i.e., data structures supporting both rank and select operations efficiently.
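The rank/select interface just described can be sketched with a plain sorted array. This is a non-succinct strawman (Θ(n)-word space rather than the compressed bounds discussed above), and the class and method names are mine:

```python
from bisect import bisect_right

class RankSelectDict:
    """Sorted-array dictionary over a set S of integers.

    rank(x): number of stored elements <= x (as defined in the text).
    select(i): the i-th smallest element, 1-indexed.
    Illustrative only: a succinct structure would answer the same
    queries in compressed space.
    """

    def __init__(self, items):
        self._a = sorted(set(items))

    def rank(self, x):
        # Elements <= x form a prefix of the sorted array.
        return bisect_right(self._a, x)

    def select(self, i):
        # 1-indexed: select(1) is the minimum of S.
        if not 1 <= i <= len(self._a):
            raise IndexError("rank out of range")
        return self._a[i - 1]

    def member(self, x):
        # Membership follows from rank, as Jacobson's reduction shows.
        r = self.rank(x)
        return r > 0 and self._a[r - 1] == x

d = RankSelectDict([3, 9, 14, 27])
print(d.rank(14))   # 3 elements are <= 14
print(d.select(2))  # second smallest element -> 9
```

Predecessor and successor queries follow the same pattern: `select(rank(x))` gives the largest element `<= x` whenever `rank(x) > 0`.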
{ "cite_N": [ "@cite_1" ], "mid": [ "127947978" ], "abstract": [ "Data compression is when you take a big chunk of data and crunch it down to fit into a smaller space. That data is put on ice; you have to un-crunch the compressed data to get at it. Data optimization, on the other hand, is when you take a chunk of data plus a collection of operations you can perform on that data, and crunch it into a smaller space while retaining the ability to perform the operations efficiently. This thesis investigates the problem of data optimization for some fundamental static data types, concentrating on linked data structures such as trees. I chose to restrict my attention to static data structures because they are easier to optimize since the optimization can be performed off-line. Data optimization comes in two different flavors: concrete and abstract. Concrete optimization finds minimal representations within a given implementation of a data structure; abstract optimization seeks implementations with guaranteed economy of space and time. I consider the problem of concrete optimization of various pointer-based implementations of trees and graphs. The only legitimate use of a pointer is as a reference, so we are free to map the pieces of a linked structure into memory as we choose. The problem is to find a mapping that maximizes overlap of the pieces, and hence minimizes the space they occupy. I solve the problem of finding a minimal representation for general unordered trees where pointers to children are stored in a block of consecutive locations. The algorithm presented is based on weighted matching. I also present an analysis showing that the average number of cons-cells required to store a binary tree of n nodes as a minimal binary DAG is asymptotic to @math lg @math . Methods for representing trees of n nodes in @math ( @math ) bits that allow efficient tree-traversal are presented. 
I develop tools for abstract optimization based on a succinct representation for ordered sets that supports ranking and selection. These tools are put to use in building an @math ( @math )-bit data structure that represents n-node planar graphs, allowing efficient traversal and adjacency-testing." ] }
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to make recommendations to users about what to buy. State-of-the-art review-level sentiment classification techniques achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only achieve sentiment polarity labelling precisions of around 70%-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although the two have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Moreover, the framework can integrate various kinds of information sources and heuristics, while giving the globally optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves labelling precisions of up to 89%, a significant improvement over current approaches.
With the rapid growth of e-commerce, social networks and online discussion forums, the web has become rich in user-generated free-text data in which users express various attitudes towards products or events; this has attracted researchers to sentiment analysis @cite_36 @cite_2 . Sentiment analysis plays an important role in many applications, including opinion retrieval @cite_13 , word-of-mouth tracking @cite_25 , and opinion-oriented document summarization @cite_14 @cite_32 .
{ "cite_N": [ "@cite_14", "@cite_36", "@cite_32", "@cite_2", "@cite_13", "@cite_25" ], "mid": [ "2169218343", "66373487", "2160660844", "", "1965435478", "2087665422" ], "abstract": [ "In this paper we present an opinion summarization technique in spoken dialogue systems. Opinion mining has been well studied for years, but very few have considered its application in spoken dialogue systems. Review summarization, when applied to real dialogue systems, is much more complicated than pure text-based summarization. We conduct a systematic study on dialogue-system-oriented review analysis and propose a three-level framework for a recommendation dialogue system. In previous work we have explored a linguistic parsing approach to phrase extraction from reviews. In this paper we will describe an approach using statistical models such as decision trees and SVMs to select the most representative phrases from the extracted phrase set. We will also explain how to generate informative yet concise review summaries for dialogue purposes. Experimental results in the restaurant domain show that the proposed approach using decision tree algorithms achieves an outperformance of 13% compared to SVM models and an improvement of 36% over a heuristic rule baseline. Experiments also show that the decision-tree-based phrase selection model can achieve rather reliable predictions on the phrase label, comparable to human judgment. The proposed statistical approach is based on domain-independent learning features and can be extended to other domains effectively.", "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, attitudes, and emotions toward entities, individuals, issues, events, topics and their attributes. The task is technically challenging and practically very useful. For example, businesses always want to find public or consumer opinions about their products and services. 
Potential customers also want to know the opinions of existing users before they use a service or purchase a product.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "", "", "In this paper, we report research results investigating micro-blogging as a form of online word of mouth branding. 
We analyzed 149,472 micro-blog postings containing branding comments, sentiments, and opinions. We investigated the overall structure of these micro-blog postings, types of expressions, and sentiment fluctuations. Of the branding micro-blogs, nearly 20 percent contained some expressions of branding sentiments. Of these tweets with sentiments, more than 50 percent were positive and 33 percent critical of the company or product. We discuss the implications for organizations in using micro-blogging as part of their overall marketing strategy and branding campaigns." ] }
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to make recommendations to users about what to buy. State-of-the-art review-level sentiment classification techniques achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only achieve sentiment polarity labelling precisions of around 70%-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although the two have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Moreover, the framework can integrate various kinds of information sources and heuristics, while giving the globally optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves labelling precisions of up to 89%, a significant improvement over current approaches.
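The constrained-optimization idea in the abstract above can be sketched generically: nudge the initial phrase polarity scores as little as possible so that each review's aggregate polarity agrees with its review-level classification. Everything below (function names, the single mean-polarity constraint per review, the closed-form projection) is an illustrative assumption of mine, not the paper's actual convex framework:

```python
def relabel(x0, reviews, labels, margin=0.1):
    """Minimally adjust phrase polarity scores so that each review's
    mean phrase polarity agrees with its review-level label.

    x0      -- initial phrase scores, e.g. from a sentiment lexicon
    reviews -- tuples of phrase indices belonging to each review
    labels  -- +1 / -1 review-level sentiment from a classifier
    margin  -- how decisively the mean must match the label

    Illustrative only: a closed-form projection onto one half-space
    constraint per (disjoint) review, ignoring the box constraints a
    full convex formulation would also impose.
    """
    x = list(x0)
    for idx, y in zip(reviews, labels):
        mean = sum(x[i] for i in idx) / len(idx)
        deficit = margin - y * mean
        if deficit > 0:
            # Shifting every member phrase equally by y * deficit is
            # the Euclidean projection of x[idx] onto the half-space
            # { z : y * mean(z) >= margin }.
            for i in idx:
                x[i] += y * deficit
    return x

# Review 0 (phrases 0, 1) is labelled positive but its mean score is
# only 0.05, so both phrases are nudged up; review 1 already agrees
# with its negative label and is left untouched.
x = relabel([0.2, -0.1, 0.3, -0.6], [(0, 1), (2, 3)], [+1, -1])
```

Because the objective (squared distance to `x0`) is strictly convex and each constraint is linear, this kind of formulation has a unique global optimum, which is the property the abstract emphasizes.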
One of the core tasks in sentiment analysis is to determine the sentiment orientations that users express in reviews, sentences or on specific product features, corresponding to review(document)-level @cite_6 , sentence-level @cite_26 @cite_5 and phrase-level @cite_7 @cite_18 @cite_29 sentiment analysis.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_7", "@cite_29", "@cite_6", "@cite_5" ], "mid": [ "2131305515", "2014902591", "2022204871", "1964613733", "2166706824", "2132166724" ], "abstract": [ "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "This paper describes a corpus annotation project to study issues in the manual annotation of opinions, emotions, sentiments, speculations, evaluations and other private states in language. 
The resulting corpus annotation scheme is described, as well as examples of its use. In addition, the manual annotation process and the results of an inter-annotator agreement study on a 10,000-sentence corpus of articles drawn from the world press are presented.", "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. 
A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.", "In this paper, we present a dependency tree-based method for sentiment classification of Japanese and English subjective sentences using conditional random fields with hidden variables. Subjective sentences often contain words which reverse the sentiment polarities of other words. Therefore, interactions between words need to be considered in sentiment classification, which is difficult to be handled with simple bag-of-words approaches, and the syntactic dependency structures of subjective sentences are exploited in our method. In the method, the sentiment polarity of each dependency subtree in a sentence, which is not observable in training data, is represented by a hidden variable. The polarity of the whole sentence is calculated in consideration of interactions between the hidden variables. Sum-product belief propagation is used for inference. Experimental results of sentiment classification for Japanese and English subjective sentences showed that the method performs better than other methods based on bag-of-features." ] }
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to advise users about what to buy. State-of-the-art review-level sentiment classification techniques can achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only achieve sentiment polarity labelling precisions of around 70%-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although they have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Moreover, the framework is capable of integrating various kinds of information sources and heuristics, while giving the globally optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves high labelling precisions of up to 89%, which is a significant improvement over current approaches.
Review- and sentence-level sentiment analysis attempt to label a review or sentence with one of several predefined sentiment polarities, typically positive, negative and sometimes neutral @cite_36 . This task is referred to as Sentiment Classification, which has drawn much attention from the research community, and supervised @cite_6 @cite_8 @cite_43 @cite_21 @cite_39 @cite_15 , unsupervised @cite_30 @cite_32 @cite_0 @cite_3 @cite_16 @cite_33 and semi-supervised @cite_44 @cite_41 @cite_37 @cite_35 methods have all been investigated.
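As a sketch of the supervised side of this task, a minimal multinomial Naive Bayes classifier over bag-of-words features (one of the standard baselines in this literature) might look as follows; the tiny training set and the add-one smoothing constant are illustrative assumptions:

```python
import math
from collections import Counter

# Minimal multinomial Naive Bayes for supervised sentiment classification.
# The toy training set and Laplace (add-one) smoothing are illustrative.
train = [
    ("great movie loved the acting", "pos"),
    ("amazing plot and great cast", "pos"),
    ("boring plot and bad acting", "neg"),
    ("terrible movie waste of time", "neg"),
]

def fit(examples):
    counts = {"pos": Counter(), "neg": Counter()}  # per-class word counts
    priors = Counter()                             # per-class document counts
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, priors, vocab

def predict(text, counts, priors, vocab):
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(priors[label] / total)              # log prior
        denom = sum(counts[label].values()) + len(vocab)  # smoothed denominator
        for w in text.split():
            if w in vocab:  # ignore out-of-vocabulary words
                lp += math.log((counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict("great acting", *model))  # → pos
```

The unsupervised and semi-supervised lines of work cited above replace the labeled training set with seed lexicons, pointwise mutual information with seed words, or graph propagation over unlabeled reviews.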
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_33", "@cite_8", "@cite_36", "@cite_41", "@cite_21", "@cite_32", "@cite_6", "@cite_39", "@cite_0", "@cite_43", "@cite_3", "@cite_44", "@cite_15", "@cite_16" ], "mid": [ "2155328222", "2071085454", "5236451", "150150005", "120442290", "66373487", "101422005", "104703790", "2160660844", "2166706824", "1831460538", "2119188197", "2138260386", "2010163591", "2168816626", "2113459411", "2006386362" ], "abstract": [ "This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., \"subtle nuances\") and a negative semantic orientation when it has bad associations (e.g., \"very cavalier\"). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word \"excellent\" minus the mutual information between the given phrase and the word \"poor\". A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74 when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84 for automobile reviews to 66 for movie reviews.", "We present a graph-based semi-supervised learning algorithm to address the sentiment analysis task of rating inference. Given a set of documents (e.g., movie reviews) and accompanying ratings (e.g., \"4 stars\"), the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text. In particular, we are interested in the situation where labeled data is scarce. 
We place this task in the semi-supervised setting and demonstrate that considering unlabeled reviews in the learning process can improve rating-inference performance. We do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task. We then solve an optimization problem to obtain a smooth rating function over the whole graph. When only limited labeled data is available, this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training.", "Various semi-supervised learning methods have been proposed recently to solve the long-standing shortage problem of manually labeled data in sentiment classification. However, most existing studies assume the balance between negative and positive samples in both the labeled and unlabeled data, which may not be true in reality. In this paper, we investigate a more common case of semi-supervised learning for imbalanced sentiment classification. In particular, various random subspaces are dynamically generated to deal with the imbalanced class distribution problem. Evaluation across four domains shows the effectiveness of our approach.", "We address the problem of sentiment and objectivity classification of product reviews in Chinese. Our approach is distinctive in that it treats both positive negative sentiment and subjectivity objectivity not as distinct classes but rather as a continuum; we argue that this is desirable from the perspective of would-be customers who read the reviews. We use novel unsupervised techniques, including a one-word 'seed' vocabulary and iterative retraining for sentiment processing, and a criterion of 'sentiment density' for determining the extent to which a document is opinionated. 
The classifier achieves up to 87 F-measure for sentiment polarity detection.", "A hybrid digital phase locked loop is disclosed to recover an isochronous clock from a \"stuffed\" multiplexed input signal as found in an asynchronous PCM demultiplexer. A low frequency voltage controlled multivibrator is controlled by the output of a phase comparator. The phase comparator is coupled to the input signal and the output signal of a distributor. The distributor is controlled by the multivibrator to sequentially switch a multiphase output signal of a crystal oscillator to provide the output signal of the distributor. This arrangement overcomes the requirement of a voltage controlled crystal oscillator per channel group in the demultiplexer.", "Sentiment analysis or opinion mining is the computational study of people’s opinions, appraisals, attitudes, and emotions toward entities, individuals, issues, events, topics and their attributes. The task is technically challenging and practically very useful. For example, businesses always want to find public or consumer opinions about their products and services. Potential customers also want to know the opinions of existing users before they use a service or purchase a product.", "This paper presents a novel semi-supervised learning algorithm called Active Deep Networks (ADN), to address the semi-supervised sentiment classification problem with active learning. First, we propose the semi-supervised learning method of ADN. ADN is constructed by Restricted Boltzmann Machines (RBM) with unsupervised learning using labeled data and abundant of unlabeled data. Then the constructed structure is fine-tuned by gradient-descent based supervised learning with an exponential loss function. Second, we apply active learning in the semi-supervised learning framework to identify reviews that should be labeled as training data. Then ADN architecture is trained by the selected labeled data and all unlabeled data. 
Experiments on five sentiment classification datasets show that ADN outperforms the semi-supervised learning algorithm and deep learning techniques applied for sentiment classification.", "Evaluating text fragments for positive and negative subjective expressions and their strength can be important in applications such as single- or multi- document summarization, document ranking, data mining, etc. This paper looks at a simplified version of the problem: classifying online product reviews into positive and negative classes. We discuss a series of experiments with different machine learning algorithms in order to experimentally evaluate various trade-offs, using approximately 100K product reviews from the web.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. 
Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.", "This paper considers the problem of document-level multi-way sentiment detection, proposing a hierarchical classifier algorithm that accounts for the inter-class similarity of tagged sentiment-bearing texts. This type of classifier also provides a natural mechanism for reducing the feature space of the problem. Our results show that this approach improves on state-of-the-art predictive performance for movie reviews with three-star and four-star ratings, while simultaneously reducing training times and memory requirements.", "This paper presents a comparative study of three closely related Bayesian models for unsupervised document level sentiment classification, namely, the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. 
It has been found that while all the three models achieve either better or comparable performance on these two corpora when compared to the existing unsupervised sentiment classification approaches, both JST and Reverse-JST are able to extract sentiment-oriented topics. In addition, Reverse-JST always performs worse than JST suggesting that the JST model is more appropriate for joint sentiment topic detection.", "In this paper, we investigate structured models for document-level sentiment classification. When predicting the sentiment of a subjective document (e.g., as positive or negative), it is well known that not all sentences are equally discriminative or informative. But identifying the useful sentences automatically is itself a difficult learning problem. This paper proposes a joint two-level approach for document-level sentiment classification that simultaneously extracts useful (i.e., subjective) sentences and predicts document-level sentiment based on the extracted sentences. Unlike previous joint learning methods for the task, our approach (1) does not rely on gold standard sentence-level subjectivity annotations (which may be expensive to obtain), and (2) optimizes directly for document-level performance. Empirical evaluations on movie reviews and U.S. Congressional floor debates show improved performance over previous approaches.", "We describe and evaluate a new method of automatic seed word selection for un-supervised sentiment classification of product reviews in Chinese. The whole method is unsupervised and does not require any annotated training data; it only requires information about commonly occurring negations and adverbials. Unsupervised techniques are promising for this task since they avoid problems of domain-dependency typically associated with supervised methods. 
The results obtained are close to those of supervised classifiers and sometimes better, up to an F1 of 92 .", "Supervised polarity classification systems are typically domain-specific. Building these systems involves the expensive process of annotating a large amount of data for each domain. A potential solution to this corpus annotation bottleneck is to build unsupervised polarity classification systems. However, unsupervised learning of polarity is difficult, owing in part to the prevalence of sentimentally ambiguous reviews, where reviewers discuss both the positive and negative aspects of a product. To address this problem, we propose a semi-supervised approach to sentiment classification where we first mine the unambiguous reviews using spectral techniques and then exploit them to classify the ambiguous reviews via a novel combination of active learning, transductive learning, and ensemble learning.", "Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term--document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-level sentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. 
We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area.", "This paper presents the SELC Model (SElf-Supervised, (Lexicon-based and (Corpus-based Model) for sentiment classification. The SELC Model includes two phases. The first phase is a lexicon-based iterative process. In this phase, some reviews are initially classified based on a sentiment dictionary. Then more reviews are classified through an iterative process with a negative positive ratio control. In the second phase, a supervised classifier is learned by taking some reviews classified in the first phase as training data. Then the supervised classifier applies on other reviews to revise the results produced in the first phase. Experiments show the effectiveness of the proposed model. SELC totally achieves 6.63 F1-score improvement over the best result in previous studies on the same data (from 82.72 to 89.35 ). The first phase of the SELC Model independently achieves 5.90 improvement (from 82.72 to 88.62 ). Moreover, the standard deviation of F1-scores is reduced, which shows that the SELC Model could be more suitable for domain-independent sentiment classification." ] }
Phrase-level sentiment analysis aims to analyze the sentiment expressed by users at a finer granularity. It considers the sentiment expressed on specific product features or aspects @cite_32 . Perhaps one of the most important tasks in phrase-level sentiment analysis is the construction of a Sentiment Lexicon @cite_24 @cite_20 @cite_29 @cite_18 @cite_12 , i.e., extracting feature-opinion word pairs and their corresponding sentiment polarities from opinion-rich user-generated free texts. The construction of a high-quality sentiment lexicon benefits various tasks, for example, personalized recommendation @cite_42 @cite_11 @cite_1 and automatic review summarization @cite_32 @cite_24 .
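A minimal sketch of why such a lexicon must be context-dependent: polarity is keyed on (aspect, opinion-word) pairs, so a word like "large" can carry opposite polarities for different aspects, as in the laptop example discussed in the cited work. The lexicon entries and the naive nearest-adjective pairing rule here are illustrative assumptions, not any cited system's extraction method:

```python
# Context-dependent sentiment lexicon keyed on (aspect, opinion word),
# so the same word can flip polarity across aspects. The entries and
# the nearest-word pairing rule are illustrative assumptions.
LEXICON = {
    ("screen", "large"): "positive",
    ("battery", "large"): "negative",
    ("battery", "long"): "positive",
}

def label_pair(aspect, opinion_word):
    """Phrase-level polarity labelling via lexicon lookup."""
    return LEXICON.get((aspect, opinion_word), "neutral")

def extract_pairs(sentence, aspects):
    """Pair each known aspect noun with the word immediately before it."""
    words = sentence.lower().replace(",", "").split()
    pairs = []
    for i, w in enumerate(words):
        if w in aspects and i > 0:
            pairs.append((w, words[i - 1]))
    return pairs

sentence = "It has a large screen but also a large battery"
for aspect, word in extract_pairs(sentence, {"screen", "battery"}):
    print(aspect, word, label_pair(aspect, word))
# → screen large positive
# → battery large negative
```

The cited lexicon-construction work learns these (aspect, word) polarities automatically from unlabeled review corpora; the lookup and pairing above only illustrate how the resulting lexicon is consumed.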
{ "cite_N": [ "@cite_18", "@cite_29", "@cite_42", "@cite_1", "@cite_32", "@cite_24", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2131305515", "1964613733", "2152184085", "2046216022", "2160660844", "2084046180", "2001892351", "2141631351", "2251814753" ], "abstract": [ "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. 
In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "Collaborative Filtering(CF)-based recommendation algorithms, such as Latent Factor Models (LFM), work well in terms of prediction accuracy. However, the latent features make it difficulty to explain the recommendation results to the users. 
Fortunately, with the continuous growth of online user reviews, the information available for training a recommender system is no longer limited to just numerical star ratings or user item features. By extracting explicit user opinions about various aspects of a product from the reviews, it is possible to learn more details about what aspects a user cares, which further sheds light on the possibility to make explainable recommendations. In this work, we propose the Explicit Factor Model (EFM) to generate explainable recommendations, meanwhile keep a high prediction accuracy. We first extract explicit product features (i.e. aspects) and user opinions by phrase-level sentiment analysis on user reviews, then generate both recommendations and disrecommendations according to the specific product features to the user's interests and the hidden features learned. Besides, intuitional feature-level explanations about why an item is or is not recommended are generated from the model. Offline experimental results on several real-world datasets demonstrate the advantages of our framework over competitive baseline algorithms on both rating prediction and top-K recommendation tasks. Online experiments show that the detailed explanations make the recommendations and disrecommendations more influential on user's purchasing behavior.", "Previous research on Recommender Systems (RS), especially the continuously popular approach of Collaborative Filtering (CF), has been mostly focusing on the information resource of explicit user numerical ratings or implicit (still numerical) feedbacks. However, the ever-growing availability of textual user reviews has become an important information resource, where a wealth of explicit product attributes features and user attitudes sentiments are expressed therein. 
This information rich resource of textual reviews have clearly exhibited brand-new approaches to solving many of the important problems that have been perplexing the research community for years, such as the paradox of cold-start, the explanation of recommendation, and the automatic generation of user or item profiles. However, it is only recently that the fundamental importance of textual reviews has gained wide recognition, perhaps mainly because of the difficulty in formatting, structuring and analyzing the free-texts. In this research, we stress the importance of incorporating textual reviews for recommendation through phrase-level sentiment analysis, and further investigate the role that the texts play in various important recommendation tasks.", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. 
We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "Current approaches for contextual sentiment lexicon construction in phrase-level sentiment analysis assume that the numerical star rating of a review represents the overall sentiment orientation of the review text. Although widely adopted, we find through user rating analysis that this is not necessarily true. In this paper, we attempt to bridge the gap between phrase-level and review document-level sentiment analysis by leveraging the results given by review-level sentiment classification to boost phrase-level sentiment polarity labeling in contextual sentiment lexicon construction tasks, using a novel constrained convex optimization framework. 
Experimental results on both English and Chinese reviews show that our framework improves the precision of sentiment polarity labeling by up to 5.6 , which is a significant improvement from current approaches.", "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperform existing methods significantly.", "The frequently changing user preferences and or item profiles have put essential importance on the dynamic modeling of users and items in personalized recommender systems. 
However, due to the insufficiency of per-user/item records when splitting the already sparse data across the time dimension, previous methods have to restrict the drifting purchasing patterns to pre-assumed distributions, and were hardly able to model them directly with, for example, time series analysis. Integrating content information helps to alleviate the problem in practical systems, but the domain-dependent content knowledge is expensive to obtain due to the large amount of manual effort. In this paper, we make use of the large volume of textual reviews for the automatic extraction of domain knowledge, namely, the explicit features/aspects in a specific product domain. We thus degrade the product-level modeling of user preferences, which suffers from the lack of data, to feature-level modeling, which not only grants us the ability to predict user preferences through direct time series analysis, but also allows us to know the essence under the surface of product-level changes in purchasing patterns. Besides, the expanded feature space also helps to make cold-start recommendations for users with few purchasing records. Technically, we develop the Fourier-assisted Auto-Regressive Integrated Moving Average (FARIMA) process to tackle the year-long seasonal period of purchasing data to achieve daily-aware preference predictions, and we leverage the conditional opportunity models for daily-aware personalized recommendation. Extensive experimental results on real-world cosmetic purchasing data from a major e-commerce website (JD.com) in China verified both the effectiveness and efficiency of our approach." ] }
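The SO-CAL approach summarized in the abstracts above (dictionaries of words annotated with polarity and strength, plus intensification and negation) can be sketched as a minimal lexicon-based scorer. All dictionary entries and weights below are illustrative assumptions, not SO-CAL's actual lexicon, and the sign flip for negation is a simplification of SO-CAL's polarity shifts:

```python
# Minimal sketch of a lexicon-based semantic-orientation calculator:
# dictionary words carry polarity and strength, intensifiers scale the next
# sentiment word, and negators flip its sign.  All entries are made up.

LEXICON = {"good": 3, "great": 4, "bad": -3, "terrible": -5}
INTENSIFIERS = {"very": 1.5, "slightly": 0.5}
NEGATORS = {"not", "never"}

def so_score(tokens):
    """Average semantic orientation of the sentiment words in `tokens`."""
    total, count = 0.0, 0
    scale, negate = 1.0, False          # modifiers pending for the next hit
    for tok in tokens:
        tok = tok.lower()
        if tok in INTENSIFIERS:
            scale *= INTENSIFIERS[tok]
        elif tok in NEGATORS:
            negate = True
        elif tok in LEXICON:
            score = LEXICON[tok] * scale
            total += -score if negate else score
            count += 1
            scale, negate = 1.0, False  # modifiers are consumed
    return total / count if count else 0.0

print(so_score("a very good phone".split()))   # 4.5
print(so_score("not a good phone".split()))    # -3.0
```

Averaging over the sentiment words found makes the score length-invariant; a document-level polarity label is then just the sign of the score.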
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to advise users on what to buy. State-of-the-art review-level sentiment classification techniques can achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only reach sentiment polarity labelling precisions of around 70-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although they have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Besides, the framework is capable of integrating various kinds of information sources and heuristics, while giving the global optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves high labelling precisions of up to 89%, which is a significant improvement over current approaches.
Although some opinion words like "good" or "bad" usually express consistent sentiments in different cases, many others might have different sentiment polarities when accompanied by different feature words, which means that the sentiment lexicon is context-dependent @cite_7 .
{ "cite_N": [ "@cite_7" ], "mid": [ "2022204871" ], "abstract": [ "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline." ] }
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to advise users on what to buy. State-of-the-art review-level sentiment classification techniques can achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only reach sentiment polarity labelling precisions of around 70-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although they have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Besides, the framework is capable of integrating various kinds of information sources and heuristics, while giving the global optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves high labelling precisions of up to 89%, which is a significant improvement over current approaches.
Various kinds of information and heuristics can be used in the process of polarity labelling of the feature-opinion pairs. For example, it is often assumed that the overall sentiment orientation of a review is aggregated from all the feature-opinion pairs in it @cite_29 @cite_18 . Besides, some seed opinion words that express "fixed" sentiments are usually provided, and these are used to propagate the sentiment polarities of the other words @cite_32 @cite_18 . Some work takes advantage of linguistic heuristics @cite_22 @cite_27 @cite_18 . For example, two feature-opinion pairs connected by the conjunction "and" tend to have the same sentiment, while they might have opposite sentiments if connected by "but". This linguistic heuristic is further extended to sentential sentiment consistency in @cite_40 .
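The seed-propagation and conjunction heuristics described above can be sketched as a breadth-first traversal over a word graph: words joined by "and" are assumed to share a polarity, words joined by "but" take opposite polarities, and a few seed words anchor the propagation. The word graph below is made-up illustrative data, not from any cited paper:

```python
from collections import deque

def propagate(seeds, and_links, but_links):
    """Breadth-first propagation of +1/-1 polarities over conjunction links."""
    polarity = dict(seeds)
    queue = deque(seeds)
    signed = [(u, v, +1) for u, v in and_links] + \
             [(u, v, -1) for u, v in but_links]
    while queue:
        w = queue.popleft()
        for u, v, sign in signed:
            for src, dst in ((u, v), (v, u)):   # links are symmetric
                if src == w and dst not in polarity:
                    polarity[dst] = sign * polarity[src]
                    queue.append(dst)
    return polarity

seeds = {"good": +1, "bad": -1}
and_links = [("good", "sturdy"), ("sturdy", "reliable")]
but_links = [("reliable", "pricey")]
polarity = propagate(seeds, and_links, but_links)
print(polarity)  # "sturdy"/"reliable" inherit +1; "but" flips "pricey" to -1
```

A real lexicon builder would resolve conflicting propagation paths (e.g., by voting or by the optimization frameworks discussed in this section) rather than keeping the first label seen.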
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_29", "@cite_32", "@cite_27", "@cite_40" ], "mid": [ "2131305515", "2587281424", "1964613733", "2160660844", "187383899", "2086277751" ], "abstract": [ "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "An intraocular lens for implantation in an eye comprising an optic configured so that the optic can be deformed to permit the intraocular lens to be passed through an incision into the eye. 
A peripheral zone circumscribes the optical zone of the optic and one or more fixation members coupled to the peripheral zone and extending outwardly from the peripheral zone to retain the optic in the eye are provided. In one embodiment the fixation member or members are located so that the optical zone is free of such member or members. The peripheral zone preferably has a maximum axial thickness which is larger than the maximum axial thickness of the periphery of the optical zone.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. 
Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. 
Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "The explosion of social media services presents a great opportunity to understand the sentiment of the public via analyzing its large-scale and opinion-rich data. In social media, it is easy to amass vast quantities of unlabeled data, but very costly to obtain sentiment labels, which makes unsupervised sentiment analysis essential for various applications. It is challenging for traditional lexicon-based unsupervised methods due to the fact that expressions in social media are unstructured, informal, and fast-evolving. Emoticons and product ratings are examples of emotional signals that are associated with sentiments expressed in posts or words. Inspired by the wide availability of emotional signals in social media, we propose to study the problem of unsupervised sentiment analysis with emotional signals. In particular, we investigate whether the signals can potentially help sentiment analysis by providing a unified way to model two main categories of emotional signals, i.e., emotion indication and emotion correlation. We further incorporate the signals into an unsupervised learning framework for sentiment analysis. In the experiment, we compare the proposed framework with the state-of-the-art methods on two Twitter datasets and empirically evaluate our proposed framework to gain a deep understanding of the effects of emotional signals.", "This paper proposes an unsupervised lexicon building method for the detection of polar clauses, which convey positive or negative aspects in a specific domain. The lexical entries to be acquired are called polar atoms, the minimum human-understandable syntactic structures that specify the polarity of clauses. As a clue to obtain candidate polar atoms, we use context coherency, the tendency for same polarities to appear successively in contexts. 
Using the overall density and precision of coherency in the corpus, the statistical estimation picks up appropriate polar atoms among candidates, without any manual tuning of the threshold values. The experimental results show that the precision of polarity assignment with the automatically acquired lexicon was 94% on average, and our method is robust for corpora in diverse domains and for the size of the initial lexicon." ] }
1502.03322
1669910309
Sentiment analysis on user reviews helps to keep track of user reactions towards products and to advise users on what to buy. State-of-the-art review-level sentiment classification techniques can achieve precisions above 90%. However, current phrase-level sentiment analysis approaches only reach sentiment polarity labelling precisions of around 70-80%, which is far from satisfactory and restricts their application in many practical tasks. In this paper, we focus on the problem of phrase-level sentiment polarity labelling and attempt to bridge the gap between phrase-level and review-level sentiment analysis. We investigate the inconsistency between the numerical star ratings and the sentiment orientation of textual user reviews. Although they have long been treated as identical, which serves as a basic assumption in previous work, we find that this assumption is not necessarily true. We further propose to leverage the results of review-level sentiment classification to boost the performance of phrase-level polarity labelling using a novel constrained convex optimization framework. Besides, the framework is capable of integrating various kinds of information sources and heuristics, while giving the global optimal solution due to its convexity. Experimental results on both English and Chinese reviews show that our framework achieves high labelling precisions of up to 89%, which is a significant improvement over current approaches.
In this paper, we consider two main disadvantages of previous work. First, few of them combine various heuristics in a unified framework @cite_24 @cite_29 @cite_20 @cite_32 @cite_7 . Second, they simply use the numerical star rating as the overall sentiment polarity of the review text to supervise the process of phrase-level polarity labelling @cite_18 @cite_29 @cite_31 @cite_4 . In this work, we propose to boost phrase-level polarity labelling with review-level sentiment classification, while incorporating many of the commonly used heuristics in a unified framework.
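One way to picture such a unified framework is a single convex objective: seed words anchor their scores, "and"/"but" pairs are encouraged to agree/disagree, and the average phrase score is pulled toward the review-level sentiment. The toy sketch below (not the paper's actual formulation; all weights and the projected-gradient solver are illustrative assumptions) minimizes that objective over the box [-1, 1]:

```python
import numpy as np

def label_polarities(n, seeds, and_pairs, but_pairs, review_polarity,
                     lam=1.0, mu=1.0, steps=2000, lr=0.05):
    """Projected gradient descent on a convex polarity-labelling objective."""
    x = np.zeros(n)
    for _ in range(steps):
        g = np.zeros(n)
        for i, s in seeds.items():          # fit the seed scores
            g[i] += 2 * (x[i] - s)
        for i, j in and_pairs:              # "and": same polarity
            d = 2 * lam * (x[i] - x[j])
            g[i] += d; g[j] -= d
        for i, j in but_pairs:              # "but": opposite polarity
            d = 2 * lam * (x[i] + x[j])
            g[i] += d; g[j] += d
        # review-level supervision: pull the mean toward the review polarity
        g += 2 * mu * (x.mean() - review_polarity) / n
        x = np.clip(x - lr * g, -1.0, 1.0)  # project onto the box [-1, 1]
    return x

# 4 phrases: phrase 0 is a positive seed, 0-1 joined by "and", 1-2 by "but",
# and the review as a whole is negative.
x = label_polarities(4, seeds={0: 1.0}, and_pairs=[(0, 1)],
                     but_pairs=[(1, 2)], review_polarity=-1.0)
print(np.round(x, 2))
```

Because every term is convex and the box is a convex set, the iteration converges to the global optimum, mirroring the "global optimal solution due to its convexity" property claimed above.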
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_29", "@cite_32", "@cite_24", "@cite_31", "@cite_20" ], "mid": [ "2131305515", "2115023510", "2022204871", "1964613733", "2160660844", "2084046180", "", "2141631351" ], "abstract": [ "The explosion of Web opinion data has made essential the need for automatic tools to analyze and understand people's sentiments toward different topics. In most sentiment analysis applications, the sentiment lexicon plays a central role. However, it is well known that there is no universally optimal sentiment lexicon since the polarity of words is sensitive to the topic domain. Even worse, in the same domain the same word may indicate different polarities with respect to different aspects. For example, in a laptop review, \"large\" is negative for the battery aspect while being positive for the screen aspect. In this paper, we focus on the problem of learning a sentiment lexicon that is not only domain specific but also dependent on the aspect in context given an unlabeled opinionated text collection. We propose a novel optimization framework that provides a unified and principled way to combine different sources of information for learning such a context-dependent sentiment lexicon. Experiments on two data sets (hotel reviews and customer feedback surveys on printers) show that our approach can not only identify new sentiment words specific to the given domain but also determine the different polarities of a word depending on the aspect in context. In further quantitative evaluation, our method is proved to be effective in constructing a high quality lexicon by comparing with a human annotated gold standard. In addition, using the learned context-dependent sentiment lexicon improved the accuracy in an aspect-level sentiment classification task.", "The web contains a wealth of product reviews, but sifting through them is a daunting task. 
Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixed, good). We begin by identifying the unique properties of this problem and develop a method for automatically distinguishing between positive and negative reviews. Our classifier draws on information retrieval techniques for feature extraction and scoring, and the results for various metrics and heuristics vary depending on the testing situation. The best methods work as well as or better than traditional machine learning. When operating on individual sentences collected from web searches, performance is limited due to noise and ambiguity. But in the context of a complete web-based tool and aided by a simple method for grouping sentences into attributes, the results are qualitatively quite useful.", "This paper presents a new approach to phrase-level sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions. With this approach, the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions, achieving results that are significantly better than baseline.", "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) 
or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly", "Merchants selling products on the Web often ask their customers to review the products that they have purchased and the associated services. As e-commerce is becoming more and more popular, the number of customer reviews that a product receives grows rapidly. For a popular product, the number of reviews can be in hundreds or even thousands. This makes it difficult for a potential customer to read them to make an informed decision on whether to purchase the product. It also makes it difficult for the manufacturer of the product to keep track and to manage customer opinions. For the manufacturer, there are additional difficulties because many merchant sites may sell the same product and the manufacturer normally produces many kinds of products. In this research, we aim to mine and to summarize all the customer reviews of a product. 
This summarization task is different from traditional text summarization because we only mine the features of the product on which the customers have expressed their opinions and whether the opinions are positive or negative. We do not summarize the reviews by selecting a subset or rewrite some of the original sentences from the reviews to capture the main points as in the classic text summarization. Our task is performed in three steps: (1) mining product features that have been commented on by customers; (2) identifying opinion sentences in each review and deciding whether each opinion sentence is positive or negative; (3) summarizing the results. This paper proposes several novel techniques to perform these tasks. Our experimental results using reviews of a number of products sold online demonstrate the effectiveness of the techniques.", "We present a lexicon-based approach to extracting sentiment from text. The Semantic Orientation CALculator (SO-CAL) uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation. SO-CAL is applied to the polarity classification task, the process of assigning a positive or negative label to a text that captures the text's opinion towards its main subject matter. We show that SO-CAL's performance is consistent across domains and in completely unseen data. Additionally, we describe the process of dictionary creation, and our use of Mechanical Turk to check dictionaries for consistency and reliability.", "", "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sites containing such opinions, e.g., customer reviews of products, forums, discussion groups, and blogs. This paper focuses on online customer reviews of products. It makes two contributions. First, it proposes a novel framework for analyzing and comparing consumer opinions of competing products. 
A prototype system called Opinion Observer is also implemented. The system is such that with a single glance of its visualization, the user is able to clearly see the strengths and weaknesses of each product in the minds of consumers in terms of various product features. This comparison is useful to both potential customers and product manufacturers. For a potential customer, he/she can see a visual side-by-side and feature-by-feature comparison of consumer opinions on these products, which helps him/her to decide which product to buy. For a product manufacturer, the comparison enables it to easily gather marketing intelligence and product benchmarking information. Second, a new technique based on language pattern mining is proposed to extract product features from Pros and Cons in a particular type of reviews. Such features form the basis for the above comparison. Experimental results show that the technique is highly effective and outperforms existing methods significantly." ] }
1502.02973
2949715761
The rapid development of signal processing on graphs provides a new perspective for processing large-scale data associated with irregular domains. In many practical applications, it is necessary to handle massive data sets through complex networks, in which most nodes have limited computing power. Designing efficient distributed algorithms is critical for this task. This paper focuses on the distributed reconstruction of a time-varying bandlimited graph signal based on observations sampled at a subset of selected nodes. A distributed least square reconstruction (DLSR) algorithm is proposed to recover the unknown signal iteratively, by allowing neighboring nodes to communicate with one another and make fast updates. DLSR uses a decay scheme to annihilate the out-of-band energy occurring in the reconstruction process, which is inevitably caused by the transmission delay in distributed systems. Proof of convergence and error bounds for DLSR are provided in this paper, suggesting that the algorithm is able to track time-varying graph signals and perfectly reconstruct time-invariant signals. The DLSR algorithm is numerically evaluated on synthetic data and real-world sensor network data, verifying its ability to track slowly time-varying graph signals.
Some theoretical results have been established for the sampling problem of bandlimited graph-based signals; see e.g., @cite_30 @cite_24 @cite_20 . The relation between the sample size necessary to obtain unique reconstruction and the cutoff frequency of the bandlimited signal space has been studied. Similar to classical results on time-domain irregular sampling, the idea of a "frame" has been introduced for graph signal processing. Unique reconstruction conditions have been derived for normalized and unnormalized Laplacians @cite_30 @cite_43 . As the field of graph signal processing is rapidly developing, we summarize some recent related works as follows. A least-squares approach has been proposed in @cite_27 to reconstruct a bandlimited graph signal from signal values observed on sampled vertices, using a centralized algorithm. An iterative method of bandlimited graph signal reconstruction has been proposed in @cite_45 , with the practical consideration of balancing a tradeoff between smoothness and data-fitting. Two more efficient iterative reconstruction methods using the local set have been considered in @cite_2 @cite_38 . A necessary and sufficient condition for perfect reconstruction of a bandlimited graph signal has been derived in @cite_0 . Readers may refer to Section for more details of related works.
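The iterative schemes cited above share a common skeleton: alternate between enforcing consistency with the observed samples and projecting onto the low-frequency (bandlimited) subspace spanned by the first K Laplacian eigenvectors. A minimal sketch of that idea follows; the ring graph, sampling set, and plain alternating-projection iteration are made-up illustrative choices, not any specific cited algorithm:

```python
import numpy as np

def ring_laplacian(n):
    """Combinatorial Laplacian of an n-vertex ring graph."""
    L = 2 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] = L[i, (i - 1) % n] = -1
    return L

n, K = 20, 3
L = ring_laplacian(n)
_, U = np.linalg.eigh(L)          # eigenvalues ascending, so U[:, :K] is
B = U[:, :K]                      # the lowest-frequency (bandlimited) basis
P = B @ B.T                       # orthogonal projector onto that subspace

rng = np.random.default_rng(0)
x_true = P @ rng.standard_normal(n)   # a random bandlimited signal
sampled = np.arange(0, n, 2)          # observe every other vertex
y = np.zeros(n)
y[sampled] = x_true[sampled]

x = np.zeros(n)
for _ in range(500):
    x[sampled] = y[sampled]       # enforce consistency with the samples
    x = P @ x                     # project back onto the bandlimited space

print(np.max(np.abs(x - x_true)))  # error shrinks toward 0
```

When the sampling set is a uniqueness set for the cutoff frequency (here K = 3 frequencies against 10 samples), the two constraint sets intersect only at the true signal and the iteration converges to it geometrically, which is the behavior the frame-based analyses above formalize.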
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_24", "@cite_43", "@cite_27", "@cite_45", "@cite_0", "@cite_2", "@cite_20" ], "mid": [ "2024457004", "2030643321", "2014368295", "", "2119699244", "2952679970", "2030319053", "2030368230", "" ], "abstract": [ "A notion of Paley-Wiener spaces on combinatorial graphs is introduced. It is shown that functions from some of these spaces are uniquely determined by their values on some sets of vertices which are called the uniqueness sets. Such uniqueness sets are described in terms of Poincare-Wirtinger-type inequalities. A reconstruction algorithm of Paley-Wiener functions from uniqueness sets which uses the idea of frames in Hilbert spaces is developed. Special consideration is given to the n-dimensional lattice, homogeneous trees, and eigenvalue and eigenfunction problems on finite graphs.", "Signal processing on graph is attracting more and more attentions. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed through the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signal from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals in their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate the new methods lead to a significantly faster convergence compared with the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future works on graph signals. 
Computer simulations are conducted. The experimental results demonstrate the effectiveness of the reconstruction methods in various sampling geometries, imprecise a priori knowledge of the cutoff frequency, and noisy scenarios.", "Notions of interpolating variational splines and Paley–Wiener spaces are introduced on a combinatorial graph G. Both of these definitions explore existence of a combinatorial Laplace operator on G. The existence and uniqueness of interpolating variational splines on a graph is shown. As an application of variational splines, the paper presents a reconstruction algorithm of Paley–Wiener functions on graphs from their uniqueness sets.", "", "In this paper, we propose a novel algorithm to interpolate data defined on graphs, using signal processing concepts. The interpolation of missing values from known samples appears in various applications, such as matrix/vector completion, sampling of high-dimensional data, semi-supervised learning etc. In this paper, we formulate the data interpolation problem as a signal reconstruction problem on a graph, where a graph signal is defined as the information attached to each node (scalar or vector values mapped to the set of vertices/edges of the graph). We use recent results for sampling in graphs to find classes of bandlimited (BL) graph signals that can be reconstructed from their partially observed samples. The interpolated signal is obtained by projecting the input signal into the appropriate BL graph signal space. Additionally, we impose a "bilateral" weighting scheme on the links between known samples, which further improves accuracy. We use our proposed method for collaborative filtering in recommendation systems.
Preliminary results show a very favorable trade-off between accuracy and complexity, compared to state-of-the-art algorithms.", "In this paper, we present two localized graph filtering based methods for interpolating graph signals defined on the vertices of arbitrary graphs from only a partial set of samples. The first method is an extension of previous work on reconstructing bandlimited graph signals from partially observed samples. The iterative graph filtering approach very closely approximates the solution proposed in that work, while being computationally more efficient. As an alternative, we propose a regularization based framework in which we define the cost of reconstruction to be a combination of smoothness of the graph signal and the reconstruction error with respect to the known samples, and find solutions that minimize this cost. We provide both a closed form solution and a computationally efficient iterative solution of the optimization problem. The experimental results on the recommendation system datasets demonstrate the effectiveness of the proposed methods.", "In this paper, we extend the Nyquist-Shannon theory of sampling to signals defined on arbitrary graphs. Using spectral graph theory, we establish a cut-off frequency for all bandlimited graph signals that can be perfectly reconstructed from samples on a given subset of nodes. The result is analogous to the concept of Nyquist frequency in traditional signal processing. We consider practical ways of computing this cut-off and show that it is an improvement over previous results. We also propose a greedy algorithm to search for the smallest possible sampling set that guarantees unique recovery for a signal of given bandwidth. The efficacy of these results is verified through simple examples.", "Signal processing on graph is attracting more and more attention. 
For a graph signal in the low-frequency subspace, the missing data on the vertices of the graph can be reconstructed from the sampled data by exploiting the smoothness of the graph signal. In this paper, two iterative methods are proposed to reconstruct a bandlimited graph signal from sampled data. In each iteration, one of the proposed methods weights the sampled residual for different vertices, while the other conducts a limited propagation operation. Both methods are proved to converge to the original signal under certain conditions. The proposed methods lead to significantly faster convergence compared with the baseline method. Experimental results on synthetic graph signals and real-world data demonstrate the effectiveness of the reconstruction methods.", "" ] }
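The bandlimited-interpolation idea shared by these abstracts can be made concrete with a small numeric sketch. The following is an illustration only, not any cited paper's exact algorithm: a signal spanned by the first K Laplacian eigenvectors is recovered from samples on a uniqueness set by least squares.

```python
import numpy as np

# Illustration only (not any cited paper's exact algorithm): reconstruct a
# bandlimited graph signal from samples on a subset of vertices by
# least-squares projection onto the low-frequency Laplacian eigenspace.

# Path graph on 6 vertices; combinatorial Laplacian L = D - A.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigenvectors of L ordered by eigenvalue (frequency).
_, U = np.linalg.eigh(L)

K = 2                                     # bandwidth: first K graph frequencies
x_true = U[:, :K] @ np.array([1.0, 0.5])  # signal bandlimited to those modes

S = [0, 2, 5]                             # sampling set (a uniqueness set here)
y = x_true[S]                             # observed samples

# Least-squares fit of the bandlimited coefficients to the known samples.
coeffs, *_ = np.linalg.lstsq(U[S, :K], y, rcond=None)
x_hat = U[:, :K] @ coeffs

assert np.allclose(x_hat, x_true)         # exact recovery on a uniqueness set
```

Recovery is exact precisely when the rows of the low-frequency eigenvector block indexed by the sampling set have full column rank, which is the uniqueness-set condition the sampling-theory papers above formalize.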
1502.02973
2949715761
The rapid development of signal processing on graphs provides a new perspective for processing large-scale data associated with irregular domains. In many practical applications, it is necessary to handle massive data sets through complex networks, in which most nodes have limited computing power. Designing efficient distributed algorithms is critical for this task. This paper focuses on the distributed reconstruction of a time-varying bandlimited graph signal based on observations sampled at a subset of selected nodes. A distributed least square reconstruction (DLSR) algorithm is proposed to recover the unknown signal iteratively, by allowing neighboring nodes to communicate with one another and make fast updates. DLSR uses a decay scheme to annihilate the out-of-band energy occurring in the reconstruction process, which is inevitably caused by the transmission delay in distributed systems. Proof of convergence and error bounds for DLSR are provided in this paper, suggesting that the algorithm is able to track time-varying graph signals and perfectly reconstruct time-invariant signals. The DLSR algorithm is numerically evaluated on synthetic data and real-world sensor network data, which verifies its ability to track slowly time-varying graph signals.
Signal processing on graphs is naturally related to distributed systems. For large-scale systems lacking a central controller, e.g., sensor networks, distributed estimation and tracking @cite_35 is an important topic. Algorithmic frameworks for distributed regression @cite_13 and inference @cite_14 have been studied to fit global functions based on local measurements in sensor networks. Consensus-based methods have been proposed in @cite_15 @cite_42 to distributively compute the maximum likelihood estimate of unknown parameters. Distributed Kalman filtering has been introduced in @cite_11 for target tracking in sensor networks. Diffusion RLS @cite_34 and LMS @cite_17 algorithms have been proposed for distributed estimation over adaptive networks. To the best of the authors' knowledge, there have been few works on the distributed reconstruction of bandlimited graph signals. A related work @cite_19 proposes an approximation method that calculates the graph Fourier multipliers distributively, which we will discuss in subsequent sections.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_42", "@cite_17", "@cite_19", "@cite_15", "@cite_34", "@cite_13", "@cite_11" ], "mid": [ "2129078811", "2110048879", "2108970807", "", "1976257805", "2098863504", "", "2165004589", "" ], "abstract": [ "", "Many inference problems that arise in sensor networks require the computation of a global conclusion that is consistent with local information known to each node. A large class of these problems---including probabilistic inference, regression, and control problems---can be solved by message passing on a data structure called a junction tree. In this paper, we present a distributed architecture for solving these problems that is robust to unreliable communication and node failures. In this architecture, the nodes of the sensor network assemble themselves into a junction tree and exchange messages between neighbors to solve the inference problem efficiently and exactly. A key part of the architecture is an efficient distributed algorithm for optimizing the choice of junction tree to minimize the communication and computation required by inference. We present experimental results from a prototype implementation on a 97-node Mica2 mote network, as well as simulation results for three applications: distributed sensor calibration, optimal control, and sensor field modeling. These experiments demonstrate that our distributed architecture can solve many important inference problems exactly, efficiently, and robustly.", "We deal with distributed estimation of deterministic vector parameters using ad hoc wireless sensor networks (WSNs). We cast the decentralized estimation problem as the solution of multiple constrained convex optimization subproblems. Using the method of multipliers in conjunction with a block coordinate descent approach we demonstrate how the resultant algorithm can be decomposed into a set of simpler tasks suitable for distributed implementation. 
Different from existing alternatives, our approach does not require the centralized estimator to be expressible in a separable closed form in terms of averages, thus allowing for decentralized computation even of nonlinear estimators, including maximum likelihood estimators (MLE) in nonlinear and non-Gaussian data models. We prove that these algorithms have guaranteed convergence to the desired estimator when the sensor links are assumed ideal. Furthermore, our decentralized algorithms exhibit resilience in the presence of receiver and/or quantization noise. In particular, we introduce a decentralized scheme for least-squares and best linear unbiased estimation (BLUE) and establish its convergence in the presence of communication noise. Our algorithms also exhibit potential for higher convergence rate with respect to existing schemes. Corroborating simulations demonstrate the merits of the novel distributed estimation algorithms.", "", "Unions of graph Fourier multipliers are an important class of linear operators for processing signals defined on graphs. We present a novel method to efficiently distribute the application of these operators to the high-dimensional signals collected by sensor networks. The proposed method features approximations of the graph Fourier multipliers by shifted Chebyshev polynomials, whose recurrence relations make them readily amenable to distributed computation. We demonstrate how the proposed method can be used in a distributed denoising task, and show that the communication requirements of the method scale gracefully with the size of the network.", "We consider a network of distributed sensors, where each sensor takes a linear measurement of some unknown parameters, corrupted by independent Gaussian noises. We propose a simple distributed iterative scheme, based on distributed average consensus in the network, to compute the maximum-likelihood estimate of the parameters. 
This scheme doesn't involve explicit point-to-point message passing or routing; instead, it diffuses information across the network by updating each node's data with a weighted average of its neighbors' data (they maintain the same data structure). At each step, every node can compute a local weighted least-squares estimate, which converges to the global maximum-likelihood solution. This scheme is robust to unreliable communication links. We show that it works in a network with dynamically changing topology, provided that the infinitely occurring communication graphs are jointly connected.", "", "We present distributed regression, an efficient and general framework for in-network modeling of sensor data. In this framework, the nodes of the sensor network collaborate to optimally fit a global function to each of their local measurements. The algorithm is based upon kernel linear regression, where the model takes the form of a weighted sum of local basis functions; this provides an expressive yet tractable class of models for sensor network data. Rather than transmitting data to one another or outside the network, nodes communicate constraints on the model parameters, drastically reducing the communication required. After the algorithm is run, each node can answer queries for its local region, or the nodes can efficiently transmit the parameters of the model to a user outside the network. We present an evaluation of the algorithm based upon data from a 48-node sensor network deployment at the Intel Research - Berkeley Lab, demonstrating that our distributed algorithm converges to the optimal solution at a fast rate and is very robust to packet losses.", "" ] }
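The iterative reconstruction that DLSR distributes can be illustrated in centralized form. The sketch below is a simplification under assumed parameters (toy graph, assumed bandwidth), not the paper's distributed protocol: it alternates a correction on the sampled vertices with a projection onto the low-frequency subspace, which annihilates out-of-band energy much as DLSR's decay scheme does.

```python
import numpy as np

# Centralized sketch of the iterative least-squares reconstruction that DLSR
# distributes (a simplification for illustration, not the paper's protocol).

# Path graph on 6 vertices; combinatorial Laplacian and its eigenbasis.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)

K = 2                           # assumed bandwidth (first K graph frequencies)
P = U[:, :K] @ U[:, :K].T       # projector onto the bandlimited subspace

x_true = U[:, :K] @ np.array([1.0, -0.3])
S = np.array([0, 2, 5])         # sampling set
y = x_true[S]

x = np.zeros(6)
for _ in range(200):
    r = np.zeros(6)
    r[S] = y - x[S]             # residual on the sampled vertices only
    x = P @ (x + r)             # correct, then re-project: out-of-band
                                # energy is annihilated by the projection

assert np.max(np.abs(x - x_true)) < 1e-6
```

The iteration contracts whenever the sampling set is a uniqueness set for the bandlimited subspace; the distributed variants replace the global projection with local filtering and neighbor communication.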
1502.03407
2952376543
Social networking services are increasingly accessed through mobile devices. This trend has prompted services such as Facebook and Google+ to incorporate location as a de facto feature of user interaction. At the same time, services based on location such as Foursquare and Shopkick are also growing as smartphone market penetration increases. In fact, this growth is happening despite concerns (growing at a similar pace) about security and third-party use of private location information (e.g., for advertising). Nevertheless, service providers have been unwilling to build truly private systems in which they do not have access to location information. In this paper, we describe an architecture and a trial implementation of a privacy-preserving location sharing system called Albatross. The system protects location information from the service provider and yet enables fine-grained location sharing. One main feature of the system is to protect an individual's social network structure. The pattern of location sharing preferences towards contacts can reveal this structure without any knowledge of the locations themselves. Albatross protects location sharing preferences through protocol unification and masking. Albatross has been implemented as a standalone solution, but the technology can also be integrated into location-based services to enhance privacy.
Existing production location sharing systems with a trusted server include Foursquare @cite_24 , Google Latitude @cite_5 , Apple's Find My Friends @cite_20 , and Glympse @cite_9 . There has been work on location sharing systems that enhance privacy through an untrusted server. The work by @cite_0 concentrates on a particular type of location sharing, the “nearby” sharing granularity. We aim for a more complete system that includes different granularities of location sharing and address the issue of how to hide these granularities from the server. @cite_14 , @cite_17 @cite_2 , and @cite_28 also focus on the “nearby” sharing granularity. Private versions of other types of social networking systems with an untrusted server have also been studied. @cite_21 studied micro-blogs, specifically Twitter. Their system is analogous to ours in that they attempt to provide the expected core micro-blogging services, and yet hide tweets, hashtags, and followers from the server. @cite_7 study presence systems, in which a user's presence status (“in building”, “in office”, “has visitor”, etc.) is hidden from the server. There is a large body of work studying recommender systems with untrusted servers; early work in this area was performed by Canny @cite_3 .
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_28", "@cite_9", "@cite_21", "@cite_3", "@cite_24", "@cite_0", "@cite_2", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "2079154980", "2112141762", "2035921576", "", "2068220504", "2069388449", "", "2394957549", "", "", "", "1560236469" ], "abstract": [ "We present a 3-week user study in which we tracked the locations of 27 subjects and asked them to rate when, where, and with whom they would have been comfortable sharing their locations. The results of analysis conducted on over 7,500 h of data suggest that the user population represented by our subjects has rich location-privacy preferences, with a number of critical dimensions, including time of day, day of week, and location. We describe a methodology for quantifying the effects, in terms of accuracy and amount of information shared, of privacy-setting types with differing levels of complexity (e.g., setting types that allow users to specify location- and or time-based rules). Using the detailed preferences we collected, we identify the best possible policy (or collection of rules granting access to one's location) for each subject and privacy-setting type. We measure the accuracy with which the resulting policies are able to capture our subjects' preferences under a variety of assumptions about the sensitivity of the information and user-burden tolerance. One practical implication of our results is that today's location-sharing applications may have failed to gain much traction due to their limited privacy settings, as they appear to be ineffective at capturing the preferences revealed by our study.", "As sensors become ever more prevalent, more and more information will be collected about each of us. A long-term research question is how best to support beneficial uses while preserving individual privacy. Presence systems are an emerging class of applications that support collaboration. 
These systems leverage pervasive sensors to estimate end-user location, activities, and available communication channels. Because such presence data are sensitive, to achieve widespread adoption, sharing models must reflect the privacy and sharing preferences of the users. To reflect users' collaborative relationships and sharing desires, we introduce CollaPSE security, in which an individual has full access to her own data, a third party processes the data without learning anything about the data values, and users higher up in the hierarchy learn only statistical information about the employees under them. We describe simple schemes that efficiently realize CollaPSE security for time series data. We implemented these protocols using readily available cryptographic functions, and integrated the protocols with FXPAL's myUnity presence system.", "We present a solution which improves the level of privacy possible in location based services (LBS). A core component of LBS is proximity testing of users. Alice wants to know if she is near to Bob (or generally some location). The presented solution supports private proximity testing and is actively secure, meaning it prevents a number of attacks possible in existing protocols for private proximity testing. We demonstrate that the improved security provided only implies a factor of two penalty on execution time compared to an existing passively secure protocol. We also provide a security analysis and discuss the relevance of secure multiparty computation for location based services.", "", "In the last several years, micro-blogging Online Social Networks (OSNs), such as Twitter, have taken the world by storm, now boasting over 100 million subscribers. As an unparalleled stage for an enormous audience, they offer fast and reliable centralized diffusion of pithy tweets to great multitudes of information-hungry and always-connected followers. 
At the same time, this information gathering and dissemination paradigm prompts some important privacy concerns about relationships between tweeters, followers and interests of the latter. In this paper, we assess privacy in today's Twitter-like OSNs and describe an architecture and a trial implementation of a privacy-preserving service called Hummingbird. It is essentially a variant of Twitter that protects tweet contents, hash tags and follower interests from the (potentially) prying eyes of the centralized server. We argue that, although inherently limited by Twitter's mission of scalable information-sharing, this degree of privacy is valuable. We demonstrate, via a working prototype, that Hummingbird's additional costs are tolerably low. We also sketch out some viable enhancements that might offer better privacy in the long term.", "To usefully query a location-based service, a mobile device must typically present its own location in its query to the server. This may not be acceptable to clients that wish to protect the privacy of their location. This paper presents the design and implementation of SybilQuery, a fully decentralized and autonomous k-anonymization-based scheme to privately query location-based services. SybilQuery is a client-side tool that generates k-1 Sybil queries for each query by the client. The location-based server is presented with a set of k queries and is unable to distinguish between the client's query and the Sybil queries, thereby achieving k-anonymity. We tested our implementation of SybilQuery on real mobility traces of approximately 500 cabs in the San Francisco Bay area. Our experiments show that SybilQuery can efficiently generate Sybil queries and that these queries are indistinguishable from real queries.", "", "We study privacy-preserving tests for proximity: Alice can test if she is close to Bob without either party revealing any other information about their location. 
We describe several secure protocols that support private proximity testing at various levels of granularity. We study the use of “location tags” generated from the physical environment in order to strengthen the security of proximity testing. We implemented our system on the Android platform and report on its effectiveness. Our system uses a social network (Facebook) to manage user public keys.", "", "", "", "A \"friend finder\" is a Location Based Service (LBS) that informs users about the presence of participants in a geographical area. In particular, one of the functionalities of this kind of application reveals the users that are in proximity. Several implementations of the friend finder service already exist but, to the best of our knowledge, none of them provides a satisfactory technique to protect users' privacy. While several techniques have been proposed to protect users' privacy for other types of spatial queries, these techniques are not appropriate for range queries over moving objects, like those used in friend finders. Solutions based on cryptography in decentralized architectures have been proposed, but we show that a centralized service has several advantages in terms of communication costs, in addition to supporting current business models. In this paper, we propose a privacy-aware centralized solution based on an efficient three-party secure computation protocol, named Longitude. The protocol allows a user to know if any of her contacts is close-by without revealing any location information to the service provider. The protocol also ensures that user-defined minimum privacy requirements with respect to the location information revealed to other buddies are satisfied. Finally, we present an extensive experimental work that shows the applicability of the proposed technique and the advantages over alternative proposals." ] }
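A building block common to the “nearby” granularity in several of these systems is reducing proximity to equality of quantized location cells. The sketch below is illustrative only, not Albatross's or any cited paper's actual protocol: the grid size and salting are invented here, and a real deployment would compare the tokens with a cryptographic private-equality test, since salted hashes over a small grid are brute-forceable.

```python
import hashlib
import secrets

# Illustrative building block only (NOT any cited system's actual protocol):
# reduce "nearby" testing to an equality test on quantized grid cells.

CELL_DEG = 0.01  # assumed grid resolution in degrees (roughly 1 km)

def cell_token(lat: float, lon: float, shared_salt: bytes) -> str:
    """Hash the quantized cell so equal cells yield equal opaque tokens."""
    cell = (round(lat / CELL_DEG), round(lon / CELL_DEG))
    return hashlib.sha256(shared_salt + repr(cell).encode()).hexdigest()

salt = secrets.token_bytes(16)               # shared between the two friends
alice = cell_token(37.7749, -122.4194, salt)
bob = cell_token(37.7742, -122.4190, salt)   # a few hundred meters away
carol = cell_token(40.7128, -74.0060, salt)  # a different city

assert alice == bob    # same cell -> "nearby"
assert alice != carol  # different cell -> tokens share nothing
```

Quantization also illustrates why granularity matters in these systems: coarsening `CELL_DEG` trades location precision for privacy, which is exactly the dial the sharing-preference studies above examine.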
1502.03407
2952376543
Social networking services are increasingly accessed through mobile devices. This trend has prompted services such as Facebook and Google+ to incorporate location as a de facto feature of user interaction. At the same time, services based on location such as Foursquare and Shopkick are also growing as smartphone market penetration increases. In fact, this growth is happening despite concerns (growing at a similar pace) about security and third-party use of private location information (e.g., for advertising). Nevertheless, service providers have been unwilling to build truly private systems in which they do not have access to location information. In this paper, we describe an architecture and a trial implementation of a privacy-preserving location sharing system called Albatross. The system protects location information from the service provider and yet enables fine-grained location sharing. One main feature of the system is to protect an individual's social network structure. The pattern of location sharing preferences towards contacts can reveal this structure without any knowledge of the locations themselves. Albatross protects location sharing preferences through protocol unification and masking. Albatross has been implemented as a standalone solution, but the technology can also be integrated into location-based services to enhance privacy.
Location privacy in general is a well-studied topic. Anonymity and obfuscation of location data are one area of concentration; Krumm @cite_26 provides a good survey of this area. Another area of study is user preferences for location sharing; for instance, see @cite_15 @cite_23 .
{ "cite_N": [ "@cite_15", "@cite_26", "@cite_23" ], "mid": [ "2031573294", "2170166043", "" ], "abstract": [ "The rapid adoption of location tracking and mobile social networking technologies raises significant privacy challenges. Today our understanding of people's location sharing privacy preferences remains very limited, including how these preferences are impacted by the type of location tracking device or the nature of the locations visited. To address this gap, we deployed Locaccino, a mobile location sharing system, in a four week long field study, where we examined the behavior of study participants (n=28) who shared their location with their acquaintances (n=373.) Our results show that users appear more comfortable sharing their presence at locations visited by a large and diverse set of people. Our study also indicates that people who visit a wider number of places tend to also be the subject of a greater number of requests for their locations. Over time these same people tend to also evolve more sophisticated privacy preferences, reflected by an increase in time- and location-based restrictions. We conclude by discussing the implications our findings.", "This is a literature survey of computational location privacy, meaning computation-based privacy mechanisms that treat location data as geometric information. This definition includes privacy-preserving algorithms like anonymity and obfuscation as well as privacy-breaking algorithms that exploit the geometric nature of the data. The survey omits non-computational techniques like manually inspecting geotagged photos, and it omits techniques like encryption or access control that treat location data as general symbols. The paper reviews studies of peoples' attitudes about location privacy, computational threats on leaked location data, and computational countermeasures for mitigating these threats.", "" ] }
1502.02791
2951670162
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
A related literature is transfer learning @cite_17 , which builds models that bridge different domains or tasks, explicitly taking domain discrepancy into consideration. Transfer learning aims to mitigate the effort of manual labeling for machine learning @cite_1 @cite_13 @cite_26 @cite_8 and computer vision @cite_14 @cite_9 @cite_2 @cite_12 , etc. It is widely recognized that the domain discrepancy in the probability distributions of different domains should be formally measured and reduced. The major bottleneck is how to match different domain distributions effectively. Most existing methods learn a new shallow representation model in which the domain discrepancy can be explicitly reduced. However, without learning deep features which can suppress domain-specific factors, the transferability of shallow features could be limited by the task-specific variability.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_8", "@cite_9", "@cite_1", "@cite_2", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2153929442", "1722318740", "2147520416", "", "2115403315", "2064447488", "2158815628", "2096943734", "2165698076" ], "abstract": [ "Let X denote the feature and Y the target. We consider domain adaptation under three possible scenarios: (1) the marginal PY changes, while the conditional PX Y stays the same (target shift), (2) the marginal PY is fixed, while the conditional PX Y changes with certain constraints (conditional shift), and (3) the marginal PY changes, and the conditional PX Y changes with constraints (generalized target shift). Using background knowledge, causal interpretations allow us to determine the correct situation for a problem at hand. We exploit importance reweighting or sample transformation to find the learning machine that works well on test data, and propose to estimate the weights or transformations by reweighting or transforming training data to reproduce the covariate distribution on the test domain. Thanks to kernel embedding of conditional as well as marginal distributions, the proposed approaches avoid distribution estimation, and are applicable for high-dimensional problems. Numerical evaluations on synthetic and real-world data sets demonstrate the effectiveness of the proposed framework.", "Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. 
While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.", "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source training domain) but only very limited training data for a second task (the target test domain) that is similar but not identical to the first. Previous work on transfer learning has focused on relatively restricted settings, where specific parts of the model are considered to be carried over between tasks. Recent work on covariate shift focuses on matching the marginal distributions on observations X across domains. Similarly, work on target conditional shift focuses on matching marginal distributions on labels Y and adjusting conditional distributions P(X|Y ), such that P(X) can be matched across domains. However, covariate shift assumes that the support of test P(X) is contained in the support of training P(X), i.e., the training set is richer than the test set. Target conditional shift makes a similar assumption for P(Y). Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Also little work has been done when all marginal and conditional distributions are allowed to change while the changes are smooth. In this paper, we consider a general case where both the support and the model change across domains. We transform both X and Y by a location-scale shift to achieve transfer between tasks. 
Since we allow more flexible transformations, the proposed method yields better results on both synthetic data and real-world data.", "", "Domain adaptation allows knowledge from a source domain to be transferred to a different but related target domain. Intuitively, discovering a good feature representation across domains is crucial. In this paper, we first propose to find such a representation through a new learning method, transfer component analysis (TCA), for domain adaptation. TCA tries to learn some transfer components across domains in a reproducing kernel Hilbert space using maximum mean discrepancy. In the subspace spanned by these transfer components, data properties are preserved and data distributions in different domains are close to each other. As a result, with the new representations in this subspace, we can apply standard machine learning methods to train classifiers or regression models in the source domain for use in the target domain. Furthermore, in order to uncover the knowledge hidden in the relations between the data labels from the source and target domains, we extend TCA in a semisupervised learning setting, which encodes label information into transfer components learning. We call this extension semisupervised TCA. The main contribution of our work is that we propose a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation. We propose both unsupervised and semisupervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components. Finally, our approach can handle large datasets and naturally leads to out-of-sample generalization. 
The effectiveness and efficiency of our approach are verified by experiments on five toy datasets and two real-world applications: cross-domain indoor WiFi localization and cross-domain text classification.", "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.", "Learning domain-invariant features is of vital importance to unsupervised domain adaptation, where classifiers trained on the source domain need to be adapted to a different target domain for which no labeled examples are available. In this paper, we propose a novel approach for learning such features. The central idea is to exploit the existence of landmarks, which are a subset of labeled data instances in the source domain that are distributed most similarly to the target domain. Our approach automatically discovers the landmarks and use them to bridge the source to the target by constructing provably easier auxiliary domain adaptation tasks. 
The solutions of those auxiliary tasks form the basis to compose invariant features for the original task. We show how this composition can be optimized discriminatively without requiring labels from the target domain. We validate the method on standard benchmark datasets for visual object recognition and sentiment analysis of text. Empirical results show the proposed method outperforms the state-of-the-art significantly.", "Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and conditional distribution in a principled dimensionality reduction procedure, and construct new feature representation that is effective and robust for substantial distribution difference. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. 
In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research." ] }
1502.02791
2951670162
Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
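To make the mean-embedding matching above concrete: the quantity DAN-style methods minimize is (a multi-kernel variant of) the squared maximum mean discrepancy between source and target feature distributions. The sketch below is a toy NumPy estimate in which a fixed bank of RBF bandwidths stands in for DAN's learned kernel weights; the data, dimensions, and bandwidths are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mk_mmd2(Xs, Xt, gammas=(0.25, 0.5, 1.0)):
    """Biased estimate of squared MMD between source and target samples,
    averaged over a small fixed bank of RBF kernels (a crude stand-in for
    DAN's optimal multi-kernel selection)."""
    total = 0.0
    for g in gammas:
        k_ss = rbf_kernel(Xs, Xs, g).mean()
        k_tt = rbf_kernel(Xt, Xt, g).mean()
        k_st = rbf_kernel(Xs, Xt, g).mean()
        total += k_ss + k_tt - 2.0 * k_st
    return total / len(gammas)

rng = np.random.default_rng(0)
# Two "domains" drawn from the same distribution vs. a mean-shifted one.
same = mk_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(0, 1, (200, 8)))
shifted = mk_mmd2(rng.normal(0, 1, (200, 8)), rng.normal(2, 1, (200, 8)))
```

The discrepancy comes out much larger for the shifted pair, which is exactly the signal a DAN-like adaptation layer would drive toward zero during training.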
Deep neural networks learn nonlinear representations that disentangle and hide different explanatory factors of variation behind data samples @cite_25 . The learned deep representations manifest invariant factors underlying different populations and are transferable from the original tasks to similar novel tasks @cite_33 . Hence, deep neural networks have been explored for domain adaptation @cite_21 @cite_4 , multimodal and multi-source learning problems @cite_15 @cite_27 , where significant performance gains have been obtained. However, all these methods depend on the assumption that deep neural networks can learn invariant representations that are transferable across different tasks. In reality, the domain discrepancy can be alleviated, but not removed, by deep neural networks @cite_21 . Dataset shift has posed a bottleneck to the transferability of deep networks, resulting in statistical risk for target tasks @cite_5 @cite_19 .
{ "cite_N": [ "@cite_4", "@cite_33", "@cite_21", "@cite_19", "@cite_27", "@cite_5", "@cite_15", "@cite_25" ], "mid": [ "2949821452", "2949667497", "22861983", "2104094955", "2072918779", "2953369858", "2184188583", "2163922914" ], "abstract": [ "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters ? in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB^ TM , significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. 
Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.", "The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. 
Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. 
The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.", "In recent years, information trustworthiness has become a serious issue when user-generated contents prevail in our information world. In this paper, we investigate the important problem of estimating information trustworthiness from the perspective of correlating and comparing multiple data sources. To a certain extent, the consistency degree is an indicator of information reliability--Information unanimously agreed by all the sources is more likely to be reliable. Based on this principle, we develop an effective computational approach to identify consistent information from multiple data sources. Particularly, we analyze vast amounts of information collected from multiple review platforms (multiple sources) in which people can rate and review the items they have purchased. The major challenge is that different platforms attract diverse sets of users, and thus information cannot be compared directly at the surface. However, latent reasons hidden in user ratings are mostly shared by multiple sources, and thus inconsistency about an item only appears when some source provides ratings deviating from the common latent reasons. Therefore, we propose a novel two-step procedure to calculate information consistency degrees for a set of items which are rated by multiple sets of users on different platforms. We first build a Multi-Source Deep Belief Network (MSDBN) to identify the common reasons hidden in multi-source rating data, and then calculate a consistency score for each item by comparing individual sources with the reconstructed data derived from the latent reasons. We conduct experiments on real user ratings collected from Orbitz, Priceline and TripAdvisor on all the hotels in Las Vegas and New York City. 
Experimental results demonstrate that the proposed approach successfully finds the hotels that receive inconsistent, and possibly unreliable, ratings.", "This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.", "Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. 
Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.", "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning." ] }
1502.02474
124447425
We address the problem of controlling a mobile robot to explore a partially known environment. The robot’s objective is the maximization of the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it applying forward simulation algorithms with an open-loop approximation. We present a new sample-based approximation for mutual information useful in mobile robotics. The approximation can be seamlessly integrated with forward simulation planning algorithms. We investigate the usefulness of POMDP based planning for exploration, and to alleviate some of its weaknesses propose a combination with frontier based exploration. Experimental results in simulated and real environments show that, depending on the environment, applying POMDP based planning for exploration can improve performance over frontier exploration.
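The information-theoretic objective above rests on estimating the mutual information between the map and future measurements by sampling. The toy Python sketch below is not the paper's approximation; it is only a minimal Monte Carlo estimator for a single binary map cell observed through a noisy beam sensor, with an invented prior and sensor model, to show the shape such a sample-based estimate takes.

```python
import math
import random

def mutual_information_mc(p_occ, p_correct, n_samples=50000, seed=1):
    """Monte Carlo estimate (in nats) of I(cell; measurement) for one
    binary map cell: the cell is occupied with prior p_occ, and the sensor
    reports the true state with probability p_correct.
    Estimator: average of log p(z|x) - log p(z) over joint samples (x, z)."""
    rng = random.Random(seed)

    def p_z(z):  # marginal likelihood of measurement z in {0, 1}
        return (p_occ * (p_correct if z == 1 else 1 - p_correct)
                + (1 - p_occ) * ((1 - p_correct) if z == 1 else p_correct))

    total = 0.0
    for _ in range(n_samples):
        x = 1 if rng.random() < p_occ else 0        # sample cell state
        z = x if rng.random() < p_correct else 1 - x  # sample measurement
        p_z_given_x = p_correct if z == x else 1 - p_correct
        total += math.log(p_z_given_x) - math.log(p_z(z))
    return total / n_samples

mi_informative = mutual_information_mc(0.5, 0.9)    # reliable sensor
mi_uninformative = mutual_information_mc(0.5, 0.5)  # pure-noise sensor
```

A planner would evaluate such an estimate over the cells a candidate action lets the robot observe; a reliable sensor yields an estimate near the analytic value of about 0.37 nats here, while a pure-noise sensor yields zero.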
Information on the location of the robot and environment features, or landmarks, may be modelled by a multivariate Gaussian distribution. The SLAM problem can then be solved for example by applying the Extended Kalman Filter. Exploration with such feature-based maps was studied by authors who describe an A-optimal exploration method, i.e. they minimize the trace of the state covariance matrix. They discretize the location of the robot to a grid and plan an informative trajectory in open loop as a sequence of discrete positions via a breadth-first search. A similar objective function was used by @cite_10 , adopting a model predictive control (MPC) approach for optimization over multiple time steps. Discretization of the action space was also applied by @cite_12 , who applied reinforcement learning to learn parameterized robot trajectories for exploration. A somewhat complementary approach was adopted in @cite_18 , where a set of candidate exploration targets was evaluated based on a utility function designed to balance exploration of unknown areas and seeing known landmarks to help maintain good localization information. However, an explicit information-theoretic quantification of the information gain was avoided.
{ "cite_N": [ "@cite_18", "@cite_10", "@cite_12" ], "mid": [ "2115082865", "2111983383", "2017995647" ], "abstract": [ "In this paper, we present techniques that allow one or multiple mobile robots to efficiently explore and model their environment. While much existing research in the area of Simultaneous Localization and Mapping (SLAM) focuses on issues related to uncertainty in sensor data, our work focuses on the problem of planning optimal exploration strategies. We develop a utility function that measures the quality of proposed sensing locations, give a randomized algorithm for selecting an optimal next sensing location, and provide methods for extracting features from sensor data and merging these into an incrementally constructed map. We also provide an efficient algorithm driven by our utility function. This algorithm is able to explore several steps ahead without incurring too high a computational cost. We have compared that exploration strategy with a totally greedy algorithm that optimizes our utility function with a one-step-look ahead. The planning algorithms which have been developed operate using simple but flexible models of the robot sensors and actuator abilities. Techniques that allow implementation of these sensor models on top of the capabilities of actual sensors have been provided. All of the proposed algorithms have been implemented either on real robots (for the case of individual robots) or in simulation (for the case of multiple robots), and experimental results are given. c 2005 Elsevier B.V. All rights reserved.", "In this paper, the possibility and necessity of multi-step trajectory planning in Extended Kalman Filter (EKF) based SLAM is investigated. The objective of the trajectory planning here is to minimize the estimation error of the robot and landmark locations subject to a given time horizon. We show that the problem can be regarded as an optimization problem for a gradually identified model. 
A numerical method is proposed for trajectory planning using a variant of the nonlinear Model Predictive Control (MPC). The proposed method is optimal in the sense that the control action is computed using all the information available at the time of decision making. Simulation results are included to compare the results from the one-step look-ahead trajectory planning and the proposed multi-step look-ahead technique.", "Automatically building maps from sensor data is a necessary and fundamental skill for mobile robots; as a result, considerable research attention has focused on the technical challenges inherent in the mapping problem. While statistical inference techniques have led to computationally efficient mapping algorithms, the next major challenge in robotic mapping is to automate the data collection process. In this paper, we address the problem of how a robot should plan to explore an unknown environment and collect data in order to maximize the accuracy of the resulting map. We formulate exploration as a constrained optimization problem and use reinforcement learning to find trajectories that lead to accurate maps. We demonstrate this process in simulation and show that the learned policy not only results in improved map building, but that the learned policy also transfers successfully to a real robot exploring on MIT campus." ] }
1502.02474
124447425
We address the problem of controlling a mobile robot to explore a partially known environment. The robot’s objective is the maximization of the amount of information collected about the environment. We formulate the problem as a partially observable Markov decision process (POMDP) with an information-theoretic objective function, and solve it applying forward simulation algorithms with an open-loop approximation. We present a new sample-based approximation for mutual information useful in mobile robotics. The approximation can be seamlessly integrated with forward simulation planning algorithms. We investigate the usefulness of POMDP based planning for exploration, and to alleviate some of its weaknesses propose a combination with frontier based exploration. Experimental results in simulated and real environments show that, depending on the environment, applying POMDP based planning for exploration can improve performance over frontier exploration.
We present a new approximation for mutual information that is useful in mobile robotics exploration problems. The approximation can be easily integrated with forward simulation planning methods, and avoids computing full SLAM filter updates during the planning phase. In contrast to e.g. @cite_16 @cite_17 @cite_2 , we do not assume a Gaussian belief state. We propose and empirically evaluate, in simulated and real-world domains, an exploration method combining the strengths of decision-theoretic POMDP based exploration and classical frontier based exploration. In all cases, we concentrate on non-myopic planning instead of the greedy one-step maximization of utility.
{ "cite_N": [ "@cite_16", "@cite_2", "@cite_17" ], "mid": [ "2096649264", "", "1977034420" ], "abstract": [ "We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn the most about its pose and the environment given time constraints. We use a POMDP with a utility function that depends on the belief state to model the finite horizon planning problem. We replan as the robot progresses throughout the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real-time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually-guide mobile robot. The solution proposed here is also applicable to other closely-related domains, including active vision, sequential experimental design, dynamic sensing and calibration with mobile sensors.", "", "This work investigates the problem of planning under uncertainty, with application to mobile robotics. We propose a probabilistic framework in which the robot bases its decisions on the generalized belief, which is a probabilistic description of its own state and of external variables of interest. The approach naturally leads to a dual-layer architecture: an inner estimation layer, which performs inference to predict the outcome of possible decisions, and an outer decisional layer which is in charge of deciding the best action to undertake. The approach does not discretize the state or control space, and allows planning in continuous domain. 
Moreover, it allows to relax the assumption of maximum likelihood observations: predicted measurements are treated as random variables and are not considered as given. Experimental results show that our planning approach produces smooth trajectories while maintaining uncertainty within reasonable bounds." ] }
1502.02403
2952864324
Scientific workflow management systems offer features for composing complex computational pipelines from modular building blocks, for executing the resulting automated workflows, and for recording the provenance of data products resulting from workflow runs. Despite the advantages such features provide, many automated workflows continue to be implemented and executed outside of scientific workflow systems due to the convenience and familiarity of scripting languages (such as Perl, Python, R, and MATLAB), and to the high productivity many scientists experience when using these languages. YesWorkflow is a set of software tools that aim to provide such users of scripting languages with many of the benefits of scientific workflow systems. YesWorkflow requires neither the use of a workflow engine nor the overhead of adapting code to run effectively in such a system. Instead, YesWorkflow enables scientists to annotate existing scripts with special comments that reveal the computational modules and dataflows otherwise implicit in these scripts. YesWorkflow tools extract and analyze these comments, represent the scripts in terms of entities based on the typical scientific workflow model, and provide graphical renderings of this workflow-like view of the scripts. Future versions of YesWorkflow also will allow the prospective provenance of the data products of these scripts to be queried in ways similar to those available to users of scientific workflow systems.
Various tools have been proposed to capture the runtime provenance of scripts. Mechanisms that capture provenance at the operating system level @cite_29 @cite_15 @cite_33 monitor system calls to track the data dependencies between computational processes. Some tools @cite_18 @cite_20 @cite_34 @cite_8 have been developed to capture runtime provenance for Python scripts: while @cite_18 and Davison @cite_20 propose Python libraries and APIs that need to be added to the code to capture the execution steps, ProvenanceCurious @cite_34 and noWorkflow @cite_8 are transparent and do not require changes to the scripts. Similarly, RDataTracker @cite_7 captures provenance from the execution of scripts, and the approach taken by @cite_38 supports all programming languages allowed by the LLVM compiler framework. We note that the approach proposed here is complementary to these tools, since it captures the prospective provenance of scripts. We argue that it, along with runtime provenance approaches, provides a low-effort entry point for scientists who want to reap some of the benefits of scientific workflow systems while still using their familiar scripting environments.
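For a sense of how comment-based annotations can expose the workflow latent in an ordinary script, here is a toy extractor for YesWorkflow-style `@begin`/`@end`/`@in`/`@out` tags. It handles only a minimal, hypothetical subset of the annotation syntax and is an illustration of the idea, not a substitute for the real YesWorkflow grammar or tools.

```python
import re

# Recognize a minimal, assumed subset of YesWorkflow-like annotations:
# "@begin NAME", "@end NAME", "@in PORT", "@out PORT" inside comments.
ANNOTATION = re.compile(r"@(begin|end|in|out)\s+(\w+)", re.IGNORECASE)

def extract_workflow(script_text):
    """Scan '#' comments and recover the annotated blocks with their
    declared input and output ports."""
    blocks, stack = [], []
    for line in script_text.splitlines():
        comment = line.partition("#")[2]  # text after the first '#', if any
        for keyword, name in ANNOTATION.findall(comment):
            keyword = keyword.lower()
            if keyword == "begin":
                stack.append({"name": name, "in": [], "out": []})
            elif keyword == "end":
                blocks.append(stack.pop())
            else:  # "in" or "out" port of the innermost open block
                stack[-1][keyword].append(name)
    return blocks

script = """
# @begin clean_data  @in raw_csv  @out clean_csv
# ... ordinary Python code would live here ...
# @end clean_data
"""
workflow = extract_workflow(script)
```

The recovered `workflow` structure is the kind of dataflow view that such tools render graphically and query, all without executing the script.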
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_33", "@cite_7", "@cite_8", "@cite_29", "@cite_15", "@cite_34", "@cite_20" ], "mid": [ "", "1582747576", "", "2159579350", "", "2050743563", "2158532686", "2104512707", "2079336662" ], "abstract": [ "", "In many application domains the provenance of data plays an important role. It is often required to get store detailed information of the underlying processes that led to the data (e.g., results of numerical simulations) for the purpose of documentation or checking the process for compliance to applicable regulations. Especially in science and engineering more and more applications are being developed in Python, which is used either for development of the whole application or as a glue language for coordinating codes written in other programming languages. To easily integrate provenance recording into applications developed in Python, a provenance client library with a suitable Python API is useful. In this paper we present such a Python client library for recording and querying provenance information. We show an exemplary application, explain the overall architecture of the library, and give some details on the technologies used for the implementation.", "", "", "", "The Earth System Science Server (ES3) project is developing a local infrastructure for managing Earth science data products derived from satellite remote sensing. By ‘local,’ we mean the infrastructure that a scientist uses to manage the creation and dissemination of her own data products, particularly those that are constantly incorporating corrections or improvements based on the scientist's own research. Therefore, in addition to being robust and capacious enough to support public access, ES3 is intended to be flexible enough to manage the idiosyncratic computing ensembles that typify scientific research. 
Instead of specifying provenance explicitly with a workflow model, ES3 extracts provenance information automatically from arbitrary applications by monitoring their interactions with their execution environment. These interactions (arguments, file I-O, system calls, etc.) are logged to the ES3 database, which assembles them into provenance graphs. These graphs resemble workflow specifications, but are really reports—they describe what actually happened, as opposed to what was requested. The ES3 database supports forward and backward navigation through provenance graphs (i.e. ancestor-descendant queries), as well as graph retrieval. Copyright © 2007 John Wiley & Sons, Ltd.", "Researchers in fields such as bioinformatics, CS, finance, and applied math have trouble managing the numerous code and data files generated by their computational experiments, comparing the results of trials executed with different parameters, and keeping up-to-date notes on what they learned from past successes and failures. We created a Linux-based system called BURRITO that automates aspects of this tedious experiment organization and notetaking process, thus freeing researchers to focus on more substantive work. BURRITO automatically captures a researcher's computational activities and provides user interfaces to annotate the captured provenance with notes and then make queries such as, \"Which script versions and command-line parameters generated the output graph that this note refers to?\"", "The increasing data volume and highly complex models used in different domains make it difficult to debug models in cases of anomalies. Data provenance provides scientists sufficient information to investigate their models. In this paper, we propose a tool which can infer fine-grained data provenance based on a given script. The tool is demonstrated using a hydrological model. 
The tool is also tested successfully handling other scripts in different contexts.", "Published scientific research that relies on numerical computations is too often not reproducible. For computational research to become consistently and reliably reproducible, the process must become easier to achieve, as part of day-to-day research. A combination of best practices and automated tools can make it easier to create reproducible research." ] }
1502.02171
26575457
For a long time, person re-identification and image search have been two separately studied tasks. However, for person re-identification, the effectiveness of local features and the "query-search" mode make it well posed for image search techniques. In the light of recent advances in image search, this paper proposes to treat person re-identification as an image search problem. Specifically, this paper claims two major contributions. 1) By designing an unsupervised Bag-of-Words representation, we are devoted to bridging the gap between the two tasks by integrating techniques from image search in person re-identification. We show that our system sets up an effective yet efficient baseline that is amenable to further supervised/unsupervised improvements. 2) We contribute a new high quality dataset which uses the DPM detector and includes a number of distractor images. Our dataset reaches closer to realistic settings, and new perspectives are provided. Compared with approaches that rely on feature-feature match, our method is faster by over two orders of magnitude. Moreover, on three datasets, we report competitive results compared with the state-of-the-art methods.
On the other hand, the field of image search has been greatly advanced since the introduction of the SIFT descriptor @cite_61 and the BoW model. In the last decade, a myriad of methods @cite_24 @cite_59 @cite_48 @cite_36 @cite_2 have been developed to improve search performance. For example, to improve matching precision, Jégou et al. @cite_24 embed binary SIFT features in the inverted file. Meanwhile, refined visual matching can also be produced by index-level feature fusion @cite_51 @cite_48 between complementary descriptors. Since the BoW model does not consider the spatial distribution of local features (also a problem in person re-identification), another direction is to model the spatial constraints @cite_9 @cite_2 @cite_31 . The geometry-preserving visual phrases (GVP) @cite_16 and the spatial coding @cite_9 methods both calculate the relative position among features, and check the geometric consistency between images by the offset maps. Zhang et al. @cite_54 propose to use descriptive visual phrases to build pairwise constraints, and Liu et al. @cite_52 encode geometric cues into binary features embedded in the inverted file.
{ "cite_N": [ "@cite_61", "@cite_36", "@cite_48", "@cite_9", "@cite_54", "@cite_52", "@cite_24", "@cite_59", "@cite_2", "@cite_31", "@cite_16", "@cite_51" ], "mid": [ "2151103935", "", "", "2118355530", "2133671054", "", "1556531089", "", "", "", "", "2031332477" ], "abstract": [ "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "", "", "The state-of-the-art image retrieval approaches represent images with a high dimensional vector of visual words by quantizing local features, such as SIFT, in the descriptor space. The geometric clues among visual words in an image is usually ignored or exploited for full geometric verification, which is computationally expensive. In this paper, we focus on partial-duplicate web image retrieval, and propose a novel scheme, spatial coding, to encode the spatial relationships among local features in an image. 
Our spatial coding is both efficient and effective to discover false matches of local features between images, and can greatly improve retrieval performance. Experiments in partial-duplicate web image search, using a database of one million images, reveal that our approach achieves a 53% improvement in mean average precision and a 46% reduction in time cost over the baseline bag-of-words approach.", "The Bag-of-visual Words (BoW) image representation has been applied for various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to the words in texts. However, massive experiments show that the commonly used visual words are not as expressive as the text words, which is not desirable because it hinders their effectiveness in various applications. In this paper, Descriptive Visual Words (DVWs) and Descriptive Visual Phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to the frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, novel descriptive visual element set can be composed by the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs from classic visual words for various applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain scenes or objects are identified as the DVWs and DVPs. Experiments show that the DVWs and DVPs are compact and descriptive, thus are more comparable with the text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including image retrieval, image re-ranking, and object recognition. 
The DVW and DVP combination outperforms the classic visual words by 19.5% and 80% in image retrieval and object recognition tasks, respectively. The DVW and DVP based image re-ranking algorithm: DWPRank outperforms the state-of-the-art VisualRank by 12.4% in accuracy and about 11 times faster in efficiency.", "", "This paper improves recent methods for large scale image search. State-of-the-art methods build on the bag-of-features image representation. We, first, analyze bag-of-features in the framework of approximate nearest neighbor search. This shows the sub-optimality of such a representation for matching descriptors and leads us to derive a more precise representation based on 1) Hamming embedding (HE) and 2) weak geometric consistency constraints (WGC). HE provides binary signatures that refine the matching based on visual words. WGC filters matching descriptors that are not consistent in terms of angle and scale. HE and WGC are integrated within the inverted file and are efficiently exploited for all images, even in the case of very large datasets. Experiments performed on a dataset of one million of images show a significant improvement due to the binary signature and the weak geometric consistency constraints, as well as their efficiency. Estimation of the full geometric transformation, i.e., a re-ranking step on a short list of images, is complementary to our weak geometric consistency constraints and allows to further improve the accuracy.", "", "", "", "", "In Bag-of-Words (BoW) based image retrieval, the SIFT visual word has a low discriminative power, so false positive matches occur prevalently. Apart from the information loss during quantization, another cause is that the SIFT feature only describes the local gradient distribution. To address this problem, this paper proposes a coupled Multi-Index (c-MI) framework to perform feature fusion at indexing level. Basically, complementary features are coupled into a multi-dimensional inverted index. 
Each dimension of c-MI corresponds to one kind of feature, and the retrieval process votes for images similar in both SIFT and other feature spaces. Specifically, we exploit the fusion of local color feature into c-MI. While the precision of visual match is greatly enhanced, we adopt Multiple Assignment to improve recall. The joint cooperation of SIFT and color features significantly reduces the impact of false positive matches. Extensive experiments on several benchmark datasets demonstrate that c-MI improves the retrieval accuracy significantly, while consuming only half of the query time compared to the baseline. Importantly, we show that c-MI is well complementary to many prior techniques. Assembling these methods, we have obtained an mAP of 85.8 and N-S score of 3.85 on Holidays and Ukbench datasets, respectively, which compare favorably with the state-of-the-arts." ] }
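The inverted-file matching scheme recurring in the abstracts above (visual words refined by Hamming-embedding-style binary signatures) can be sketched as follows. This is a minimal illustration, not any cited system: the 4-bit signatures, the visual-word IDs, and the `HAMMING_THRESHOLD` value are all assumptions chosen for the toy example.

```python
from collections import defaultdict

HAMMING_THRESHOLD = 2  # illustrative threshold for toy 4-bit signatures


def hamming(a, b):
    """Hamming distance between two integer bit signatures."""
    return bin(a ^ b).count("1")


class InvertedFile:
    """Toy BoW inverted index with Hamming-embedding-style filtering:
    features quantized to the same visual word count as a match only if
    their binary signatures are close in Hamming distance."""

    def __init__(self):
        self.index = defaultdict(list)  # visual word -> [(image_id, signature)]

    def add_image(self, image_id, features):
        # features: iterable of (visual_word, signature) pairs
        for word, signature in features:
            self.index[word].append((image_id, signature))

    def query(self, features):
        # Vote for database images that share visual words with the query
        # and whose stored signatures agree with the query signatures.
        scores = defaultdict(int)
        for word, signature in features:
            for image_id, stored in self.index[word]:
                if hamming(signature, stored) <= HAMMING_THRESHOLD:
                    scores[image_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])


# Usage: img2 shares visual word 3 with the query, but its signature is
# too far away in Hamming distance, so the false match is rejected.
ivf = InvertedFile()
ivf.add_image("img1", [(3, 0b1010), (7, 0b1111)])
ivf.add_image("img2", [(3, 0b0101)])
print(ivf.query([(3, 0b1010)]))  # [('img1', 1)]
```

The signature check is what distinguishes this from plain BoW voting: without it, both images would receive an identical vote from word 3.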
1502.02065
1975257416
Armed groups of civilians known as "self-defense forces" have ousted the powerful Knights Templar drug cartel from several towns in Michoacan. This militia uprising has unfolded on social media, particularly in the "VXM" ("Valor por Michoacan," Spanish for "Courage for Michoacan") Facebook page, gathering more than 170,000 fans. Previous work on the Drug War has documented the use of social media for real-time reports of violent clashes. However, VXM goes one step further by taking on a pro-militia propagandist role, engaging in two-way communication with its audience. This paper presents a descriptive analysis of VXM and its audience. We examined nine months of posts, from VXM's inception until May 2014, totaling 6,000 posts by VXM administrators and more than 108,000 comments from its audience. We describe the main conversation themes, post frequency and relationships with offline events and public figures. We also characterize the behavior of VXM's most active audience members. Our work illustrates VXM's online mobilization strategies, and how its audience takes part in defining the narrative of this armed conflict. We conclude by discussing possible applications of our findings for the design of future communication technologies.
We position our work within the area of social computing and collective action, particularly focusing on social movements. Social movements are defined as organized collective activities which take place outside political structures, but that attempt to transform existing political, economic, or societal structures @cite_43 . Militias are considered a type of social movement @cite_12 .
{ "cite_N": [ "@cite_43", "@cite_12" ], "mid": [ "1995582724", "566697182" ], "abstract": [ "New Information and Communication Technologies (ICTs) are changing the ways in which activists communicate, collaborate and demonstrate. Scholars from a wide range of disciplines, among them sociology, political science and communication, are working to understand these changes. The diversity of perspectives represented enriches the literature, providing an abundant repertoire of tools for examining these phenomena, but it is also an obstacle to understanding. Few works are commonly cited across the field, and most are known only within the confines of their discipline. The absence of a common set of organizing theoretical principles can make it difficult to find connections between these disparate works beyond their common subject matter. This paper responds by locating existing scholarship within a common framework for explaining the emergence, development and outcomes of social movement activity. This provides a logical structure that facilitates conversations across the field around common issues of c...", "Chapter 1 Introduction: Approaching the Miltia Movement Chapter 2 Fuel: The Cultural Foundations of the Militia Movement Chapter 3 Heat: The Myth of the Militia in the American Revolution Chapter 4 Friction: Militia Ideology and the Rationalization of Rage Chapter 5 The Spark: Randy Weaver and the Standoff at Ruby Ridge Chapter 6 The Fire: David Koresh, The Branch Davidians, and the Fire at Waco, TX Chapter 7 The Inferno: Timothy McVeigh and the Bombing in Oklahoma City Chapter 8 Embers: The Decline of the Militia Movement Chapter 9 Epilogue: The Movement and Homeland Security" ] }
1502.02065
1975257416
Armed groups of civilians known as "self-defense forces" have ousted the powerful Knights Templar drug cartel from several towns in Michoacan. This militia uprising has unfolded on social media, particularly in the "VXM" ("Valor por Michoacan," Spanish for "Courage for Michoacan") Facebook page, gathering more than 170,000 fans. Previous work on the Drug War has documented the use of social media for real-time reports of violent clashes. However, VXM goes one step further by taking on a pro-militia propagandist role, engaging in two-way communication with its audience. This paper presents a descriptive analysis of VXM and its audience. We examined nine months of posts, from VXM's inception until May 2014, totaling 6,000 posts by VXM administrators and more than 108,000 comments from its audience. We describe the main conversation themes, post frequency and relationships with offline events and public figures. We also characterize the behavior of VXM's most active audience members. Our work illustrates VXM's online mobilization strategies, and how its audience takes part in defining the narrative of this armed conflict. We conclude by discussing possible applications of our findings for the design of future communication technologies.
The literature @cite_43 @cite_13 has identified three universal themes recurring across all social movements: mobilizing structures, opportunity structures, and framing processes. We use these themes to interpret our results and further understand what is occurring with VXM. Mobilizing Structures: mechanisms through which information communication technologies (ICTs) facilitate mobilizing people to promote an ideology or political cause @cite_43 . For instance, in the "Arab Spring" and "Occupy" movements, people used social media to mobilize crowds for street rallies or protests. Similarly, Facebook facilitated the mobilization of political activists in Palestine by letting people connect their private life with their political activities @cite_18 . Lastly, platforms such as Turkopticon allow crowdworkers to mobilize and hold virtual employers accountable for their actions @cite_4 .
{ "cite_N": [ "@cite_43", "@cite_18", "@cite_13", "@cite_4" ], "mid": [ "1995582724", "2139074879", "2058427515", "2147603330" ], "abstract": [ "New Information and Communication Technologies (ICTs) are changing the ways in which activists communicate, collaborate and demonstrate. Scholars from a wide range of disciplines, among them sociology, political science and communication, are working to understand these changes. The diversity of perspectives represented enriches the literature, providing an abundant repertoire of tools for examining these phenomena, but it is also an obstacle to understanding. Few works are commonly cited across the field, and most are known only within the confines of their discipline. The absence of a common set of organizing theoretical principles can make it difficult to find connections between these disparate works beyond their common subject matter. This paper responds by locating existing scholarship within a common framework for explaining the emergence, development and outcomes of social movement activity. This provides a logical structure that facilitates conversations across the field around common issues of c...", "We analyze practices of political activists in a Palestinian village located in the West Bank. Activists organize weekly demonstrations against Israel's settlement policy and the separation wall. Over a period of 28 months, we conducted a field study consisting of eight days 'on the ground' observation and interviewing, and extensive monitoring of Internet communication. We describe the activists' background and their efforts to organize these demonstrations under conditions of military occupation. Over time, we observe the role both digital and material factors play in the organization of protest. Specifically, we analyze how Email and Facebook were appropriated to facilitate interaction 'on the ground'.", "Introduction: opportunities mobilizing structures and framing processes Doug McAdam Part I. 
Political Opportunities: 1. Clarifying the concept of political opportunities Doug McAdam 2. States and opportunities: the political structuring of social movements Sidney Tarrow 3. Social movements and the state: thoughts on the policing of protest Donatella della Porta 4. Opportunities and framing in the East European revolts of 1989 Anthony Oberschall 5. Opportunities and Framing in the Political Cycle of Perestroika Elena Zdravomyslova Part II. Mobilizing Structures: 6. Mobilizing structures: constraints and opportunities in adopting, adapting and inventing John D. McCarthy 7. The organizational structure of new social movements in relation to their political context Hanspeter Kriesi 8. The impact of national contexts on social movement structures: a cross-movement and cross-national comparison Dieter Rucht 9. Organizational form as frame: collective identity and political strategy in the American Labor Movement 1880-1920 Elisabeth S. Clemens 10. The collapse of a social movement: the interplay of mobilizing structures, framing, and political opportunities in the Knights of Labor Kim Voss Part III. Framing Processes: 11. Culture ideology and strategic framing Mayer N. Zald 12. Accessing public media electoral and governmental agendas John D. McCarthy, Jackie Smith, and Mayer N. Zald 13. Media discourse, movement publicity, and the generation of collective action frames: theoretical and empirical exercises in meaning construction Bert Klandermans and Sjoerd Goslinga 14. Framing political opportunity William A. Gamson and David S. Meyer 15. The framing function of movement tactics: strategic dramaturgy in the American civil rights movement Doug McAdam.", "As HCI researchers have explored the possibilities of human computation, they have paid less attention to ethics and values of crowdwork. This paper offers an analysis of Amazon Mechanical Turk, a popular human computation system, as a site of technically mediated worker-employer relations. 
We argue that human computation currently relies on worker invisibility. We then present Turkopticon, an activist system that allows workers to publicize and evaluate their relationships with employers. As a common infrastructure, Turkopticon also enables workers to engage one another in mutual aid. We conclude by discussing the potentials and challenges of sustaining activist technologies that intervene in large, existing socio-technical systems." ] }
1502.02065
1975257416
Armed groups of civilians known as "self-defense forces" have ousted the powerful Knights Templar drug cartel from several towns in Michoacan. This militia uprising has unfolded on social media, particularly in the "VXM" ("Valor por Michoacan," Spanish for "Courage for Michoacan") Facebook page, gathering more than 170,000 fans. Previous work on the Drug War has documented the use of social media for real-time reports of violent clashes. However, VXM goes one step further by taking on a pro-militia propagandist role, engaging in two-way communication with its audience. This paper presents a descriptive analysis of VXM and its audience. We examined nine months of posts, from VXM's inception until May 2014, totaling 6,000 posts by VXM administrators and more than 108,000 comments from its audience. We describe the main conversation themes, post frequency and relationships with offline events and public figures. We also characterize the behavior of VXM's most active audience members. Our work illustrates VXM's online mobilization strategies, and how its audience takes part in defining the narrative of this armed conflict. We conclude by discussing possible applications of our findings for the design of future communication technologies.
Opportunity Structures: characteristics of a social system that facilitate or hinder the activity of a social movement @cite_43 . For instance, online donation campaigns take advantage of the immediate emotions after a disaster or of friendships to harvest higher levels of participation and donations @cite_3 .
{ "cite_N": [ "@cite_43", "@cite_3" ], "mid": [ "1995582724", "2029253040" ], "abstract": [ "New Information and Communication Technologies (ICTs) are changing the ways in which activists communicate, collaborate and demonstrate. Scholars from a wide range of disciplines, among them sociology, political science and communication, are working to understand these changes. The diversity of perspectives represented enriches the literature, providing an abundant repertoire of tools for examining these phenomena, but it is also an obstacle to understanding. Few works are commonly cited across the field, and most are known only within the confines of their discipline. The absence of a common set of organizing theoretical principles can make it difficult to find connections between these disparate works beyond their common subject matter. This paper responds by locating existing scholarship within a common framework for explaining the emergence, development and outcomes of social movement activity. This provides a logical structure that facilitates conversations across the field around common issues of c...", "Every day, thousands of people make donations to humanitarian, political, environmental, and other causes, a large amount of which occur on the Internet. The solicitations for support, the acknowledgment of a donation and the discussion of corresponding issues are often conducted via email, leaving a record of these social phenomena. In this paper, we describe a comprehensive large-scale data-driven study of donation behavior. We analyze a two-month anonymized email log from several perspectives motivated by past studies on charitable giving: (i) demographics, (ii) user interest, (iii) external time-related factors and (iv) social network influence. We show that email captures the demographic peculiarities of different interest groups, for instance, predicting demographic distributions found in US 2012 Presidential Election exit polls. 
Furthermore, we find that people respond to major national events, as well as to solicitations with special promotions, and that social connections are the most important factor in predicting donation behavior. Specifically, we identify trends not only for individual charities and campaigns, but also for high-level categories such as political campaigns, medical illnesses, and humanitarian relief. Thus, we show the extent to which large-scale email datasets reveal human donation behavior, and explore the limitations of such analysis." ] }
1502.02065
1975257416
Armed groups of civilians known as "self-defense forces" have ousted the powerful Knights Templar drug cartel from several towns in Michoacan. This militia uprising has unfolded on social media, particularly in the "VXM" ("Valor por Michoacan," Spanish for "Courage for Michoacan") Facebook page, gathering more than 170,000 fans. Previous work on the Drug War has documented the use of social media for real-time reports of violent clashes. However, VXM goes one step further by taking on a pro-militia propagandist role, engaging in two-way communication with its audience. This paper presents a descriptive analysis of VXM and its audience. We examined nine months of posts, from VXM's inception until May 2014, totaling 6,000 posts by VXM administrators and more than 108,000 comments from its audience. We describe the main conversation themes, post frequency and relationships with offline events and public figures. We also characterize the behavior of VXM's most active audience members. Our work illustrates VXM's online mobilization strategies, and how its audience takes part in defining the narrative of this armed conflict. We conclude by discussing possible applications of our findings for the design of future communication technologies.
Framing Processes: ways to interpret reality by labeling one's individual experiences within the experiences of society @cite_36 @cite_43 . In social movements, frames represent a set of beliefs and meanings that help to legitimate and motivate the actions of the group. Frames usually emerge by negotiating shared meaning. @cite_36 found that people used social media to crowd-source framing processes, and collectively create an interpretation of street harassment.
{ "cite_N": [ "@cite_36", "@cite_43" ], "mid": [ "2042976082", "1995582724" ], "abstract": [ "CSCW systems are playing an increasing role in activism. How can new communications technologies support social movements? The possibilities are intriguing, but as yet not fully understood. One key technique traditionally leveraged by social movements is storytelling. In this paper, we examine the use of collective storytelling online in the context of a social movement organization called Hollaback, an organization working to stop street harassment. Can sharing a story of experienced harassment really make a difference to an individual or a community? Using Emancipatory Action Research and qualitative methods, we interviewed people who contributed stories of harassment online. We found that sharing stories shifted participants' cognitive and emotional orientation towards their experience. The theory of \"framing\" from social movement research explains the surprising power of this experience for Hollaback participants. We contribute a way of looking at activism online using social movement theory. Our work illustrates that technology can help crowd-sourced framing processes that have traditionally been done by social movement organizations.", "New Information and Communication Technologies (ICTs) are changing the ways in which activists communicate, collaborate and demonstrate. Scholars from a wide range of disciplines, among them sociology, political science and communication, are working to understand these changes. The diversity of perspectives represented enriches the literature, providing an abundant repertoire of tools for examining these phenomena, but it is also an obstacle to understanding. Few works are commonly cited across the field, and most are known only within the confines of their discipline. 
The absence of a common set of organizing theoretical principles can make it difficult to find connections between these disparate works beyond their common subject matter. This paper responds by locating existing scholarship within a common framework for explaining the emergence, development and outcomes of social movement activity. This provides a logical structure that facilitates conversations across the field around common issues of c..." ] }
1502.01972
2157052214
We consider a dynamic vehicle routing problem with time windows and stochastic customers (DS-VRPTW), such that customers may request for services as vehicles have already started their tours. To solve this problem, the goal is to provide a decision rule for choosing, at each time step, the next action to perform in light of known requests and probabilistic knowledge on requests likelihood. We introduce a new decision rule, called Global Stochastic Assessment (GSA) rule for the DS-VRPTW, and we compare it with existing decision rules, such as MSA. In particular, we show that GSA fully integrates nonanticipativity constraints so that it leads to better decisions in our stochastic context. We describe a new heuristic approach for efficiently approximating our GSA rule. We introduce a new waiting strategy. Experiments on dynamic and stochastic benchmarks, which include instances of different degrees of dynamism, show that not only our approach is competitive with state-of-the-art methods, but also enables to compute meaningful offline solutions to fully dynamic problems where absolutely no a priori customer request is provided.
The first D-VRP was proposed in @cite_16 , which introduces a single vehicle Dynamic Dial-a-Ride Problem (D-DARP) in which customer requests appear dynamically. Then, @cite_19 introduced the concept of immediate requests that must be serviced as soon as possible, implying a replanning of the current vehicle route. Complete reviews on D-VRP may be found in @cite_7 @cite_6 . In this section, we more particularly focus on stochastic D-VRP. @cite_6 classifies approaches for stochastic D-VRP in two categories, based either on stochastic modeling or on sampling. Stochastic modeling approaches formally capture the stochastic nature of the problem, so that solutions are computed in the light of an overall stochastic context. Such holistic approaches usually require strong assumptions and efficient computation of complex expected values. Sampling approaches try to capture stochastic knowledge by sampling scenarios, so that they tend to be more focused on local stochastic evidence. Their local decisions however allow sample-based methods to scale up to larger problem instances, even under challenging timing constraints. One usually needs to find a good compromise between a high number of scenarios, which better represents the real distributions, and a smaller number, which requires less computational effort.
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_6", "@cite_7" ], "mid": [ "2017893044", "", "2031236641", "2079487902" ], "abstract": [ "An investigation of the single-vehicle, many-to-many, immediate-request dial-a-ride problem is developed in two parts I and II. Part I focuses on the “static” case of the problem. In this case, intermediate requests that may appear during the execution of the route are not considered. A generalized objective function is examined, the minimization of a weighted combination of the time to service all customers and of the total degree of “dissatisfaction” experienced by them while waiting for service. This dissatisfaction is assumed to be a linear function of the waiting and riding times of each customer. Vehicle capacity constraints and special priority rules are part of the problem. A Dynamic Programming approach is developed. The algorithm exhibits a computational effort which, although an exponential function of the size of the problem, is asymptotically lower than the corresponding effort of the classical Dynamic Programming algorithm applied to a Traveling Salesman Problem of the same size. Part II extends this approach to solving the equivalent “dynamic” case. In this case, new customer requests are automatically eligible for consideration at the time they occur. The procedure is an open-ended sequence of updates, each following every new customer request. The algorithm optimizes only over known inputs and does not anticipate future customer requests. Indefinite deferment of a customer's request is prevented by the priority rules introduced in Part I. Examples in both “static” and “dynamic” cases are presented.", "", "A number of technological advances have led to a renewed interest in dynamic vehicle routing problems. This survey classifies routing problems from the perspective of information quality and evolution. 
After presenting a general description of dynamic routing, we introduce the notion of degree of dynamism, and present a comprehensive review of applications and solution methods for dynamic vehicle routing problems.", "Although most real-world vehicle routing problems are dynamic, the traditional methodological arsenal for this class of problems has been based on adaptations of static algorithms. Still, some important new methodological approaches have recently emerged. In addition, computer-based technologies such as electronic data interchange (EDI), geographic information systems (GIS), global positioning systems (GPS), and intelligent vehicle-highway systems (IVHS) have significantly enhanced the possibilities for efficient dynamic routing and have opened interesting directions for new research. This paper examines the main issues in this rapidly growing area, and surveys recent results and other advances. The assessment of possible impact of new technologies and the distinction of dynamic problems vis-a-vis their static counterparts are given emphasis." ] }
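The sampling idea described above can be sketched as a consensus-style decision rule: draw scenarios of future requests, optimize each one, and pick the action that is ranked first most often. This is a toy illustration only; `sample_scenario` and `plan_first_action` are hypothetical stand-ins for a request-distribution sampler and a route optimizer, not the implementation of MSA or GSA.

```python
import random
from collections import Counter


def consensus_decision(known_requests, sample_scenario, plan_first_action,
                       n_scenarios=30, seed=0):
    """Toy consensus rule: optimize each sampled scenario and return the
    action that comes first most often across the scenarios."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_scenarios):
        future = sample_scenario(rng)  # hypothetical future requests
        action = plan_first_action(known_requests + future)
        votes[action] += 1
    return votes.most_common(1)[0][0]


# Stand-in "optimizer": serve the request with the earliest deadline first.
def plan_first_action(requests):
    return min(requests, key=lambda r: r[1])[0]  # requests are (name, deadline)


# Stand-in sampler: one future request with a late deadline.
def sample_scenario(rng):
    return [("future", rng.randint(5, 9))]


print(consensus_decision([("a", 4), ("b", 2)], sample_scenario, plan_first_action))  # b
```

A larger `n_scenarios` better represents the request distribution at the cost of more optimizer calls, which is exactly the compromise mentioned above.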
1502.01972
2157052214
We consider a dynamic vehicle routing problem with time windows and stochastic customers (DS-VRPTW), such that customers may request for services as vehicles have already started their tours. To solve this problem, the goal is to provide a decision rule for choosing, at each time step, the next action to perform in light of known requests and probabilistic knowledge on requests likelihood. We introduce a new decision rule, called Global Stochastic Assessment (GSA) rule for the DS-VRPTW, and we compare it with existing decision rules, such as MSA. In particular, we show that GSA fully integrates nonanticipativity constraints so that it leads to better decisions in our stochastic context. We describe a new heuristic approach for efficiently approximating our GSA rule. We introduce a new waiting strategy. Experiments on dynamic and stochastic benchmarks, which include instances of different degrees of dynamism, show that not only our approach is competitive with state-of-the-art methods, but also enables to compute meaningful offline solutions to fully dynamic problems where absolutely no a priori customer request is provided.
@cite_24 studies the DS-VRPTW and introduces the Multiple Scenario Approach (MSA). A key element of MSA is an adaptive memory that stores a pool of solutions. Each solution is computed by considering a particular scenario, which is optimized for a few seconds. The pool is continuously populated and filtered so that all solutions remain consistent with the current system state. Another important element of MSA is the procedure used to make operational decisions involving idle vehicles. The authors designed three algorithms for that purpose @cite_4 @cite_29 : the expectation approach samples a set of scenarios and selects the next request to be serviced by considering its average cost over the sampled scenarios. Algorithm @cite_2 depicts how it chooses the next action @math to perform. It requires an optimization for each action @math and each scenario @math (lines 3-4), which is computationally very expensive, even with a heuristic approach.
{ "cite_N": [ "@cite_24", "@cite_29", "@cite_4", "@cite_2" ], "mid": [ "2294444152", "1244995983", "140607747", "2070311542" ], "abstract": [ "", "This paper reconsiders online packet scheduling in computer networks, where the goal is to minimize weighted packet loss and where the arrival distributions of packets, or approximations thereof, are available for sampling. Earlier work proposed an expectation approach, which chooses the next packet to schedule by approximating the expected loss of each decision over a set of scenarios. The expectation approach was shown to significantly outperform traditional approaches ignoring stochastic information. This paper proposes a novel stochastic approach for online packet scheduling, whose key idea is to select the next packet as the one which is scheduled first most often in the optimal solutions of the scenarios. This consensus approach is shown to outperform the expectation approach significantly whenever time constraints and the problem features limit the number of scenarios that can be solved before making a decision. More importantly perhaps, the paper shows that the consensus and expectation approaches can be integrated to combine the benefits of both approaches. These novel online stochastic optimization algorithms are generic and problem-independent, they apply to other online applications as well, and they shed new light on why existing online stochastic algorithms behave well.", "This paper considers online stochastic optimization problems where time constraints severely limit the number of offline optimizations which can be performed at decision time and/or in between decisions. It proposes a novel approach which combines the salient features of the earlier approaches: the evaluation of every decision on all samples (expectation) and the ability to avoid distributing the samples among decisions (consensus). The key idea underlying the novel algorithm is to approximate the regret of a decision d. 
The regret algorithm is evaluated on two fundamentally different applications: online packet scheduling in networks and online multiple vehicle routing with time windows. On both applications, it produces significant benefits over prior approaches.", "Recent advances in both computational power and communications technologies have created novel opportunities for research in combinatorial optimizations. Many applications in routing, scheduling, and networking raise exciting challenges, in particular in uncertainty. These applications are often online optimization problems, where the input is not known a priori, but characterized by a probabilistic model, and where decisions are made under severe time constraints. This thesis presents an online stochastic optimization framework whose goal is to maximize expected profit. It presents two algorithms, consensus and regret, designed to operate under severe time constraints. Theoretical results show that the framework and algorithms provide strong guarantees on solution quality under reasonable assumptions about the input distribution). Moreover, experimental results on a variety of applications in packet scheduling, vehicle dispatching, and vehicle routing indicate that the approach provides significant improvements in quality of service, even under severe time constraints. The framework is also shown to be robust even when the distribution itself is unknown. Finally, a large neighborhood search that hybridizes constraint programming and local search in novel ways is presented for its success in online vehicle routing problems." ] }
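The expectation and consensus decision rules described in these abstracts can be sketched in a few lines. What follows is a minimal illustrative sketch, not the authors' implementation: the action set, scenario representation, and cost function are hypothetical stand-ins for the full per-scenario routing optimization that makes the expectation rule so expensive.

```python
import random
from collections import Counter

# Toy sketch of two online stochastic decision rules (expectation and
# consensus). In a real DS-VRPTW solver, cost(a, s) would run a routing
# optimization for action a under scenario s; here it is just a lookup.

def expectation_rule(actions, scenarios, cost):
    """Pick the action with the lowest average cost over all scenarios.

    This needs one optimization per (action, scenario) pair, which is the
    computationally expensive step noted in the text.
    """
    return min(actions,
               key=lambda a: sum(cost(a, s) for s in scenarios) / len(scenarios))

def consensus_rule(actions, scenarios, cost):
    """Pick the action chosen first most often across per-scenario optima."""
    votes = Counter(min(actions, key=lambda a: cost(a, s)) for s in scenarios)
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    random.seed(0)
    actions = ["serve_r1", "serve_r2", "wait"]
    scenarios = [{a: random.random() for a in actions} for _ in range(20)]
    cost = lambda a, s: s[a]
    print(expectation_rule(actions, scenarios, cost))
    print(consensus_rule(actions, scenarios, cost))
```

Note how consensus needs only one optimization per scenario instead of one per (action, scenario) pair, which is why it copes better under tight time constraints.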
1502.01972
2157052214
We consider a dynamic vehicle routing problem with time windows and stochastic customers (DS-VRPTW), such that customers may request for services as vehicles have already started their tours. To solve this problem, the goal is to provide a decision rule for choosing, at each time step, the next action to perform in light of known requests and probabilistic knowledge on requests likelihood. We introduce a new decision rule, called Global Stochastic Assessment (GSA) rule for the DS-VRPTW, and we compare it with existing decision rules, such as MSA. In particular, we show that GSA fully integrates nonanticipativity constraints so that it leads to better decisions in our stochastic context. We describe a new heuristic approach for efficiently approximating our GSA rule. We introduce a new waiting strategy. Experiments on dynamic and stochastic benchmarks, which include instances of different degrees of dynamism, show that not only our approach is competitive with state-of-the-art methods, but also enables to compute meaningful offline solutions to fully dynamic problems where absolutely no a priori customer request is provided.
Quite similar to the consensus algorithm is the Dynamic Sample Scenario Hedging Heuristic introduced by @cite_8 for the stochastic VRP. @cite_27 designed a tabu search heuristic for the DS-VRPTW and introduced a waiting strategy in which a vehicle waits based on a threshold on the probability of future requests arriving in its vicinity. Finally, @cite_1 extends MSA with waiting and relocation strategies, so that vehicles may relocate to promising vertices from which no request has yet been issued. As the performance of MSA has been demonstrated in several studies @cite_1 @cite_3 @cite_17 @cite_23 , it is still considered a state-of-the-art method for the DS-VRPTW.
{ "cite_N": [ "@cite_8", "@cite_1", "@cite_3", "@cite_27", "@cite_23", "@cite_17" ], "mid": [ "2124521189", "96661160", "22198660", "2070175767", "2064404468", "1998816621" ], "abstract": [ "The statement of the standard vehicle routing problem cannot always capture all aspects of real-world applications. As a result, extensions or modifications to the model are warranted. Here we consider the case when customers can call in orders during the daily operations; i.e., both customer locations and demands may be unknown in advance. This is modeled as a combined dynamic and stochastic programming problem, and a heuristic solution method is developed where sample scenarios are generated, solved heuristically, and combined iteratively to form a solution to the overall problem.", "This paper considers online stochastic multiple vehicle routing with time windows in which requests arrive dynamically and the goal is to maximize the number of serviced customers. Contrary to earlier algorithms which only move vehicles to known customers, this paper investigates waiting and relocation strategies in which vehicles may wait at their current location or relocate to arbitrary sites. Experimental results show that waiting and relocation strategies may dramatically improve customer service, especially for problems that are highly dynamic and contain many late requests. The decisions to wait and to relocate do not exploit any problem-specific features but rather are obtained by including choices in the online algorithm that are necessarily sub-optimal in an offline setting.", "The VRP is a key to efficient transportation logistics. It is a computationally very hard problem. Whereas classical OR models are static and deterministic, these assumptions are rarely warranted in an industrial setting. Lately, there has been an increased focus on dynamic and stochastic vehicle routing in the research community. 
However, very few generic routing tools based on stochastic or dynamic models are available. We illustrate the need for dynamic and stochastic models in industrial routing, describe the Dynamic and Stochastic VRP, and how we have extended a generic VRP solver to cope with dynamics and uncertainty.", "An important, but seldom investigated, issue in the field of dynamic vehicle routing and dispatching is how to exploit information about future events to improve decision making. In this paper, we address this issue in a real-time setting with a strategy based on probabilistic knowledge about future request arrivals to better manage the fleet of vehicles. More precisely, the new strategy introduces dummy customers (representing forecasted requests) in vehicle routes to provide a good coverage of the territory. This strategy is assessed through computational experiments performed in a simulated environment.", "The real-time operation of a fleet of vehicles introduces challenging optimization problems. In this work, we propose an event-driven framework that anticipates unknown changes arising in the context of dynamic vehicle routing. The framework is intrinsically parallelized to take advantage of modern multi-core and multi-threaded computing architectures. It is also designed to be easily embeddable in decision support systems that cope with a wide range of contexts and side constraints. We illustrate the flexibility of the framework by showing how it can be adapted to tackle the dynamic vehicle routing problem with stochastic demands.", "The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. 
The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate whether using this information in designing the routes has a significant positive effect on the solution quality. The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances." ] }
1502.01972
2157052214
We consider a dynamic vehicle routing problem with time windows and stochastic customers (DS-VRPTW), such that customers may request for services as vehicles have already started their tours. To solve this problem, the goal is to provide a decision rule for choosing, at each time step, the next action to perform in light of known requests and probabilistic knowledge on requests likelihood. We introduce a new decision rule, called Global Stochastic Assessment (GSA) rule for the DS-VRPTW, and we compare it with existing decision rules, such as MSA. In particular, we show that GSA fully integrates nonanticipativity constraints so that it leads to better decisions in our stochastic context. We describe a new heuristic approach for efficiently approximating our GSA rule. We introduce a new waiting strategy. Experiments on dynamic and stochastic benchmarks, which include instances of different degrees of dynamism, show that not only our approach is competitive with state-of-the-art methods, but also enables to compute meaningful offline solutions to fully dynamic problems where absolutely no a priori customer request is provided.
Other studies of particular interest for our paper are @cite_15 , on the dynamic and stochastic pickup and delivery problem, and @cite_17 , on the DS-DARP. Both consider local-search-based algorithms. Instead of maintaining a solution pool, they exploit a single solution that minimizes the expected cost over a set of scenarios. To limit computational effort, however, only near-future requests are sampled within each scenario. Although the approach of @cite_17 is similar to that of @cite_15 , its set of scenarios is reduced to a single one. While these latter papers show some similarities with the approach we propose, they do not provide any mathematical motivation or analysis of their methods.
{ "cite_N": [ "@cite_15", "@cite_17" ], "mid": [ "2067969121", "1998816621" ], "abstract": [ "This paper describes anticipatory algorithms for the dynamic vehicle dispatching problem with pickups and deliveries, a problem faced by local area courier companies. These algorithms evaluate alternative solutions through a short-term demand sampling and a fully sequential procedure for indifference zone selection. They also exploit an unified and integrated approach in order to address all the issues involved in real-time fleet management, namely assigning requests to vehicles, routing the vehicles, scheduling the routes and relocating idle vehicles. Computational results show that the anticipatory algorithms provide consistently better solutions than their reactive counterparts.", "The problem of transporting patients or elderly people has been widely studied in literature and is usually modeled as a dial-a-ride problem (DARP). In this paper we analyze the corresponding problem arising in the daily operation of the Austrian Red Cross. This nongovernmental organization is the largest organization performing patient transportation in Austria. The aim is to design vehicle routes to serve partially dynamic transportation requests using a fixed vehicle fleet. Each request requires transportation from a patient's home location to a hospital (outbound request) or back home from the hospital (inbound request). Some of these requests are known in advance. Some requests are dynamic in the sense that they appear during the day without any prior information. Finally, some inbound requests are stochastic. More precisely, with a certain probability each outbound request causes a corresponding inbound request on the same day. Some stochastic information about these return transports is available from historical data. The purpose of this study is to investigate, whether using this information in designing the routes has a significant positive effect on the solution quality. 
The problem is modeled as a dynamic stochastic dial-a-ride problem with expected return transports. We propose four different modifications of metaheuristic solution approaches for this problem. In detail, we test dynamic versions of variable neighborhood search (VNS) and stochastic VNS (S-VNS) as well as modified versions of the multiple plan approach (MPA) and the multiple scenario approach (MSA). Tests are performed using 12 sets of test instances based on a real road network. Various demand scenarios are generated based on the available real data. Results show that using the stochastic information on return transports leads to average improvements of around 15%. Moreover, improvements of up to 41% can be achieved for some test instances." ] }
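The near-horizon scenario sampling used by these local-search approaches can be illustrated with a short sketch. Everything below is an assumption for illustration only: real systems sample from learned spatio-temporal request distributions rather than a homogeneous Poisson process, and each generated scenario would then be optimized by local search.

```python
import random

# Hedged sketch of scenario generation with a limited look-ahead horizon:
# each scenario extends the currently known requests with sampled future
# ones, but only within [now, now + horizon] to limit computational effort.

def sample_scenario(known_requests, now, horizon, arrival_rate, rng):
    """Return known requests plus Poisson-sampled requests up to the horizon."""
    scenario = list(known_requests)          # do not mutate the known set
    t = now
    while True:
        t += rng.expovariate(arrival_rate)   # next inter-arrival time
        if t > now + horizon:
            break                            # beyond the look-ahead horizon
        scenario.append({"time": t, "x": rng.random(), "y": rng.random()})
    return scenario
```

A solver would draw several such scenarios and search for the single plan minimizing expected cost over them.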
1502.01682
2951205656
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
The development of annotation schemes has become an area of research in its own right within computational linguistics, often separate from machine-learning applications. Some projects began as strictly linguistic efforts that were later adapted for computational linguistics. When an annotation scheme is consistent and well developed, its subsequent application to NLP systems is most effective. For example, the syntactic annotation of parse trees in the Penn Treebank @cite_18 had a tremendous effect on parsing and on Natural Language Processing in general.
{ "cite_N": [ "@cite_18" ], "mid": [ "1632114991" ], "abstract": [ "Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skelet al grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant." ] }
1502.01682
2951205656
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
PropBank @cite_1 is a set of annotations of predicate-argument structure over parse trees. First annotated as an overlay to the Penn Treebank, PropBank annotation now exists for other corpora as well. PropBank annotation aims to answer the question ``Who did what to whom?'' for individual predicates, and is tightly coupled with the behavior of individual verbs. FrameNet @cite_21 , a frame-based lexical database that associates each word in the database with a semantic frame and semantic roles, is likewise associated with annotations at the lexical level. WordNet @cite_32 is a very widely used online lexical taxonomy that has been developed in numerous languages; its nouns, verbs, adjectives, and adverbs are organized into synonym sets. PropBank, FrameNet, and WordNet together cover the word senses and argument-taking properties of many modal predicates.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_32" ], "mid": [ "2115792525", "2158847908", "" ], "abstract": [ "FrameNet is a three-year NSF-supported project in corpus-based computational lexicography, now in its second year (NSF IRI-9618838, \"Tools for Lexicon Building\"). The project's key features are (a) a commitment to corpus evidence for semantic and syntactic generalizations, and (b) the representation of the valences of its target words (mostly nouns, adjectives, and verbs) in which the semantic portion makes use of frame semantics. The resulting database will contain (a) descriptions of the semantic frames underlying the meanings of the words described, and (b) the valence representation (semantic and syntactic) of several thousand words and phrases, each accompanied by (c) a representative collection of annotated corpus attestations, which jointly exemplify the observed linkings between \"frame elements\" and their syntactic realizations (e.g. grammatical function, phrase type, and other syntactic traits). This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work.", "The Proposition Bank project takes a practical approach to semantic representation, adding a layer of predicate-argument information, or semantic role labels, to the syntactic structures of the Penn Treebank. The resulting resource can be thought of as shallow, in that it does not represent coreference, quantification, and many other higher-order phenomena, but also broad, in that it covers every instance of every verb in the corpus and allows representative statistics to be calculated.We discuss the criteria used to define the sets of semantic roles used in the annotation process and to analyze the frequency of syntactic semantic alternations in the corpus. 
We describe an automatic system for semantic role tagging trained on the corpus and discuss the effect on its performance of various types of information, including a comparison of full syntactic parsing with a flat representation and the contribution of the empty ''trace'' categories of the treebank.", "" ] }
1502.01682
2951205656
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
A major annotation effort for temporal and event expressions is the TimeML specification language, which has been developed in the context of reasoning for question answering @cite_10 . TimeML, which includes modality annotation on events, is the basis for creating the TimeBank and FactBank corpora @cite_6 @cite_16 . In FactBank, event mentions are marked with their degree of factuality.
{ "cite_N": [ "@cite_16", "@cite_10", "@cite_6" ], "mid": [ "2006941876", "1542291579", "" ], "abstract": [ "Recent work in computational linguistics points out the need for systems to be sensitive to the veracity or factuality of events as mentioned in text; that is, to recognize whether events are presented as corresponding to actual situations in the world, situations that have not happened, or situations of uncertain interpretation. Event factuality is an important aspect of the representation of events in discourse, but the annotation of such information poses a representational challenge, largely because factuality is expressed through the interaction of numerous linguistic markers and constructions. Many of these markers are already encoded in existing corpora, albeit in a somewhat fragmented way. In this article, we present FactBank, a corpus annotated with information concerning the factuality of events. Its annotation has been carried out from a descriptive framework of factuality grounded on both theoretical findings and data analysis. FactBank is built on top of TimeBank, adding to it an additional level of semantic information.", "Current results in basic Information Extraction tasks such as Named Entity Recognition or Event Extraction suggest that we are close to achieving a stage where the fundamental units for text understanding are put together; namely, predicates and their arguments. However, other layers of information, such as event modality, are essential for understanding, since the inferences derivable from factual events are obviously different from those judged as possible or non-existent. In this paper, we first map out the scope of modality in natural language; we propose a specification language for annotating this information in text; and finally we describe two tools that automatically recognizing modal information in natural language text.", "" ] }
1502.01682
2951205656
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
Recent work incorporating modality annotation includes the detection of certainty and uncertainty. Rubin describes a scheme with five levels of certainty, referred to as Epistemic modality, in news texts. Annotators identify explicit certainty markers and also take into account Perspective, Focus, and Time. Focus separates certainty into facts and opinions, so as to include attitudes. In our scheme, focus would be covered by the want and belief modalities. Moreover, separating focus and uncertainty allows both to be annotated on a single trigger word. Another line of work describes a scheme for automatic committed-belief tagging, where committed belief indicates that the writer believes the proposition. The authors use a previously annotated corpus of committed belief, non-committed belief, and not applicable @cite_13 , and derive features for machine learning from parse trees. They intend to combine their work with FactBank annotation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2027344326" ], "abstract": [ "We present a preliminary pilot study of belief annotation and automatic tagging. Our objective is to explore semantic meaning beyond surface propositions. We aim to model people's cognitive states, namely their beliefs as expressed through linguistic means. We model the strength of their beliefs and their (the human) degree of commitment to their utterance. We explore only the perspective of the author of a text. We classify predicates into one of three possibilities: committed belief, non committed belief, or not applicable. We proceed to manually annotate data to that end, then we build a supervised framework to test the feasibility of automatically predicting these belief states. Even though the data is relatively small, we show that automatic prediction of a belief class is a feasible task. Using syntactic features, we are able to obtain significant improvements over a simple baseline of 23 F-measure absolute points. The best performing automatic tagging condition is where we use POS tag, word type feature AlphaNumeric, and shallow syntactic chunk information CHUNK. Our best overall performance is 53.97 F-measure." ] }
1502.01682
2951205656
This paper describes the resource- and system-building efforts of an eight-week Johns Hopkins University Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation (SIMT). We describe a new modality negation (MN) annotation scheme, the creation of a (publicly available) MN lexicon, and two automated MN taggers that we built using the annotation scheme and lexicon. Our annotation scheme isolates three components of modality and negation: a trigger (a word that conveys modality or negation), a target (an action associated with modality or negation) and a holder (an experiencer of modality). We describe how our MN lexicon was semi-automatically produced and we demonstrate that a structure-based MN tagger results in precision around 86 (depending on genre) for tagging of a standard LDC data set. We apply our MN annotation scheme to statistical machine translation using a syntactic framework that supports the inclusion of semantic annotations. Syntactic tags enriched with semantic annotations are assigned to parse trees in the target-language training texts through a process of tree grafting. While the focus of our work is modality and negation, the tree grafting procedure is general and supports other types of semantic information. We exploit this capability by including named entities, produced by a pre-existing tagger, in addition to the MN elements produced by the taggers described in this paper. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English test set. This finding supports the hypothesis that both syntactic and semantic information can improve translation quality.
The CoNLL-2010 shared task @cite_26 addressed the detection of cues for uncertainty and their scope. The task was described as "hedge detection," that is, finding statements which do not or cannot be backed up with facts. Auxiliary verbs such as may, might, can, etc. are one type of hedge cue. The training data for the shared task included the BioScope corpus @cite_8, which is manually annotated with negation and speculation cues and their scope, and paragraphs from Wikipedia possibly containing hedge information. Our scheme also identifies cues in the form of triggers, but our desired outcome is to cover the full range of modalities, not just certainty and uncertainty. To identify scope, we use syntactic parse trees, as was allowed in the CoNLL task.
{ "cite_N": [ "@cite_26", "@cite_8" ], "mid": [ "2110871096", "2043335066" ], "abstract": [ "The CoNLL-2010 Shared Task was dedicated to the detection of uncertainty cues and their linguistic scope in natural language texts. The motivation behind this task was that distinguishing factual and uncertain information in texts is of essential importance in information extraction. This paper provides a general overview of the shared task, including the annotation protocols of the training and evaluation datasets, the exact task definitions, the evaluation metrics employed and the overall results. The paper concludes with an analysis of the prominent approaches and an overview of the systems submitted to the shared task.", "This article reports on a corpus annotation project that has produced a freely available resource for research on handling negation and uncertainty in biomedical texts (we call this corpus the BioScope corpus). The corpus consists of three parts, namely medical free texts, biological full papers and biological scientific abstracts. The dataset contains annotations at the token level for negative and speculative keywords and at the sentence level for their linguistic scope. The annotation process was carried out by two independent linguist annotators and a chief annotator -- also responsible for setting up the annotation guidelines -- who resolved cases where the annotators disagreed. We will report our statistics on corpus size, ambiguity levels and the consistency of annotations." ] }
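The trigger-and-scope idea behind hedge detection can be sketched with a toy lexicon matcher. The cue list and the "scope is the tokens after the cue" heuristic below are illustrative assumptions; shared-task systems used far larger corpus-derived lexicons and derived scope from parse trees.

```python
# Toy sketch of lexicon-based hedge-cue detection. HEDGE_CUES is a tiny
# illustrative lexicon; the naive scope rule stands in for parse-tree scope.

HEDGE_CUES = {"may", "might", "can", "could", "possibly", "suggest", "appear"}

def find_hedges(sentence):
    """Return (cue, scope) pairs; scope is naively the tokens after the cue."""
    tokens = sentence.lower().rstrip(".").split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok in HEDGE_CUES:
            hits.append((tok, tokens[i + 1:]))
    return hits

hits = find_hedges("This protein may regulate cell growth.")
```

On the example sentence the cue is `may` with naive scope `regulate cell growth`; a factual statement with no cue words yields no hits.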
The textual entailment literature also includes modality annotation schemes; identifying modalities is important for determining whether a text entails a hypothesis. Bar-Haim et al. include polarity-based rules as well as negation and modality annotation rules. The polarity rules are based on an independent polarity lexicon @cite_9. The annotation rules for negation and modality of predicates are based on identifying modal verbs, as well as conditional sentences and modal adverbials. The authors read the modality off parse trees directly, using simple structural rules for modifiers.
{ "cite_N": [ "@cite_9" ], "mid": [ "3313028" ], "abstract": [ "Treebank parsing can be seen as the search for an optimally refined grammar consistent with a coarse training treebank. We describe a method in which a minimal grammar is hierarchically refined using EM to give accurate, compact grammars. The resulting grammars are extremely compact compared to other high-performance parsers, yet the parser gives the best published accuracies on several languages, as well as the best generative parsing numbers in English. In addition, we give an associated coarse-to-fine inference scheme which vastly improves inference time with no loss in test set accuracy." ] }
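Reading modality and negation off a constituency tree with simple structural rules, as described above, might look like the following sketch. The tuple tree encoding and the two rules (a modal under an MD node, "not" under an RB node) are illustrative assumptions, not the cited system.

```python
# Minimal sketch of rule-based modality/negation detection on a parse tree.
# Trees are (label, child, child, ...) tuples; leaves are plain strings.

MODALS = {"may", "must", "might", "should", "could"}

def annotate(tree, found=None):
    """Collect ('modal', word) / ('negation', word) markers by walking the tree."""
    if found is None:
        found = []
    label, *children = tree
    leaves = [c for c in children if isinstance(c, str)]
    if label == "MD" and leaves and leaves[0].lower() in MODALS:
        found.append(("modal", leaves[0]))
    if label == "RB" and leaves and leaves[0].lower() in {"not", "n't"}:
        found.append(("negation", leaves[0]))
    for c in children:
        if isinstance(c, tuple):
            annotate(c, found)
    return found

tree = ("S", ("NP", ("PRP", "It")),
             ("VP", ("MD", "may"),
                    ("VP", ("RB", "not"), ("VB", "work"))))
markers = annotate(tree)
```

The pre-order walk finds the modal before the negation, so both can be attached to the same predicate downstream.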
Related work in syntax-based MT includes that of Huang and Knight, where a series of syntax rules are applied to a source-language string to produce a target-language phrase structure tree. The Penn English Treebank @cite_18 is used as the source for the syntactic labels, and syntax trees are relabeled to improve translation quality. In this work, node-internal and node-external information is used to relabel nodes, similar to earlier work where structural context was used to relabel nodes in the parsing domain @cite_35. Klein and Manning's methods include lexicalizing determiners and percent markers, making more fine-grained VP categories, and marking the properties of sister nodes on nodes. All of these labels are derivable from the trees themselves rather than from an auxiliary source. Subsequent work employs this type of node splitting in machine translation and reports a small increase in BLEU score.
{ "cite_N": [ "@cite_35", "@cite_18" ], "mid": [ "2097606805", "1632114991" ], "abstract": [ "We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36 (LP LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.", "Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skelet al grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant." ] }
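A minimal sketch of tree-internal node relabeling in the spirit of the state splits discussed above is parent annotation, where each node's label is marked with its parent's category. The tuple tree encoding and the `^` separator are illustrative choices, not the cited implementation.

```python
# Sketch of parent annotation: refine treebank categories using only
# information derivable from the tree itself (here, the parent's label).
# Trees are (label, child, child, ...) tuples; leaves are plain strings.

def parent_annotate(tree, parent=None):
    label, *children = tree
    new_label = f"{label}^{parent}" if parent else label
    new_children = [parent_annotate(c, label) if isinstance(c, tuple) else c
                    for c in children]
    return (new_label, *new_children)

tree = ("S", ("NP", ("DT", "the"), ("NN", "cat")), ("VP", ("VB", "sleeps")))
split = parent_annotate(tree)
```

An `NP` under `S` becomes `NP^S`, distinct from an `NP` under `VP`, which is exactly the kind of false-independence-breaking split that improves unlexicalized parsing.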
We use the methods described in @cite_25 @cite_29 to induce synchronous grammar rules, a process which requires phrase alignments and syntactic parse trees. These methods use generic non-terminal category symbols, as in @cite_7, as well as grammatical categories from the Stanford parser @cite_35; the rule-induction method generalizes to any set of non-terminals. We further refine this process by adding semantic notations onto the syntactic non-terminals produced by a Penn Treebank-trained parser, thus making the categories more informative.
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_25", "@cite_7" ], "mid": [ "2097606805", "", "2168966090", "2152263452" ], "abstract": [ "We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36 (LP LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.", "", "We present translation results on the shared task \"Exploiting Parallel Texts for Statistical Machine Translation\" generated by a chart parsing decoder operating on phrase tables augmented and generalized with target language syntactic categories. We use a target language parser to generate parse trees for each sentence on the target side of the bilingual training corpus, matching them with phrase table lattices built for the corresponding source sentence. Considering phrases that correspond to syntactic categories in the parse trees we develop techniques to augment (declare a syntactically motivated category for a phrase pair) and generalize (form mixed terminal and nonterminal phrases) the phrase table into a synchronous bilingual grammar. We present results on the French-to-English task for this workshop, representing significant improvements over the workshop's baseline system. 
Our translation system is available open-source under the GNU General Public License.", "We present a statistical phrase-based translation model that uses hierarchical phrases---phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntax-based translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrase-based model achieves a relative improvement of 7.5 over Pharaoh, a state-of-the-art phrase-based system." ] }
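The grafting of semantic annotations onto syntactic non-terminals can be sketched as follows: an external tagger supplies labeled token spans, and any constituent whose leaf span exactly matches an annotated span gets its label enriched. The span-matching convention and the `PER` tag are illustrative assumptions, not the SIMT implementation.

```python
# Sketch of "tree grafting": relabel the constituent covering an externally
# annotated span (e.g. a named entity) with a semantically enriched category.
# Trees are (label, child, ...) tuples; annotations map (start, end) -> tag.

def leaves(tree):
    label, *children = tree
    out = []
    for c in children:
        out.extend(leaves(c) if isinstance(c, tuple) else [c])
    return out

def graft(tree, annotations, start=0):
    """Relabel nodes whose leaf span matches an annotated span (consumes it)."""
    label, *children = tree
    span = (start, start + len(leaves(tree)))
    if span in annotations:
        # pop so only the highest matching node is relabeled
        label = f"{label}-{annotations.pop(span)}"
    new_children, pos = [], start
    for c in children:
        if isinstance(c, tuple):
            new_children.append(graft(c, annotations, pos))
            pos += len(leaves(c))
        else:
            new_children.append(c)
            pos += 1
    return (label, *new_children)

tree = ("S", ("NP", ("NNP", "Mary")), ("VB", "left"))
# hypothetical external tagger output: tokens [0, 1) form a PERSON entity
grafted = graft(tree, {(0, 1): "PER"})
```

Here `NP` becomes `NP-PER` while the rest of the tree is unchanged, so a synchronous-grammar extractor sees a more informative category.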
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action using a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3 , outperforming a recent HMM-based approach which obtained 71.2 .
Several descriptors have been used to represent human actions, including descriptors based on Spatio-Temporal Interest Points (STIPs) @cite_16, gradients @cite_21, and optical flow @cite_22. The STIP detector finds interest points and is a space-time extension of the Harris and Förstner interest point detectors @cite_12 @cite_5. Recently, STIP descriptors have led to relatively good action recognition performance @cite_25. However, STIP-based descriptors have several drawbacks, as reported in @cite_24 @cite_0 @cite_18 @cite_19: they focus on local spatio-temporal information instead of global motion; they can be unstable and imprecise, with a varying number of detections leading to low repeatability; detections can be redundant or overly sparse; and they are computationally expensive (see Fig. ). Gradients have been used as a robust image and video representation @cite_21: each pixel in the gradient image helps extract relevant information, e.g. edges, and gradients can be computed at every spatio-temporal location @math in any direction in a video.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_21", "@cite_24", "@cite_0", "@cite_19", "@cite_5", "@cite_16", "@cite_25", "@cite_12" ], "mid": [ "1576762698", "2172207578", "1981781955", "2107861736", "2002706836", "", "2111308925", "2020163092", "11594795", "" ], "abstract": [ "Action Recognition in videos is an active research field that is fueled by an acute need, spanning several application domains. Still, existing systems fall short of the applications' needs in real-world scenarios, where the quality of the video is less than optimal and the viewpoint is uncontrolled and often not static. In this paper, we consider the key elements of motion encoding and focus on capturing local changes in motion directions. In addition, we decouple image edges from motion edges using a suppression mechanism, and compensate for global camera motion by using an especially fitted registration scheme. Combined with a standard bag-of-words technique, our methods achieves state-of-the-art performance in the most recent and challenging benchmarks.", "We propose a set of kinematic features that are derived from the optical flow for human action recognition in videos. The set of kinematic features includes divergence, vorticity, symmetric and antisymmetric flow fields, second and third principal invariants of flow gradient and rate of strain tensor, and third principal invariant of rate of rotation tensor. Each kinematic feature, when computed from the optical flow of a sequence of images, gives rise to a spatiotemporal pattern. It is then assumed that the representative dynamics of the optical flow are captured by these spatiotemporal patterns in the form of dominant kinematic trends or kinematic modes. These kinematic modes are computed by performing principal component analysis (PCA) on the spatiotemporal volumes of the kinematic features. 
For classification, we propose the use of multiple instance learning (MIL) in which each action video is represented by a bag of kinematic modes. Each video is then embedded into a kinematic-mode-based feature space and the coordinates of the video in that space are used for classification using the nearest neighbor algorithm. The qualitative and quantitative results are reported on the benchmark data sets.", "Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.", "This paper considers the problem of detecting actions from cluttered videos. 
Compared with the classical action recognition problem, this paper aims to estimate not only the scene category of a given video sequence, but also the spatial-temporal locations of the action instances. In recent years, many feature extraction schemes have been designed to describe various aspects of actions. However, due to the difficulty of action detection, e.g., the cluttered background and potential occlusions, a single type of features cannot solve the action detection problems perfectly in cluttered videos. In this paper, we attack the detection problem by combining multiple Spatial-Temporal Interest Point (STIP) features, which detect salient patches in the video domain, and describe these patches by feature of local regions. The difficulty of combining multiple STIP features for action detection is two folds: First, the number of salient patches detected by different STIP methods varies across different salient patches. How to combine such features is not considered by existing fusion methods [13] [5]. Second, the detection in the videos should be efficient, which excludes many slow machine learning algorithms. To handle these two difficulties, we propose a new approach which combines Gaussian Mixture Model with Branch-and-Bound search to efficiently locate the action of interest. We build a new challenging dataset for our action detection task, and our algorithm obtains impressive results. On classical KTH dataset, our method outperforms the state-of-the-art methods.", "Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper, we present a novel approach for robust and selective STIP detection, by applying surround suppression combined with local and temporal constraints. 
This new method is significantly different from existing STIP detection techniques and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-video words (BoV) model of local N-jet features to build a vocabulary of visual-words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on popular benchmark datasets (KTH and Weizmann), more challenging datasets of complex scenes with background clutter and camera motion (CVC and CMU), movie and YouTube video clips (Hollywood 2 and YouTube), and complex scenes with multiple actors (MSR I and Multi-KTH), validates our approach and show state-of-the-art performance. Due to the unavailability of ground truth action annotation data for the Multi-KTH dataset, we introduce an actor specific spatio-temporal clustering of STIPs to address the problem of automatic action annotation of multiple simultaneous actors. Additionally, we perform cross-data action recognition by training on source datasets (KTH and Weizmann) and testing on completely different and more challenging target datasets (CVC, CMU, MSR I and Multi-KTH). This documents the robustness of our proposed approach in the realistic scenario, using separate training and test datasets, which in general has been a shortcoming in the performance evaluation of human action recognition techniques.", "", "The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. 
For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "The problem of human action recognition has received increasing attention in recent years for its importance in many applications. Local representations and in particular STIP descriptors have gained increasing popularity for action recognition. 
Yet, the main limitation of those approaches is that they do not capture the spatial relationships in the subject performing the action. This paper proposes a novel method based on the fusion of global spatial relationships provided by graph embedding and the local spatio-temporal information of STIP descriptors. Experiments on an action recognition dataset reported in the paper show that recognition accuracy can be significantly improved by combining the structural information with the spatio-temporal features.", "" ] }
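The claim that gradients can be computed at every spatio-temporal location of a video is easy to make concrete with NumPy. The synthetic drifting-bar video below is an illustrative stand-in for real frames.

```python
import numpy as np

# Dense spatio-temporal gradients over a video volume indexed (t, y, x).
video = np.zeros((5, 8, 8), dtype=float)
for t in range(5):
    video[t, :, t:t + 3] = 1.0  # a bright vertical bar drifting right over time

# partial derivatives along the time, vertical and horizontal axes
gt, gy, gx = np.gradient(video)

# gradient magnitude at every (t, y, x) location
mag = np.sqrt(gt**2 + gy**2 + gx**2)
```

Every pixel of every frame now carries a directional derivative, which is the raw material for gradient-based descriptors such as histograms of oriented gradients.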
Since action recognition is based on analysing motion patterns across a sequence of frames @cite_21, optical flow provides an efficient way of capturing the local dynamics of the scene @cite_18. Optical flow describes the motion dynamics of an action by calculating the absolute motion between consecutive frames, which contains motion from many sources @cite_22 @cite_28 @cite_6.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_28", "@cite_21", "@cite_6" ], "mid": [ "1576762698", "2172207578", "2019245255", "1981781955", "2068611653" ], "abstract": [ "Action Recognition in videos is an active research field that is fueled by an acute need, spanning several application domains. Still, existing systems fall short of the applications' needs in real-world scenarios, where the quality of the video is less than optimal and the viewpoint is uncontrolled and often not static. In this paper, we consider the key elements of motion encoding and focus on capturing local changes in motion directions. In addition, we decouple image edges from motion edges using a suppression mechanism, and compensate for global camera motion by using an especially fitted registration scheme. Combined with a standard bag-of-words technique, our methods achieves state-of-the-art performance in the most recent and challenging benchmarks.", "We propose a set of kinematic features that are derived from the optical flow for human action recognition in videos. The set of kinematic features includes divergence, vorticity, symmetric and antisymmetric flow fields, second and third principal invariants of flow gradient and rate of strain tensor, and third principal invariant of rate of rotation tensor. Each kinematic feature, when computed from the optical flow of a sequence of images, gives rise to a spatiotemporal pattern. It is then assumed that the representative dynamics of the optical flow are captured by these spatiotemporal patterns in the form of dominant kinematic trends or kinematic modes. These kinematic modes are computed by performing principal component analysis (PCA) on the spatiotemporal volumes of the kinematic features. For classification, we propose the use of multiple instance learning (MIL) in which each action video is represented by a bag of kinematic modes. 
Each video is then embedded into a kinematic-mode-based feature space and the coordinates of the video in that space are used for classification using the nearest neighbor algorithm. The qualitative and quantitative results are reported on the benchmark data sets.", "We propose a general framework for fast and accurate recognition of actions in video using empirical covariance matrices of features. A dense set of spatio-temporal feature vectors are computed from video to provide a localized description of the action, and subsequently aggregated in an empirical covariance matrix to compactly represent the action. Two supervised learning methods for action recognition are developed using feature covariance matrices. Common to both methods is the transformation of the classification problem in the closed convex cone of covariance matrices into an equivalent problem in the vector space of symmetric matrices via the matrix logarithm. The first method applies nearest-neighbor classification using a suitable Riemannian metric for covariance matrices. The second method approximates the logarithm of a query covariance matrix by a sparse linear combination of the logarithms of training covariance matrices. The action label is then determined from the sparse coefficients. Both methods achieve state-of-the-art classification performance on several datasets, and are robust to action variability, viewpoint changes, and low object resolution. The proposed framework is conceptually simple and has low storage and computational requirements making it attractive for real-time implementation.", "Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). 
Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.", "This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). 
Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
Hidden Markov models (HMMs) have a long history of use in activity recognition. An action is a sequence of events ordered in space and time, and HMMs capture structural and transitional features and therefore the dynamics of the system @cite_20 . Gaussian Mixture Models (GMMs) have also been explored for recognising single actions. In @cite_24 , each set of feature vectors is modelled with a GMM. Then, the likelihood of a feature vector belonging to a given human action can be estimated.
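A minimal numpy-only sketch of this kind of GMM-based scoring is given below; the action names, feature dimension, and mixture parameters are invented for illustration (in @cite_24 the mixtures are learnt from training data rather than fixed by hand):

```python
import numpy as np

def gmm_loglik(X, weights, means, vars_):
    """Log-likelihood of each row of X under a diagonal-covariance
    Gaussian mixture with the given weights, means, and variances."""
    X = np.atleast_2d(X)
    d = X.shape[1]
    comp = []
    for w, mu, v in zip(weights, means, vars_):
        mu, v = np.asarray(mu), np.asarray(v)
        q = -0.5 * (np.sum((X - mu) ** 2 / v, axis=1)
                    + np.sum(np.log(v)) + d * np.log(2 * np.pi))
        comp.append(np.log(w) + q)
    return np.logaddexp.reduce(comp, axis=0)

def classify_action(X, class_gmms):
    """Assign the bag of feature vectors X to the action whose GMM
    gives the highest mean log-likelihood."""
    scores = {a: gmm_loglik(X, *p).mean() for a, p in class_gmms.items()}
    return max(scores, key=scores.get)

# Toy single-component "GMMs" for two hypothetical actions:
class_gmms = {
    "walk": ([1.0], [np.zeros(3)], [np.ones(3)]),
    "run":  ([1.0], [np.full(3, 5.0)], [np.ones(3)]),
}
```

A bag of feature vectors drawn near one of the class means is then assigned to that class by comparing mean log-likelihoods across actions.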
{ "cite_N": [ "@cite_24", "@cite_20" ], "mid": [ "2107861736", "1588452126" ], "abstract": [ "This paper considers the problem of detecting actions from cluttered videos. Compared with the classical action recognition problem, this paper aims to estimate not only the scene category of a given video sequence, but also the spatial-temporal locations of the action instances. In recent years, many feature extraction schemes have been designed to describe various aspects of actions. However, due to the difficulty of action detection, e.g., the cluttered background and potential occlusions, a single type of features cannot solve the action detection problems perfectly in cluttered videos. In this paper, we attack the detection problem by combining multiple Spatial-Temporal Interest Point (STIP) features, which detect salient patches in the video domain, and describe these patches by feature of local regions. The difficulty of combining multiple STIP features for action detection is two folds: First, the number of salient patches detected by different STIP methods varies across different salient patches. How to combine such features is not considered by existing fusion methods [13] [5]. Second, the detection in the videos should be efficient, which excludes many slow machine learning algorithms. To handle these two difficulties, we propose a new approach which combines Gaussian Mixture Model with Branch-and-Bound search to efficiently locate the action of interest. We build a new challenging dataset for our action detection task, and our algorithm obtains impressive results. On classical KTH dataset, our method outperforms the state-of-the-art methods.", "This paper describes an experimental study about a robust contour feature (shape-context) for using in action recognition based on continuous hidden Markov models (HMM). We ran different experimental setting using the KTH's database of actions. The image contours are extracted using a standard algorithm. 
The shape-context feature vector is built from a histogram of a set of non-overlapping regions in the image. We show that the combined use of HMM and this feature gives equivalent or better results, in terms of action detection, than current approaches in the literature." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
Other successful methods for single action recognition include Riemannian manifold based approaches and those that use dense trajectory tracking. Riemannian manifolds have been investigated in @cite_17 @cite_3 . In @cite_17 , optical flow features are extracted and a compact covariance matrix representation of such features is then calculated. Such a covariance matrix can be thought of as a point on a Riemannian manifold. An action and gesture recognition method based on spatio-temporal covariance descriptors obtained from optical flow and gradient descriptors is presented in @cite_3 .
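The covariance-as-manifold-point idea in @cite_17 can be sketched in a few lines: compute the empirical covariance of a bag of features, take its matrix logarithm via eigendecomposition, and flatten it to a vector where Euclidean tools apply. The feature matrix below is random stand-in data, not actual optical flow:

```python
import numpy as np

def covariance_descriptor(features):
    """Empirical covariance of a bag of feature vectors
    (rows are samples, columns are features such as flow components)."""
    return np.cov(features, rowvar=False)

def log_euclidean(C, eps=1e-8):
    """Map the SPD covariance matrix into the vector space of symmetric
    matrices via the matrix logarithm (eigendecomposition of a symmetric
    matrix), then flatten the upper triangle into a Euclidean vector."""
    w, V = np.linalg.eigh(C)
    w = np.maximum(w, eps)            # guard against round-off negativity
    L = (V * np.log(w)) @ V.T         # log(C)
    return L[np.triu_indices_from(L)]

rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 5))  # stand-in for per-pixel flow features
desc = log_euclidean(covariance_descriptor(feats))
```

The resulting log-covariance vectors can then be compared with ordinary Euclidean distances or fed to a sparse linear representation, as in @cite_17 .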
{ "cite_N": [ "@cite_3", "@cite_17" ], "mid": [ "1993014362", "2125389556" ], "abstract": [ "We propose a new action and gesture recognition method based on spatio-temporal covariance descriptors and a weighted Riemannian locality preserving projection approach that takes into account the curved space formed by the descriptors. The weighted projection is then exploited during boosting to create a final multiclass classification algorithm that employs the most useful spatio-temporal regions. We also show how the descriptors can be computed quickly through the use of integral video representations. Experiments on the UCF sport, CK+ facial expression and Cambridge hand gesture datasets indicate superior performance of the proposed method compared to several recent state-of-the-art techniques. The proposed method is robust and does not require additional processing of the videos, such as foreground detection, interest-point detection or tracking.", "A novel approach to action recognition in video based on the analysis of optical flow is presented. Properties of optical flow useful for action recognition are captured using only the empirical covariance matrix of a bag of features such as flow velocity, gradient, and divergence. The feature covariance matrix is a low-dimensional representation of video dynamics that belongs to a Riemannian manifold. The Riemannian manifold of covariance matrices is transformed into the vector space of symmetric matrices under the matrix logarithm mapping. The log-covariance matrix of a test action segment is approximated by a sparse linear combination of the log-covariance matrices of training action segments using a linear program, and the coefficients of the sparse linear representation are used to recognize actions. This approach based on the unique blend of a log-covariance descriptor and a sparse linear representation is tested on the Weizmann and KTH datasets. 
The proposed approach attains leave-one-out cross validation scores of 94.4% correct classification rate for the Weizmann dataset and 98.5% for the KTH dataset. Furthermore, the method is computationally efficient and easy to implement." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
Two recent approaches for single action recognition based on tracking of interest points make use of dense trajectories @cite_6 @cite_15 . These dense trajectories describe videos by sampling dense points from each frame and then tracking them based on displacement information from a dense optical flow field. Although this approach obtains good performance, it is computationally expensive, especially the calculation of the dense optical flow, which is computed at several scales.
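The core tracking step of dense trajectories — propagating sampled points through per-frame dense flow fields — can be sketched as follows. A synthetic constant flow and nearest-neighbour flow lookup keep the sketch short; the actual method @cite_6 samples points at several scales and median-filters the flow:

```python
import numpy as np

def track_dense_points(points, flows):
    """Propagate sampled points through a sequence of dense flow fields.
    points: (N, 2) array of (x, y) coordinates; each flow is an
    (H, W, 2) array of per-pixel (dx, dy) displacements."""
    traj = [points.copy()]
    for flow in flows:
        h, w = flow.shape[:2]
        # nearest-neighbour lookup of the flow at each point
        xi = np.clip(np.round(points[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(points[:, 1]).astype(int), 0, h - 1)
        points = points + flow[yi, xi]
        traj.append(points.copy())
    return np.stack(traj)  # shape (T+1, N, 2): one trajectory per point

# Constant rightward flow of one pixel per frame on a 10x10 grid:
flow = np.zeros((10, 10, 2))
flow[..., 0] = 1.0
pts = np.array([[2.0, 3.0], [5.0, 5.0]])
traj = track_dense_points(pts, [flow] * 5)
```

Descriptors such as HOG, HOF, and MBH are then extracted along each trajectory in the full method.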
{ "cite_N": [ "@cite_15", "@cite_6" ], "mid": [ "2105101328", "2068611653" ], "abstract": [ "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. 
The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
Multi-action recognition, in our context, consists of segmenting and recognising single actions from a video sequence in which one person performs a sequence of such actions @cite_10 . Segmenting and recognising multiple actions in a video can be treated either as two independent problems or as a joint problem.
{ "cite_N": [ "@cite_10" ], "mid": [ "2168347767" ], "abstract": [ "Given an input video sequence of one person conducting a sequence of continuous actions, we consider the problem of jointly segmenting and recognizing actions. We propose a discriminative approach to this problem under a semi-Markov model framework, where we are able to define a set of features over input-output space that captures the characteristics on boundary frames, action segments and neighboring action segments, respectively. In addition, we show that this method can also be used to recognize the person who performs in this video sequence. A Viterbi-like algorithm is devised to help efficiently solve the induced optimization problem. Experiments on a variety of datasets demonstrate the effectiveness of the proposed method." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
One of the first methods for multi-action recognition, the Multi-Task Conditional Random Field (MT-CRF), was proposed in @cite_23 . This method classifies motions into multiple labels, e.g. a person folding their arms while sitting. Despite being presented as robust, this approach has only been applied to two synthetic datasets. Two more recent methods @cite_2 @cite_8 have been applied to more realistic multi-action datasets. The authors of @cite_2 deal with the dual problem of human action segmentation and classification. Their approach is presented as a learning framework that simultaneously performs temporal segmentation and event recognition in time series. The supervised training is done via a multi-class SVM, where the SVM weight vectors have to be learnt along with the other SVM parameters. For the segmentation, the learnt weight vectors are used and another set of parameters is optimised (the number of segments, and the minimum and maximum segment lengths). The segmentation is done using dynamic programming. The feature mapping depends on the dataset employed, and includes trajectories, features extracted from binary masks, and STIPs.
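The dynamic-programming segmentation step can be illustrated with a small Viterbi-style recursion: given hypothetical per-frame class scores (standing in for SVM outputs), it finds the label sequence that maximizes the total score minus a penalty per label change. This is a simplification of the segment-level model in @cite_2 , which also optimises segment counts and lengths:

```python
import numpy as np

def segment_viterbi(scores, switch_penalty=1.0):
    """scores: (T, C) per-frame class scores.  Returns the label
    sequence maximizing summed scores minus a fixed penalty for every
    label change, found by Viterbi-style dynamic programming."""
    T, C = scores.shape
    dp = scores[0].astype(float).copy()
    back = np.zeros((T, C), dtype=int)
    for t in range(1, T):
        # cand[c_prev, c]: best score ending at c_prev, plus switch cost
        cand = dp[:, None] - switch_penalty * (1.0 - np.eye(C))
        back[t] = np.argmax(cand, axis=0)
        dp = scores[t] + np.max(cand, axis=0)
    labels = np.empty(T, dtype=int)
    labels[-1] = int(np.argmax(dp))
    for t in range(T - 1, 0, -1):
        labels[t - 1] = back[t, labels[t]]
    return labels

# Three frames favouring class 0 followed by three favouring class 1:
frame_scores = np.array([[2., 0.], [2., 0.], [2., 0.],
                         [0., 2.], [0., 2.], [0., 2.]])
```

With a modest switch penalty the recursion pays the penalty exactly once and places the segment boundary between frames 3 and 4; a very large penalty would force a single constant label.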
{ "cite_N": [ "@cite_8", "@cite_23", "@cite_2" ], "mid": [ "2019586054", "2098989639", "1978511849" ], "abstract": [ "Hidden Markov models (HMMs) provide joint segmentation and classification of sequential data by efficient inference algorithms and have therefore been employed in fields as diverse as speech recognition, document processing, and genomics. However, conventional HMMs do not suit action segmentation in video due to the nature of the measurements which are often irregular in space and time, high dimensional and affected by outliers. For this reason, in this paper we present a joint action segmentation and classification approach based on an extended model: the hidden Markov model for multiple, irregular observations (HMM-MIO). Experiments performed over a concatenated version of the popular KTH action dataset and the challenging CMU multi-modal activity dataset (CMU-MMAC) report accuracies comparable to or higher than those of a bag-of-features approach, showing the usefulness of improved sequential models for joint action segmentation and classification tasks.", "In this paper, we propose a robust recognition and segmentation method for daily actions with a novel multi-task sequence labeling algorithm called multi-task conditional random field (MT-CRF). Multi-Task sequence labeling is a task of assigning input sequence to sequence of multi-labels that consist of one or multiple symbols in single frame. Multi-Task sequence labeling is essential for action recognition, since motions can be often classified into multi-labels, e.g. he is folding arms while sitting. The MT-CRFs: extensions of conditional random fields (CRFs), incorporate jointly interaction between action labels as well as Markov property of actions, to improve the performance of the joint accuracy: the accuracy for whole labels at specific time. 
The MT-CRFs offer several advantages over the generative dynamic Bayesian networks (DBNs), which are often utilized as multi-task sequence labelers. First, the MT-CRFs allow relaxing the strong assumption of conditional independence of observed motion, which is used in DBNs. Second, the MT-CRFs exploit the power of non-Markovian discriminative classification frameworks instead of generative models in DBNs. With deep insight of the problem Multi-Task sequence labeling, the inference process of the classifier gains more efficiency than the previous Markov random fields that tackle multi-task sequence labeling. The experimental results show that classifiers with MT-CRFs have better performance than cascaded classifiers with a couple of CRFs.", "Automatic video segmentation and action recognition has been a long-standing problem in computer vision. Much work in the literature treats video segmentation and action recognition as two independent problems; while segmentation is often done without a temporal model of the activity, action recognition is usually performed on pre-segmented clips. In this paper we propose a novel method that avoids the limitations of the above approaches by jointly performing video segmentation and action recognition. Unlike standard approaches based on extensions of dynamic Bayesian networks, our method is based on a discriminative temporal extension of the spatial bag-of-words model that has been very popular in object recognition. The classification is performed robustly within a multi-class SVM framework whereas the inference over the segments is done efficiently with dynamic programming. Experimental results on honeybee, Weizmann, and Hollywood datasets illustrate the benefits of our approach compared to state-of-the-art methods." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
An approach termed Hidden Markov Model for Multiple, Irregular Observations (HMM-MIO) @cite_8 has also been proposed for the multi-action recognition task. HMM-MIO jointly segments and classifies observations which are irregular in time and space and characterised by high dimensionality. The dimensionality is reduced using probabilistic Principal Component Analysis (PPCA). Moreover, HMM-MIO deals with the heavy tails and outliers exhibited by empirical distributions by modelling the observation densities with a long-tailed distribution, the Student's @math . HMM-MIO requires a search over the following four parameters: (i) the reduced dimension ( @math ), (ii) the number of components in each observation mixture ( @math ), (iii) the degree of the @math -distribution ( @math ), and (iv) the number of cells (or regions) used to deal with the spatial irregularity ( @math ). To this end, for multiple observations of a frame, they postulate:
{ "cite_N": [ "@cite_8" ], "mid": [ "2019586054" ], "abstract": [ "Hidden Markov models (HMMs) provide joint segmentation and classification of sequential data by efficient inference algorithms and have therefore been employed in fields as diverse as speech recognition, document processing, and genomics. However, conventional HMMs do not suit action segmentation in video due to the nature of the measurements which are often irregular in space and time, high dimensional and affected by outliers. For this reason, in this paper we present a joint action segmentation and classification approach based on an extended model: the hidden Markov model for multiple, irregular observations (HMM-MIO). Experiments performed over a concatenated version of the popular KTH action dataset and the challenging CMU multi-modal activity dataset (CMU-MMAC) report accuracies comparable to or higher than those of a bag-of-features approach, showing the usefulness of improved sequential models for joint action segmentation and classification tasks." ] }
1502.01782
1974809759
In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action with a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3%, outperforming a recent HMM-based approach which obtained 71.2%.
where each observation consists of the pair @math of the descriptor @math and the index @math of the cell where it occurs. The frame index, the observation index, the total number of observations, and the hidden states are given by @math , @math , @math , and @math , respectively. As feature descriptors, HMM-MIO extracts the STIPs proposed in @cite_16 , with the default @math -dimensional descriptor. The classification is carried out on a per-frame basis. HMM-MIO also suffers from the drawbacks of a large search over optimal parameters and the use of STIP descriptors.
{ "cite_N": [ "@cite_16" ], "mid": [ "2020163092" ], "abstract": [ "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds." ] }
1502.01385
2114789426
We consider the problem of robustly recovering a @math -sparse coefficient vector from the Fourier series that it generates, restricted to the interval @math . The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of @math ) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm @math , we show upper and lower bounds on the minimax error rate that both scale like @math , providing a partial answer to a question posed by Donoho in 1992. The scaling arises from comparing the noise level to a restricted isometry constant at sparsity @math , or equivalently from comparing @math to the so-called @math -spark of the Fourier system. The proof involves new bounds on the singular values of restricted Fourier matrices, obtained in part from old techniques in complex analysis.
Corollary addresses a special case of a question originally raised by Donoho in 1992 in @cite_16 . In that paper, Donoho recognizes that the "sparse clumps" signal model is the right notion to achieve superresolution. Given a vector @math , he lets @math stand for the smallest integer such that the number of nonzero elements of @math is at most @math within any consecutive subset of cardinality @math times the Rayleigh length. Clearly, the set of vectors that satisfies Donoho's model at level @math includes the @math -sparse vectors. If @math denotes the minimax error of estimating a vector at level @math , under deterministic noise of level @math in @math , then Donoho showed that [ C_{1,r} \, \mathrm{SRF}^{2r-1} \, \delta \le E(r, \delta) \le C_{2,r} \, \mathrm{SRF}^{2r+1} \, \delta. ] Corollary is the statement that there is no gap in this sequence of inequalities --- and that Donoho's lower bound gives the correct scaling --- albeit when @math is understood as sparsity rather than the more general (and more relevant) "sparse clumps" model. It would be very interesting to close the exponent gap in the latter case as well.
{ "cite_N": [ "@cite_16" ], "mid": [ "2018778093" ], "abstract": [ "Consider the problem of recovering a measure @math supported on a lattice of span @math , when measurements are only available concerning the Fourier Transform @math at frequencies @math . If @math is much smaller than the Nyquist frequency @math and the measurements are noisy, then, in general, stable recovery of @math is impossible. In this paper it is shown that if, in addition, we know that the measure @math satisfies certain sparsity constraints, then stable recovery is possible. Say that a set has Rayleigh index less than or equal to R if in any interval of length @math there are at most R elements. Indeed, if the (unknown) support of @math is known, a priori, to have Rayleigh index at most R, then stable recovery is possible with a stability coefficient that grows at most like @math as @math . This result validates certain practical efforts, in spectroscopy, seismic prospecting, and astrono..." ] }
1502.01385
2114789426
We consider the problem of robustly recovering a @math -sparse coefficient vector from the Fourier series that it generates, restricted to the interval @math . The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of @math ) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm @math , we show upper and lower bounds on the minimax error rate that both scale like @math , providing a partial answer to a question posed by Donoho in 1992. The scaling arises from comparing the noise level to a restricted isometry constant at sparsity @math , or equivalently from comparing @math to the so-called @math -spark of the Fourier system. The proof involves new bounds on the singular values of restricted Fourier matrices, obtained in part from old techniques in complex analysis.
Around the same time, @cite_6 established that perfect recovery of @math -sparse vectors was possible from @math low-frequency measurements, and that the mere positivity requirement is a sufficient condition to obtain unique recovery. It is worth comparing this result to very classical work on the trigonometric moment problem @cite_11 , where @math complex measurements suffice to determine @math real-valued phases and @math real-valued positive amplitudes in a model of the form ), sampled uniformly in @math . The observation that @math is the minimum number of noiseless measurements necessary for recovery of a @math -sparse vector is also clear from the more recent literature on sparse approximation.
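The classical fact that this minimal number of noiseless measurements determines the parameters can be made concrete with Prony's method, which recovers the nodes as roots of a linear-prediction polynomial and the amplitudes by least squares. The sketch below assumes exact, noiseless samples of a k-term exponential sum (the node and amplitude values are invented for illustration):

```python
import numpy as np

def prony(y, k):
    """Noiseless Prony's method: recover k nodes z_j and amplitudes c_j
    from the 2k samples y[n] = sum_j c_j * z_j**n, n = 0 .. 2k-1."""
    y = np.asarray(y, dtype=complex)
    # Linear prediction: y[n+k] = -(a_1 y[n+k-1] + ... + a_k y[n])
    A = np.array([y[n:n + k][::-1] for n in range(k)])
    a = np.linalg.solve(A, -y[k:2 * k])
    z = np.roots(np.concatenate(([1.0 + 0j], a)))    # the nodes z_j
    V = np.vander(z, N=2 * k, increasing=True).T     # V[n, j] = z_j**n
    c, *_ = np.linalg.lstsq(V, y, rcond=None)        # the amplitudes
    return z, c

# Two unit-modulus nodes with positive amplitudes, four samples:
z_true = np.exp(2j * np.pi * np.array([0.1, 0.3]))
c_true = np.array([1.0, 2.0])
samples = np.vander(z_true, N=4, increasing=True).T @ c_true
```

In the noisy regime this direct recovery degrades badly, which is exactly the instability quantified by the minimax bounds discussed here.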
{ "cite_N": [ "@cite_6", "@cite_11" ], "mid": [ "129423020", "2045463928" ], "abstract": [ "SUMMARY Maximum entropy (ME) inversion is a non-linear inversion technique for inverse problems where the object to be recovered is known to be positive. It has been applied in areas ranging from radio astronomy to various forms of spectroscopy, sometimes with dramatic success. In some cases, ME has attained an order of magnitude finer resolution and or an order of magnitude smaller noise level than that obtainable by standard linear methods. The dramatic successes all seem to occur in cases where the object to be recovered is 'nearly black': essentially zero in the vast majority of samples. We show that near-blackness is required, both for signal-to-noise enhancements and for superresolution. However, other methods-in particular, minimum 1-norm reconstruction-may exploit near-blackness to an even greater extent.", "Part I: Toeplitz Forms: Preliminaries Orthogonal polynomials. Algebraic properties Orthogonal polynomials. Limit properties The trigonometric moment problem Eigenvalues of Toeplitz forms Generalizations and analogs of Toeplitz forms Further generalizations Certain matrices and integral equations of the Toeplitz type Part II: Applications of Toeplitz Forms: Applications to analytic functions Applications to probability theory Applications to statistics Appendix: Notes and references Bibliography Index." ] }
1502.01385
2114789426
We consider the problem of robustly recovering a @math -sparse coefficient vector from the Fourier series that it generates, restricted to the interval @math . The difficulty of this problem is linked to the superresolution factor SRF, equal to the ratio of the Rayleigh length (inverse of @math ) by the spacing of the grid supporting the sparse vector. In the presence of additive deterministic noise of norm @math , we show upper and lower bounds on the minimax error rate that both scale like @math , providing a partial answer to a question posed by Donoho in 1992. The scaling arises from comparing the noise level to a restricted isometry constant at sparsity @math , or equivalently from comparing @math to the so-called @math -spark of the Fourier system. The proof involves new bounds on the singular values of restricted Fourier matrices, obtained in part from old techniques in complex analysis.
The significance of @math as a threshold for recovery of @math -sparse vectors also plays a prominent role in Donoho and Elad's later work @cite_29 . They define the spark @math of a matrix @math to be the smallest number of linearly dependent columns, and go on to show that the representation of the form @math is unique for any @math -sparse vector @math . We explain in section why our results can be seen as a noise-robust version of this observation: the functional inverse of the lower restricted isometry constant @math , i.e., @math as a function of @math , qualifies as the @math of @math , and equals twice the sparsity level of vectors @math that are robustly recoverable from @math .
{ "cite_N": [ "@cite_29" ], "mid": [ "2949718909" ], "abstract": [ "This paper studies sparse spikes deconvolution over the space of measures. We focus our attention to the recovery properties of the support of the measure, i.e. the location of the Dirac masses. For non-degenerate sums of Diracs, we show that, when the signal-to-noise ratio is large enough, total variation regularization (which is the natural extension of the L1 norm of vectors to the setting of measures) recovers the exact same number of Diracs. We also show that both the locations and the heights of these Diracs converge toward those of the input measure when the noise drops to zero. The exact speed of convergence is governed by a specific dual certificate, which can be computed by solving a linear system. We draw connections between the support of the recovered measure on a continuous domain and on a discretized grid. We show that when the signal-to-noise level is large enough, the solution of the discretized problem is supported on pairs of Diracs which are neighbors of the Diracs of the input measure. This gives a precise description of the convergence of the solution of the discretized problem toward the solution of the continuous grid-free problem, as the grid size tends to zero." ] }
1502.01446
1954109191
The language model is one of the most important modules in statistical machine translation, and currently word-based language models dominate this community. However, many translation models (e.g. phrase-based models) generate the target language sentences by rendering and compositing phrases rather than words. Thus, it is much more reasonable to model dependencies between phrases, but little research has succeeded in solving this problem. In this paper, we tackle this problem by designing a novel phrase-based language model which attempts to solve three key sub-problems: 1) how to define a phrase in a language model; 2) how to determine phrase boundaries in large-scale monolingual data in order to enlarge the training set; 3) how to alleviate the data sparsity problem due to the huge vocabulary size of phrases. By carefully handling these issues, extensive experiments on Chinese-to-English translation show that our phrase-based language model can significantly improve translation quality by up to +1.47 absolute BLEU.
The work most relevant to ours is the bilingual n-gram translation model @cite_0 @cite_15 @cite_17 @cite_7 @cite_3 . Their Markov model, which generates a translation by arranging a sequence of tuples, is very similar to an n-gram language model. In early work, a tuple could be any bilingual phrase pair @cite_0 @cite_15 . More recently, tuples have become minimal translation units (MTUs), the smallest bilingual phrases satisfying the word alignment. durrani2013model and zhang2013beyond perform translation by compositing MTUs with a Markov model. hu2014minimum apply a recurrent neural network to address the sparsity problem of MTUs.
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_0", "@cite_15", "@cite_17" ], "mid": [ "2250651922", "2250610505", "2144879357", "122999227", "2153999629" ], "abstract": [ "Standard phrase-based translation models do not explicitly model context dependence between translation units. As a result, they rely on large phrase pairs and target language models to recover contextual effects in translation. In this work, we explore n-gram models over Minimal Translation Units (MTUs) to explicitly capture contextual dependencies across phrase boundaries in the channel model. As there is no single best direction in which contextual information should flow, we explore multiple decomposition structures as well as dynamic bidirectional decomposition. The resulting models are evaluated in an intrinsic task of lexical selection for MT as well as a full MT system, through n-best reranking. These experiments demonstrate that additional contextual modeling does indeed benefit a phrase-based system and that the direction of conditioning is important. Integrating multiple conditioning orders provides consistent benefit, and the most important directions differ by language pair.", "We introduce recurrent neural network-based Minimum Translation Unit (MTU) models which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs which makes estimation of high-order sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of individual source and target words. Our best results improve the output of a phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.5 BLEU, and we outperform the traditional n-gram based MTU approach by up to 0.8 BLEU.", "This article describes in detail an n-gram approach to statistical machine translation. 
This approach consists of a log-linear combination of a translation model based on n-grams of bilingual units, which are referred to as tuples, along with four specific feature functions. Translation performance, which happens to be in the state of the art, is demonstrated with Spanish-to-English and English-to-Spanish translations of the European Parliament Plenary Sessions (EPPS).", "We present a new reordering model estimated as a standard n-gram language model with units built from morpho-syntactic information of the source and target languages. It can be seen as a model that translates the morpho-syntactic structure of the input sentence, in contrast to standard translation models which take care of the surface word forms. We take advantage of the fact that such units are less sparse than standard translation units to increase the size of bilingual context that is considered during the translation process, thus effectively accounting for mid-range reorderings. Empirical results on French-English and German-English translation tasks show that our model achieves higher translation accuracy levels than those obtained with the widely used lexicalized reordering model.", "N-gram-based models co-exist with their phrase-based counterparts as an alternative SMT framework. Both techniques have pros and cons. While the N-gram-based framework provides a better model that captures both source and target contexts and avoids spurious phrasal segmentation, the ability to memorize and produce larger translation units gives an edge to the phrase-based systems during decoding, in terms of better search performance and superior selection of translation units. In this paper we combine N-gram-based modeling with phrase-based decoding, and obtain the benefits of both approaches. Our experiments show that using this combination not only improves the search accuracy of the N-gram model but that it also improves the BLEU scores. 
Our system outperforms state-of-the-art phrase-based systems (Moses and Phrasal) and N-gram-based systems by a significant margin on German, French and Spanish to English translation tasks." ] }
1502.01423
2949861479
Image feature representation plays an essential role in image recognition and related tasks. The current state-of-the-art feature learning paradigm is supervised learning from labeled data. However, this paradigm requires large-scale category labels, which limits its applicability to domains where labels are hard to obtain. In this paper, we propose a new data-driven feature learning paradigm which does not rely on category labels. Instead, we learn from user behavior data collected on social media. Concretely, we use the image relationship discovered in the latent space from the user behavior data to guide the image feature learning. We collect a large-scale image and user behavior dataset from Behance.net. The dataset consists of 1.9 million images and over 300 million view records from 1.9 million users. We validate our feature learning paradigm on this dataset and find that the learned feature significantly outperforms the state-of-the-art image features in learning better image similarities. We also show that the learned feature performs competitively on various recognition benchmarks.
Image features play an important role in various image recognition problems. There is a rich body of literature in Computer Vision on image features. It is beyond the scope of this paper to do a comprehensive review. Early methods @cite_3 @cite_16 use low-level features, which are more about appearance, while recent methods, such as @cite_13 @cite_4 @cite_9 @cite_2 , focus on high-level features, which are more about semantics. Different from hand-crafted features, features learned directly from data are the current state-of-the-art @cite_1 . Data-driven features are shown to be able to effectively encode both semantics and appearance and outperform previous methods on many recognition benchmarks. But they need a lot of labeled images (on the order of millions) to train properly. Unsupervised feature learning methods, just to name a few @cite_5 @cite_6 @cite_18 @cite_11 @cite_19 , hold significant promise in terms of overcoming the labeled dataset limitation.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2139427956", "", "2122528955", "2130325614", "2162915993", "", "2951702175", "2110628941", "", "2124386111", "2098411764", "" ], "abstract": [ "We present an unsupervised method for learning a hierarchy of sparse feature detectors that are invariant to small shifts and distortions. The resulting feature extractor consists of multiple convolution filters, followed by a feature-pooling layer that computes the max of each filter output within adjacent windows, and a point-wise sigmoid non-linearity. A second level of larger and more invariant features is obtained by training the same algorithm on patches of features from the first level. Training a supervised classifier on these features yields 0.64% error on MNIST, and a 54% average recognition rate on Caltech 101 with 30 training samples per category. While the resulting architecture is similar to convolutional networks, the layer-wise unsupervised training procedure alleviates the over-parameterization problems that plague purely supervised learning procedures, and yields good performance with very few labeled training samples.", "", "We introduce a new descriptor for images which allows the construction of efficient and compact classifiers with good accuracy on object category recognition. The descriptor is the output of a large number of weakly trained object category classifiers on the image. The trained categories are selected from an ontology of visual concepts, but the intention is not to encode an explicit decomposition of the scene. Rather, we accept that existing object category classifiers often encode not the category per se but ancillary image characteristics; and that these ancillary characteristics can combine to represent visual classes unrelated to the constituent categories' semantic meanings. 
The advantage of this descriptor is that it allows object-category queries to be made against image databases using efficient classifiers (efficient at test time) such as linear support vector machines, and allows these queries to be for novel categories. Even when the representation is reduced to 200 bytes per image, classification accuracy on object category recognition is comparable with the state of the art (36% versus 42%), but at orders of magnitude lower computational cost.", "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. 
Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "It is a remarkable fact that images are related to objects constituting them. In this paper, we propose to represent images by using objects appearing in them. We introduce the novel concept of object bank (OB), a high-level image representation encoding object appearance and spatial location information in images. 
OB represents an image based on its response to a large number of pre-trained object detectors, or 'object filters', blind to the testing dataset and visual recognition task. Our OB representation demonstrates promising potential in high-level image recognition tasks. It significantly outperforms traditional low-level image representations in image classification on various benchmark image datasets by using simple, off-the-shelf classification algorithms such as linear SVM and logistic regression. In this paper, we analyze OB in detail, explaining our design choice of OB for achieving its best potential on different types of datasets. We demonstrate that object bank is a high-level representation, from which we can easily discover semantic information of unknown images. We provide guidelines for effectively applying OB to high-level image recognition tasks where it could be easily compressed for efficient computation in practice and is very robust to various classifiers.", "", "An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. 
Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.", "We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.", "" ] }
1502.01423
2949861479
Image feature representation plays an essential role in image recognition and related tasks. The current state-of-the-art feature learning paradigm is supervised learning from labeled data. However, this paradigm requires large-scale category labels, which limits its applicability to domains where labels are hard to obtain. In this paper, we propose a new data-driven feature learning paradigm which does not rely on category labels. Instead, we learn from user behavior data collected on social media. Concretely, we use the image relationship discovered in the latent space from the user behavior data to guide the image feature learning. We collect a large-scale image and user behavior dataset from Behance.net. The dataset consists of 1.9 million images and over 300 million view records from 1.9 million users. We validate our feature learning paradigm on this dataset and find that the learned feature significantly outperforms the state-of-the-art image features in learning better image similarities. We also show that the learned feature performs competitively on various recognition benchmarks.
Our method uses singular-value-decomposition-based collaborative filtering, a well-studied area in recommender systems @cite_14 . In particular, we adopt the ideas in @cite_8 to handle implicit feedback data and combine them with the negative sampling strategy proposed in @cite_10 .
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_8" ], "mid": [ "", "2146682077", "2101409192" ], "abstract": [ "", "KDD-Cup 2011 challenged the community to identify user tastes in music by leveraging Yahoo! Music user ratings. The competition hosted two tracks, which were based on two datasets sampled from the raw data, including hundreds of millions of ratings. The underlying ratings were given to four types of musical items: tracks, albums, artists, and genres, forming a four level hierarchical taxonomy. The challenge started on March 15, 2011 and ended on June 30, 2011 attracting 2389 participants, 2100 of which were active by the end of the competition. The popularity of the challenge is related to the fact that learning a large scale recommender systems is a generic problem, highly relevant to the industry. In addition, the contest drew interest by introducing a number of scientific and technical challenges including dataset size, hierarchical structure of items, high resolution timestamps of ratings, and a non-conventional ranking-based task. This paper provides the organizers' account of the contest, including: a detailed analysis of the datasets, discussion of the contest goals and actual conduct, and lessons learned throughout the contest.", "A common task of recommender systems is to improve customer experience through personalized recommendations based on prior implicit feedback. These systems passively track different sorts of user behavior, such as purchase history, watching habits and browsing activity, in order to model user preferences. Unlike the much more extensively researched explicit feedback, we do not have any direct input from the users regarding their preferences. In particular, we lack substantial evidence on which products consumer dislike. In this work we identify unique properties of implicit feedback datasets. We propose treating the data as indication of positive and negative preference associated with vastly varying confidence levels. 
This leads to a factor model which is especially tailored for implicit feedback recommenders. We also suggest a scalable optimization procedure, which scales linearly with the data size. The algorithm is used successfully within a recommender system for television shows. It compares favorably with well tuned implementations of other known methods. In addition, we offer a novel way to give explanations to recommendations given by this factor model." ] }
1502.00377
2004284939
In order to track moving objects over long ranges against occlusion, interruption, and background clutter, this paper proposes a unified approach for global trajectory analysis. Instead of the traditional frame-by-frame tracking, our method recovers target trajectories based on a short sequence of video frames, e.g., 15 frames. We initially calculate a foreground map at each frame obtained from a state-of-the-art background model. An attribute graph is then extracted from the foreground map, where the graph vertices are image primitives represented by the composite features. With this graph representation, we pose trajectory analysis as a joint task of spatial graph partitioning and temporal graph matching. The task can be formulated as maximum a posteriori (MAP) estimation under the Bayesian framework, in which we integrate the spatio-temporal contexts and the appearance models. The probabilistic inference is achieved by a data-driven Markov chain Monte Carlo algorithm. Given a period of observed frames, the algorithm simulates an ergodic and aperiodic Markov chain, and it visits a sequence of solution states in the joint space of spatial graph partitioning and temporal graph matching. In the experiments, our method is tested on several challenging videos from the public datasets of visual surveillance, and it outperforms the state-of-the-art methods.
In the literature, video object tracking has been intensively studied and many effective methods have been proposed. For single-target tracking, various object appearance models and motion models are well exploited to estimate target state (location, velocity, etc.) @cite_7 @cite_28 @cite_40 @cite_16 @cite_0 . Recently, a class of techniques called "tracking by detection" has been shown to provide promising results @cite_36 @cite_13 @cite_23 @cite_21 @cite_42 . For multi-object tracking (i.e. trajectory analysis), which our method addresses, we shall identify multiple moving targets by associating correspondences between observations and objects as well as estimating the state of each target @cite_2 @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_28", "@cite_36", "@cite_21", "@cite_42", "@cite_0", "@cite_40", "@cite_23", "@cite_2", "@cite_16", "@cite_13" ], "mid": [ "2160388022", "1995903777", "2126701066", "2167089254", "2154398964", "2116877472", "2099263216", "2135221441", "2170626353", "2129475291", "2122088301", "2102772283" ], "abstract": [ "The objective of this paper is to parse object trajectories in surveillance video against occlusion, interruption, and background clutter. We present a spatio-temporal graph (ST-Graph) representation and a cluster sampling algorithm via deferred inference. An object trajectory in the ST-Graph is represented by a bundle of “motion primitives”, each of which consists of a small number of matched features (interesting patches) generated by adaptive feature pursuit and a tracking process. Each motion primitive is a graph vertex and has six bonds connecting to neighboring vertices. Based on the ST-Graph, we jointly solve three tasks: 1) spatial segmentation; 2) temporal correspondence and 3) object recognition, by flipping the labels of the motion primitives. We also adapt the scene geometric and statistical information as a strong prior. Then the inference computation is formulated in a Markov chain and solved by an efficient cluster sampling. We apply the proposed approach to various challenging videos from a number of public datasets and show it outperforms other state-of-the-art methods.", "The goal of this article is to review the state-of-the-art tracking methods, classify them into different categories, and identify new trends. Object tracking, in general, is a challenging problem. Difficulties in tracking objects can arise due to abrupt object motion, changing appearance patterns of both the object and the scene, nonrigid object structures, object-to-object and object-to-scene occlusions, and camera motion. 
Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame. Typically, assumptions are made to constrain the tracking problem in the context of a particular application. In this survey, we categorize the tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. Moreover, we discuss the important issues related to tracking including the use of appropriate image features, selection of motion models, and detection of objects.", "The use of head movements in control applications leaves the hands free for other tasks and utilizes the mobility of the head to acquire and track targets over a wide field of view. We present the results of applying a Kalman filter to generate prediction estimates for tracking head positions. A simple kinematics approach based on the assumption of a piecewise constant acceleration process is suggested and is shown to track head positions with an rms error under 2° for head movements with accelerations smaller than 3000°/s. To account for the wide range of head dynamic characteristics, an adaptive approach with input estimation is developed. The performance of the Kalman filter is compared to that based on a simple polynomial predictor.", "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. 
Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.", "Compared to traditional tracking with fixed cameras, PTZ-camera-based tracking is more challenging due to (i) the lack of reliable background modeling and subtraction; (ii) the appearance and scale of the target changing suddenly and drastically. Tackling these problems, this paper proposes a novel tracking algorithm using patch-based object models and demonstrates its advantages with the PTZ-camera in the application of visual surveillance. In our method, the target model is learned and represented by a set of feature patches whose discriminative power is higher than others. The target model is matched and evaluated by both appearance and motion consistency measurements. The homography between frames is also calculated for scale adaptation. The experiment on several surveillance videos shows that our method outperforms the state-of-the-art approaches.", "Matching the visual appearances of the target over consecutive image frames is the most critical issue in video-based object tracking. Choosing an appropriate distance metric for matching determines its accuracy and robustness, and thus significantly influences the tracking performance. Most existing tracking methods employ fixed pre-specified distance metrics. However, this simple treatment is problematic and limited in practice, because a pre-specified metric is not likely to guarantee the closest match to be the true target of interest. 
This paper presents a new tracking approach that incorporates adaptive metric learning into the framework of visual object tracking. Collecting a set of supervised training samples on-the-fly in the observed video, this new approach automatically learns the optimal distance metric for more accurate matching. The design of the learned metric ensures that the closest match is very likely to be the true target of interest based on the supervised training. Such a learned metric is discriminative and adaptive. This paper substantializes this new approach in a solid case study of adaptive-metric differential tracking, and obtains a closed-form analytical solution to motion estimation and visual tracking. Moreover, this paper extends the basic linear distance metric learning method to a more powerful nonlinear kernel metric learning method. Extensive experiments validate the effectiveness of the proposed approach, and demonstrate the improved performance of the proposed new tracking method.", "In this paper, we develop a novel solution for particle filtering on general graphs. We provide an exact solution for particle filtering on directed cycle-free graphs. The proposed approach relies on a partial-order relation in an antichain decomposition that forms a high-order Markov chain over the partitioned graph. We subsequently derive a closed-form sequential updating scheme for conditional density propagation using particle filtering on directed cycle-free graphs. We also provide an approximate solution for particle filtering on general graphs by splitting graphs with cycles into multiple directed cycle-free subgraphs. We then use the sequential updating scheme by alternating among the directed cycle-free subgraphs to obtain an estimate of the density propagation. 
We rely on the proposed method for particle filtering on general graphs for two video tracking applications: 1) object tracking using high-order Markov chains; and 2) distributed multiple object tracking based on multi-object graphical interaction models. Experimental results demonstrate the improved performance of the proposed approach to particle filtering on graphs compared with existing methods for video tracking.", "Accurate 3D registration is a key issue in the Augmented Reality (AR) applications, particularly where are no markers placed manually. In this paper, an efficient markerless registration algorithm is presented for both outdoor and indoor AR system. This algorithm first calculates the correspondences among frames using fixed region tracking, and then estimates the motion parameters on projective transformation following the homography of the tracked region. To achieve the illumination insensitive tracking, the illumination parameters are solved jointly with motion parameters in each step. Based on the perspective motion parameters of the tracked region, the 3D registration, the camera's pose and position, can be calculated with calibrated intrinsic parameters. A marker-less AR system is described using this algorithm, and the system architecture and working flow are also proposed. Experimental results with comparison quantitatively demonstrate the correctness of the theoretical analysis and the robustness of the registration algorithm.", "A decision philosophy that seeks the avoidance of error by trading off belief of truth and value of information is applied to the problem of recognizing tracks from multiple targets (MTT). A successful MTT methodology should be robust in that its performance degrades gracefully as the conditions of the collection become less favorable to optimal operation. 
By stressing the avoidance, rather than the explicit minimization, of error, the authors obtain a decision rule for trajectory-data association that does not require the resolution of all conflicting hypotheses when the database does not contain sufficient information to do so reliably. This rule, coupled with a set-valued Kalman filter for trajectory estimation, results in a methodology that does not attempt to extract more information from the database than it contains. >", "In multi-target tracking, the maintaining of the correct identity of targets is challenging. In the presented tracking method, accurate target identification is achieved by incorporating the appearance information of the spatial and temporal context of each target. The spatial context of a target involves local background and nearby targets. The first contribution of the paper is to provide a new discriminative model for multi-target tracking with the embedded classification of each target against its context. As a result, the tracker not only searches for the image region similar to the target but also avoids latching on nearby targets or on a background region. The temporal context of a target includes its appearances seen during tracking in the past. The past appearances are used to train a probabilistic PCA that is used as the measurement model of the target at the present. As the second contribution, we develop a new incremental scheme for probabilistic PCA. It can update accurately the full set of parameters including a noise parameter still ignored in related literature. The experiments show robust tracking performance under the condition of severe clutter, occlusions and pose changes.", "We propose a learning-based hierarchical approach of multi-target tracking from a single camera by progressively associating detection responses into longer and longer track fragments (tracklets) and finally the desired target trajectories. 
To define tracklet affinity for association, most previous work relies on heuristically selected parametric models; while our approach is able to automatically select among various features and corresponding non-parametric models, and combine them to maximize the discriminative power on training data by virtue of a HybridBoost algorithm. A hybrid loss function is used in this algorithm because the association of tracklet is formulated as a joint problem of ranking and classification: the ranking part aims to rank correct tracklet associations higher than other alternatives; the classification part is responsible to reject wrong associations when no further association should be done. Experiments are carried out by tracking pedestrians in challenging datasets. We compare our approach with state-of-the-art algorithms to show its improvement in terms of tracking accuracy.", "This paper presents an adaptive tracking algorithm by learning hybrid object templates online in video. The templates consist of multiple types of features, each of which describes one specific appearance structure, such as flatness, texture, or edge corner. Our proposed solution consists of three aspects. First, in order to make the features of different types comparable with each other, a unified statistical measure is defined to select the most informative features to construct the hybrid template. Second, we propose a simple yet powerful generative model for representing objects. This model is characterized by its simplicity since it could be efficiently learnt from the currently observed frames. Last, we present an iterative procedure to learn the object template from the currently observed frames, and to locate every feature of the object template within the observed frames. The former step is referred to as feature pursuit, and the latter step is referred to as feature alignment, both of which are performed over a batch of observations. 
We fuse the results of feature alignment to locate objects within frames. The proposed solution to object tracking is in essence robust against various challenges, including background clutters, low-resolution, scale changes, and severe occlusions. Extensive experiments are conducted over several publicly available databases and the results with comparisons show that our tracking algorithm clearly outperforms the state-of-the-art methods." ] }
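The Multiple Instance Learning idea that recurs in the tracking abstracts above can be sketched in a few lines: training examples are grouped into bags, and a bag counts as positive if at least one instance in it is positive. The noisy-OR bag model and the probability values below are illustrative assumptions, not the actual online MILBoost classifier from the cited work.

```python
# Sketch of the MIL bag-labeling idea used by MIL trackers: a bag of candidate
# image patches is positive if any single patch is positive. Instance
# probabilities here are hypothetical classifier outputs.

def bag_probability(instance_probs):
    """Noisy-OR bag probability: p(bag positive) = 1 - prod(1 - p_i)."""
    p = 1.0
    for pi in instance_probs:
        p *= (1.0 - pi)
    return 1.0 - p

def label_bags(bags, threshold=0.5):
    """Label each bag positive (1) or negative (0) from instance probabilities."""
    return [1 if bag_probability(b) >= threshold else 0 for b in bags]
```

Because only the bag label must be correct, slightly misplaced tracker windows inside a positive bag no longer poison the classifier, which is the robustness argument made in the MIL tracking abstract.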
1502.00377
2004284939
In order to track moving objects in long range against occlusion, interruption, and background clutter, this paper proposes a unified approach for global trajectory analysis. Instead of the traditional frame-by-frame tracking, our method recovers target trajectories based on a short sequence of video frames, e.g., 15 frames. We initially calculate a foreground map at each frame obtained from a state-of-the-art background model. An attribute graph is then extracted from the foreground map, where the graph vertices are image primitives represented by the composite features. With this graph representation, we pose trajectory analysis as a joint task of spatial graph partitioning and temporal graph matching. The task can be formulated by maximizing a posteriori under the Bayesian framework, in which we integrate the spatio-temporal contexts and the appearance models. The probabilistic inference is achieved by a data-driven Markov chain Monte Carlo algorithm. Given a period of observed frames, the algorithm simulates an ergodic and aperiodic Markov chain, and it visits a sequence of solution states in the joint space of spatial graph partitioning and temporal graph matching. In the experiments, our method is tested on several challenging videos from the public datasets of visual surveillance, and it outperforms the state-of-the-art methods.
(I) Sequential inference based methods use the information of the currently observed frame to predict the states of moving targets and assign their identities. Classical examples are particle filtering @cite_41 @cite_0 @cite_28 and optical flow @cite_25 . Recently, Avidan @cite_12 proposed a learning-based tracker using the online Adaboost algorithm, which maintains a discriminative detector to track targets in the current frame, and @cite_36 significantly improved the tracking performance using Multiple Instance Learning (MIL). Despite their success, these approaches may suffer identity loss (or switching) and trajectory fragmentation under mutual interaction, occlusion, and spurious motion, because they make online decisions while discarding global information.
{ "cite_N": [ "@cite_41", "@cite_28", "@cite_36", "@cite_0", "@cite_25", "@cite_12" ], "mid": [ "1499578337", "2126701066", "2167089254", "2099263216", "1990472920", "" ], "abstract": [ "We describe a Markov chain Monte Carlo based particle filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo (MCMC) sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint particle filter. Since a joint particle filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the particle filter with an MCMC sampling step. The resulting filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large scale experiment on a video-sequence of over 10,000 frames in length.", "The use of head movements in control applications leaves the hands free for other tasks and utilizes the mobility of the head to acquire and track targets over a wide field of view. We present the results of applying a Kalman filter to generate prediction estimates for tracking head positions. 
A simple kinematics approach based on the assumption of a piecewise constant acceleration process is suggested and is shown to track head positions with an rms error under 2 spl deg for head movements with accelerations smaller than 3000 spl deg s. To account for the wide range of head dynamic characteristics, an adaptive approach with input estimation is developed. The performance of the Kalman filter is compared to that based on a simple polynomial predictor.", "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.", "In this paper, we develop a novel solution for particle filtering on general graphs. We provide an exact solution for particle filtering on directed cycle-free graphs. The proposed approach relies on a partial-order relation in an antichain decomposition that forms a high-order Markov chain over the partitioned graph. We subsequently derive a closed-form sequential updating scheme for conditional density propagation using particle filtering on directed cycle-free graphs. 
We also provide an approximate solution for particle filtering on general graphs by splitting graphs with cycles into multiple directed cycle-free subgraphs. We then use the sequential updating scheme by alternating among the directed cycle-free subgraphs to obtain an estimate of the density propagation. We rely on the proposed method for particle filtering on general graphs for two video tracking applications: 1) object tracking using high-order Markov chains; and 2) distributed multiple object tracking based on multi-object graphical interaction models. Experimental results demonstrate the improved performance of the proposed approach to particle filtering on graphs compared with existing methods for video tracking.", "Optical flow can be used to segment a moving object from its background provided the velocity of the object is distinguishable from that of the background, and has expected characteristics. Existing optical flow techniques often detect flow (and thus the object) in the background. To overcome this, we propose a new optical flow technique, which only determines optical flow in regions of motion. We also propose a method by which output from a tracking system can be fed back into the motion segmenter optical flow system to reinforce the detected motion, or aid in predicting the optical flow. This technique has been developed for use in person tracking systems, and our testing shows that for this application it is more effective than other commonly used optical flow techniques. When tested within a tracking system, it works with an average position error of less than six and a half pixels, outperforming the current CAVIAR benchmark system.", "" ] }
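The particle-filtering trackers cited in the paragraph above all share the same predict / weight / resample cycle, which can be sketched for a 1-D target position as follows. The random-walk motion model and the noise levels are illustrative assumptions, not values from any of the cited papers.

```python
import math
import random

# One bootstrap particle filter step for a 1-D target position.
def particle_filter_step(particles, measurement, motion_std=1.0, obs_std=2.0):
    # Predict: propagate each particle through a random-walk motion model.
    predicted = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = [math.exp(-0.5 * ((measurement - p) / obs_std) ** 2)
               for p in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to their weights.
    return random.choices(predicted, weights=weights, k=len(particles))

def estimate(particles):
    # Point estimate of the target state: the particle mean.
    return sum(particles) / len(particles)
```

After one step the particle mean moves from the prior toward the measurement, with the amount of pull governed by the ratio of motion to observation noise.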
1502.00256
2145307410
This paper aims at a newly arising task in visual surveillance: re-identifying people at a distance by matching body information, given several reference examples. Most existing works solve this task by matching a reference template with the target individual, but often suffer from large human appearance variability (e.g. different poses/views, illumination) and high false positives in matching caused by conjunctions, occlusions or surrounding clutter. Addressing these problems, we construct a simple yet expressive template from a few reference images of a certain individual, which represents the body as an articulated assembly of compositional and alternative parts, and propose an effective matching algorithm with cluster sampling. This algorithm is designed within a candidacy graph whose vertices are matching candidates (i.e. a pair of source and target body parts), and iterates in two steps until convergence. (i) It generates possible partial matches based on compatible and competitive relations among body parts. (ii) It confirms the partial matches to generate a new matching solution, which is accepted by the Markov Chain Monte Carlo (MCMC) mechanism. In the experiments, we demonstrate the superior performance of our approach on three public databases compared to existing methods.

Global-based methods define a holistic human appearance signature with rich image features and match the given reference images against the observations @cite_2 @cite_24 @cite_18 . For example, D. et al. propose an ensemble of features to deal with viewpoint-invariant recognition. Some methods improve performance by extracting features with region segmentation @cite_5 @cite_25 @cite_12 . Recently, advanced learning techniques have been employed for more reliable matching metrics @cite_15 , more representative features @cite_10 , and a more expressive multi-valued mapping function @cite_8 . Despite their acknowledged success, this category of methods often struggles to handle large pose/view variance and occlusions.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_24", "@cite_2", "@cite_5", "@cite_15", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "1979260620", "129737085", "2122378993", "1518138188", "2154330368", "1991452654", "1887734902", "2096306138", "2062368515" ], "abstract": [ "In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "This paper proposes a novel approach for pedestrian re-identification. Previous re-identification methods use one of 3 approaches: invariant features; designing metrics that aim to bring instances of shared identities close to one another and instances of different identities far from one another; or learning a transformation from the appearance in one domain to the other. Our implicit approach models camera transfer by a binary relation R= (x,y)|x and y describe the same person seen from cameras A and B respectively . This solution implies that the camera transfer function is a multi-valued mapping and not a single-valued transformation, and does not assume the existence of a metric with desirable properties. 
We present an algorithm that follows this approach and achieves new state-of-the-art performance.", "Appearance information is essential for applications such as tracking and people recognition. One of the main problems of using appearance-based discriminative models is the ambiguities among classes when the number of persons being considered increases. To reduce the amount of ambiguity, we propose the use of a rich set of feature descriptors based on color, textures and edges. Another issue regarding appearance modeling is the limited number of training samples available for each appearance. The discriminative models are created using a powerful statistical tool called Partial Least Squares (PLS), responsible for weighting the features according to their discriminative power for each different appearance. The experimental results, based on appearance-based person recognition, demonstrate that the use of an enriched feature set analyzed by PLS reduces the ambiguity among different appearances and provides higher recognition rates when compared to other machine learning techniques.", "Viewpoint invariant pedestrian recognition is an important yet under-addressed problem in computer vision. This is likely due to the difficulty in matching two objects with unknown viewpoint and pose. This paper presents a method of performing viewpoint invariant pedestrian recognition using an efficiently and intelligently designed object representation, the ensemble of localized features (ELF). Instead of designing a specific feature by hand to solve the problem, we define a feature space using our intuition about the problem and let a machine learning algorithm find the best representation. We show how both an object class specific representation and a discriminative recognition model can be learned using the AdaBoost algorithm. This approach allows many different kinds of simple features to be combined into a single similarity function. 
The method is evaluated using a viewpoint invariant pedestrian recognition dataset and the results are shown to be superior to all previous benchmarks for both recognition and reacquisition of pedestrians.", "Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between \"ground-points\" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties; 1) camera calibration is not needed; 2) accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise; 3) based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.", "Matching people across non-overlapping camera views, known as person re-identification, is challenging due to the lack of spatial and temporal constraints and large visual appearance changes caused by variations in view angle, lighting, background clutter and occlusion. To address these challenges, most previous approaches aim to extract visual features that are both distinctive and stable under appearance changes. 
However, most visual features and their combinations under realistic conditions are neither stable nor distinctive thus should not be used indiscriminately. In this paper, we propose to formulate person re-identification as a distance learning problem, which aims to learn the optimal distance that can maximises matching accuracy regardless the choice of representation. To that end, we introduce a novel Probabilistic Relative Distance Comparison (PRDC) model, which differs from most existing distance learning methods in that, rather than minimising intra-class variation whilst maximising intra-class variation, it aims to maximise the probability of a pair of true match having a smaller distance than that of a wrong match pair. This makes our model more tolerant to appearance changes and less susceptible to model over-fitting. Extensive experiments are carried out to demonstrate that 1) by formulating the person re-identification problem as a distance learning problem, notable improvement on matching accuracy can be obtained against conventional person re-identification techniques, which is particularly significant when the training sample size is small; and 2) our PRDC outperforms not only existing distance learning methods but also alternative learning methods based on boosting and learning to rank.", "State-of-the-art person re-identification methods seek robust person matching through combining various feature types. Often, these features are implicitly assigned with a single vector of global weights, which are assumed to be universally good for all individuals, independent to their different appearances. In this study, we show that certain features play more important role than others under different circumstances. Consequently, we propose a novel unsupervised approach for learning a bottom-up feature importance, so features extracted from different individuals are weighted adaptively driven by their unique and inherent appearance attributes. 
Extensive experiments on two public datasets demonstrate that attribute-sensitive feature importance facilitates more accurate person matching when it is fused together with global weights obtained using existing methods.", "In this work we develop appearance models for computing the similarity between image regions containing deformable objects of a given class in realtime. We introduce the concept of shape and appearance context. The main idea is to model the spatial distribution of the appearance relative to each of the object parts. Estimating the model entails computing occurrence matrices. We introduce a generalization of the integral image and integral histogram frameworks, and prove that it can be used to dramatically speed up occurrence computation. We demonstrate the ability of this framework to recognize an individual walking across a network of cameras. Finally, we show that the proposed approach outperforms several other methods.", "In many surveillance systems there is a requirement todetermine whether a given person of interest has alreadybeen observed over a network of cameras. This paperpresents two approaches for this person re-identificationproblem. In general the human appearance obtained in onecamera is usually different from the ones obtained in anothercamera. In order to re-identify people the human signatureshould handle difference in illumination, pose andcamera parameters. Our appearance models are based onhaar-like features and dominant color descriptors. The AdaBoostscheme is applied to both descriptors to achieve themost invariant and discriminative signature. The methodsare evaluated using benchmark video sequences with differentcamera views where people are automatically detectedusing Histograms of Oriented Gradients (HOG). The reidentificationperformance is presented using the cumulativematching characteristic (CMC) curve." ] }
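Several of the re-identification abstracts above report results as a cumulative matching characteristic (CMC) curve, where CMC(k) is the fraction of probe queries whose correct gallery identity appears within the top-k ranked matches. A minimal sketch of that evaluation, on a made-up toy distance matrix, looks like this.

```python
# Sketch of the CMC evaluation used by the re-identification papers above.
# dist[i][j] is the distance between probe i and gallery j (lower = better).

def cmc_curve(dist, probe_ids, gallery_ids):
    n = len(probe_ids)
    hits = [0] * len(gallery_ids)
    for i, pid in enumerate(probe_ids):
        # Rank gallery entries by increasing distance to this probe.
        ranking = sorted(range(len(gallery_ids)), key=lambda j: dist[i][j])
        # Rank position of the correct identity (assumed present in gallery).
        rank = next(r for r, j in enumerate(ranking) if gallery_ids[j] == pid)
        for k in range(rank, len(gallery_ids)):
            hits[k] += 1
    return [h / n for h in hits]
```

The curve is non-decreasing in k and reaches 1.0 once k covers the worst-ranked correct match, which is why papers typically quote rank-1 accuracy alongside the full curve.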
1502.00256
2145307410
This paper aims at a newly arising task in visual surveillance: re-identifying people at a distance by matching body information, given several reference examples. Most existing works solve this task by matching a reference template with the target individual, but often suffer from large human appearance variability (e.g. different poses/views, illumination) and high false positives in matching caused by conjunctions, occlusions or surrounding clutter. Addressing these problems, we construct a simple yet expressive template from a few reference images of a certain individual, which represents the body as an articulated assembly of compositional and alternative parts, and propose an effective matching algorithm with cluster sampling. This algorithm is designed within a candidacy graph whose vertices are matching candidates (i.e. a pair of source and target body parts), and iterates in two steps until convergence. (i) It generates possible partial matches based on compatible and competitive relations among body parts. (ii) It confirms the partial matches to generate a new matching solution, which is accepted by the Markov Chain Monte Carlo (MCMC) mechanism. In the experiments, we demonstrate the superior performance of our approach on three public databases compared to existing methods.
Compositional approaches re-identify people using part-based measures: they first localize salient body parts and then search for part-to-part correspondences between reference samples and observations. These methods show promising results in very challenging scenarios @cite_4 , benefiting from powerful part-based object detectors. For example, N. et al. @cite_1 adopt a decomposable triangulated graph to represent the person configuration, and @cite_22 introduces the pictorial structures model for human re-identification. In addition, modeling the contextual correlation between body parts is discussed in @cite_3 .
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_22", "@cite_3" ], "mid": [ "2131255818", "", "2126791727", "2042436258" ], "abstract": [ "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.", "", "We propose a novel methodology for re-identification, based on Pictorial Structures (PS). Whenever face or other biometric information is missing, humans recognize an individual by selectively focusing on the body parts, looking for part-to-part correspondences. We want to take inspiration from this strategy in a re-identification context, using PS to achieve this objective. For single image re-identification, we adopt PS to localize the parts, extract and match their descriptors. 
When multiple images of a single individual are available, we propose a new algorithm to customize the fit of PS on that specific person, leading to what we call a Custom Pictorial Structure (CPS). CPS learns the appearance of an individual, improving the localization of its parts, thus obtaining more reliable visual characteristics for re-identification. It is based on the statistical learning of pixel attributes collected through spatio-temporal reasoning. The use of PS and CPS leads to state-of-the-art results on all the available public benchmarks, and opens a fresh new direction for research on re-identification.", "In many surveillance systems there is a requirement todetermine whether a given person of interest has alreadybeen observed over a network of cameras. This is the personre-identification problem. The human appearance obtainedin one camera is usually different from the ones obtained inanother camera. In order to re-identify people the humansignature should handle difference in illumination, pose andcamera parameters. We propose a new appearance modelbased on spatial covariance regions extracted from humanbody parts. The new spatial pyramid scheme is applied tocapture the correlation between human body parts in orderto obtain a discriminative human signature. The humanbody parts are automatically detected using Histograms ofOriented Gradients (HOG). The method is evaluated usingbenchmark video sequences from i-LIDS Multiple-CameraTracking Scenario data set. The re-identification performanceis presented using the cumulative matching characteristic(CMC) curve. Finally, we show that the proposedapproach outperforms state of the art methods." ] }
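The part-based matching described in the pictorial-structure abstracts above scores each candidate correspondence by combining an appearance distance with a spatial deformation penalty. The following sketch uses a greedy one-to-one assignment; the feature vectors, positions, and the deformation weight are hypothetical, not taken from any cited model.

```python
# Sketch of part-to-part matching in the spirit of pictorial-structure
# re-identification: cost = appearance distance + weighted deformation.

def part_cost(ref_feat, tgt_feat, ref_pos, tgt_pos, deform_weight=0.1):
    # Euclidean appearance distance between part feature vectors.
    app = sum((a - b) ** 2 for a, b in zip(ref_feat, tgt_feat)) ** 0.5
    # Euclidean displacement between the expected and observed part positions.
    deform = ((ref_pos[0] - tgt_pos[0]) ** 2 + (ref_pos[1] - tgt_pos[1]) ** 2) ** 0.5
    return app + deform_weight * deform

def match_parts(ref_parts, tgt_parts):
    """Greedy one-to-one part assignment by lowest cost; returns total cost."""
    total, used = 0.0, set()
    for rf, rp in ref_parts:
        best = min((part_cost(rf, tf, rp, tp), j)
                   for j, (tf, tp) in enumerate(tgt_parts) if j not in used)
        total += best[0]
        used.add(best[1])
    return total
```

A greedy assignment is only a stand-in here; the cited methods instead optimize the joint configuration (e.g. over a tree or candidacy graph), which is what makes them robust to the occlusions and conjunctions discussed above.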
1502.00256
2145307410
This paper aims at a newly arising task in visual surveillance: re-identifying people at a distance by matching body information, given several reference examples. Most existing works solve this task by matching a reference template with the target individual, but often suffer from large human appearance variability (e.g. different poses/views, illumination) and high false positives in matching caused by conjunctions, occlusions or surrounding clutter. Addressing these problems, we construct a simple yet expressive template from a few reference images of a certain individual, which represents the body as an articulated assembly of compositional and alternative parts, and propose an effective matching algorithm with cluster sampling. This algorithm is designed within a candidacy graph whose vertices are matching candidates (i.e. a pair of source and target body parts), and iterates in two steps until convergence. (i) It generates possible partial matches based on compatible and competitive relations among body parts. (ii) It confirms the partial matches to generate a new matching solution, which is accepted by the Markov Chain Monte Carlo (MCMC) mechanism. In the experiments, we demonstrate the superior performance of our approach on three public databases compared to existing methods.
Many works @cite_1 @cite_18 @cite_22 utilize multiple reference instances per individual, i.e. multi-shot approaches, but they ignore occlusions and conjunctions in the target images and re-identify the target by computing a one-to-many distance, while we explicitly handle these problems by exploiting reconfigurable compositions and contextual interactions during inference.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_22" ], "mid": [ "1979260620", "2131255818", "2126791727" ], "abstract": [ "In this paper, we present an appearance-based method for person re-identification. It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. 
Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.", "We propose a novel methodology for re-identification, based on Pictorial Structures (PS). Whenever face or other biometric information is missing, humans recognize an individual by selectively focusing on the body parts, looking for part-to-part correspondences. We want to take inspiration from this strategy in a re-identification context, using PS to achieve this objective. For single image re-identification, we adopt PS to localize the parts, extract and match their descriptors. When multiple images of a single individual are available, we propose a new algorithm to customize the fit of PS on that specific person, leading to what we call a Custom Pictorial Structure (CPS). CPS learns the appearance of an individual, improving the localization of its parts, thus obtaining more reliable visual characteristics for re-identification. It is based on the statistical learning of pixel attributes collected through spatio-temporal reasoning. The use of PS and CPS leads to state-of-the-art results on all the available public benchmarks, and opens a fresh new direction for research on re-identification." ] }
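The matching algorithm above pairs source and target body parts through a candidacy graph explored with MCMC cluster sampling. As a far simpler, hedged illustration of part-to-part matching with false-positive rejection — a greedy one-to-one assignment with a distance threshold, not the paper's method, and with all part names and features hypothetical — one might sketch:

```python
def match_parts(src, tgt, threshold=1.0):
    """Greedily pair source and target part descriptors one-to-one.

    src, tgt: dicts mapping a part name to a feature vector (list of floats).
    Pairs whose distance exceeds `threshold` are rejected, a crude stand-in
    for suppressing false positives caused by clutter or occlusion.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # enumerate all candidate pairs, cheapest first
    candidates = sorted(
        ((dist(fa, fb), pa, pb) for pa, fa in src.items() for pb, fb in tgt.items()),
        key=lambda t: t[0],
    )
    used_src, used_tgt, matches = set(), set(), {}
    for d, pa, pb in candidates:
        if d <= threshold and pa not in used_src and pb not in used_tgt:
            matches[pa] = pb
            used_src.add(pa)
            used_tgt.add(pb)
    return matches
```

Unlike the candidacy-graph formulation, this greedy pass cannot revise early commitments; it only shows why a rejection threshold matters when target images contain distractor regions.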
1502.00193
2163178248
Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transition of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks.
Using EA to train ANNs has become an active research topic. Many EAs, e.g. genetic algorithm (GA) @cite_13 , simulated annealing (SA) @cite_27 , and particle swarm optimization (PSO) @cite_25 have been used. Yet relatively few "invasive" methods have been studied to achieve the best performance of EA-based neural networks. Sexton used Tabu Search (TS) for neural network training @cite_5 , where TS was used to train a fixed neural network with six hidden-layer neurons. The TS solution is given in the form of vectors representing all the weights of the network. The testing data set was a collection of randomly generated two-dimensional points @math where @math and @math . The output data set was generated by simple mathematical functions. The result demonstrated that TS-based networks could outperform conventional BP-derived networks. SA and GA were also implemented for the same data set @cite_22 .
{ "cite_N": [ "@cite_22", "@cite_27", "@cite_5", "@cite_13", "@cite_25" ], "mid": [ "1996688253", "2157080539", "2001646920", "2114053544", "2021309800" ], "abstract": [ "The escalation of Neural Network research in Business has been brought about by the ability of neural networks, as a tool, to closely approximate unknown functions to any degree of desired accuracy. Although, gradient based search techniques such as back-propagation are currently the most widely used optimization techniques for training neural networks, it has been shown that these gradient techniques are severely limited in their ability to find global solutions. Global search techniques have been identified as a potential solution to this problem. In this paper we examine two well known global search techniques, Simulated Annealing and the Genetic Algorithm, and compare their performance. A Monte Carlo study was conducted in order to test the appropriateness of these global search techniques for optimizing neural networks.", "This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) network weights and architectures, in order to generate topologies with few connections and high classification performance for any data sets. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained with four classification problems and one prediction problem has shown to be better than those obtained by the most commonly used optimization techniques", "The ability of neural networks to closely approximate unknown functions to any degree of desired accuracy has generated considerable demand for neural network research in business. 
The attractiveness of neural network research stems from researchers' need to approximate models within the business environment without having a priori knowledge about the true underlying function. Gradient techniques, such as backpropagation, are currently the most widely used methods for neural network optimization. Since these techniques search for local solutions, they are subject to local convergence and thus can perform poorly even on simple problems when forecasting out-of-sample. Consequently, a global search algorithm is warranted. In this paper we examine tabu search (TS) as a possible alternative to the problematic backpropagation approach. A Monte Carlo study was conducted to test the appropriateness of TS as a global search technique for optimizing neural networks. Holding the neural network architecture constant, 530 independent runs were conducted for each of seven test functions, including a production function that exhibits both increasing and diminishing marginal returns and the Mackey-Glass chaotic time series. In the resulting comparison, TS derived solutions that were significantly superior to those of backpropagation solutions for in-sample, interpolation, and extrapolation test data for all seven test functions. It was also shown that fewer function evaluations were needed to find these optimal values.", "In this paper, a hybrid Taguchi-genetic algorithm (HTGA) is applied to solve the problem of tuning both network structure and parameters of a feedforward neural network. The HTGA approach is a method of combining the traditional genetic algorithm (TGA), which has a powerful global exploration capability, with the Taguchi method, which can exploit the optimum offspring. The Taguchi method is inserted between crossover and mutation operations of a TGA. 
Then, the systematic reasoning ability of the Taguchi method is incorporated in the crossover operations to select the better genes to achieve crossover, and consequently enhance the genetic algorithms. Therefore, the HTGA approach can be more robust, statistically sound, and quickly convergent. First, the authors evaluate the performance of the presented HTGA approach by studying some global numerical optimization problems. Then, the presented HTGA approach is effectively applied to solve three examples on forecasting the sunspot numbers, tuning the associative memory, and solving the XOR problem. The numbers of hidden nodes and the links of the feedforward neural network are chosen by increasing them from small numbers until the learning performance is good enough. As a result, a partially connected feedforward neural network can be obtained after tuning. This implies that the cost of implementation of the neural network can be reduced. In these studied problems of tuning both network structure and parameters of a feedforward neural network, there are many parameters and numerous local optima so that these studied problems are challenging enough for evaluating the performances of any proposed GA-based approaches. The computational experiments show that the presented HTGA approach can obtain better results than the existing method reported recently in the literature.", "This paper presents an improved particle swarm optimization (PSO) and discrete PSO (DPSO) with an enhancement operation by using a self-adaptive evolution strategies (ES). This improved PSO DPSO is proposed for joint optimization of three-layer feedforward artificial neural network (ANN) structure and parameters (weights and bias), which is named ESPNet. The experimental results on two real-world problems show that ESPNet can produce compact ANNs with good generalization ability." ] }
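The records above contrast gradient-based BP with global-search training of a fixed-architecture network. A minimal sketch of that idea — not Sexton's TS setup; the (1+1) Gaussian-mutation loop, the 2-2-1 topology, and the XOR task are all illustrative assumptions — could look like:

```python
import math
import random

random.seed(0)

# XOR data set, a standard toy benchmark for non-linearly separable learning
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    # clamp to avoid math.exp overflow if mutated weights grow large
    if x < -60:
        return 0.0
    if x > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    # fixed 2-2-1 topology; w packs all 9 weights/biases in one flat vector
    h0 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h0 + w[7] * h1 + w[8])

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(generations=3000, sigma=0.5):
    # (1+1) mutation-selection: no gradients, only perturb-and-compare
    best = [random.uniform(-1, 1) for _ in range(9)]
    best_err = mse(best)
    for _ in range(generations):
        child = [wi + random.gauss(0, sigma) for wi in best]
        err = mse(child)
        if err < best_err:  # survivor selection: keep the better network
            best, best_err = child, err
    return best, best_err
```

Only improving mutations are kept, so the error is monotonically non-increasing; real TS, SA, or GA variants add memory structures, cooling schedules, or populations on top of this skeleton.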
1502.00193
2163178248
Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transition of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks.
Angeline proposed GeNeralized Acquisition of Recurrent Links (GNARL), an evolutionary-programming method for training ANNs @cite_15 . Instead of using symmetric topology, GNARL employs sparse connections of neural networks to represent the network structures. GNARL uses a mutation operation to evolve the structure and to tune the weights of networks, and reserves the top 50% of the population as parents in each generation. A Constructive algorithm for training Cooperative Neural Network Ensembles (CNNE), proposed by M. Islam @cite_20 , uses a constructive algorithm to evolve neural networks. CNNE relies on the contribution of individuals in the population and uses incremental learning to maintain the diversity among individuals in an ensemble. Incremental learning based on negative correlation can effectively reduce the redundancy generated by individuals searching the same solution space, so that different individuals learn different aspects of the training data, which together yield the final ensemble solution. CNNE is a "noninvasive" method which relies on a proper implementation of BP. Though CNNE mitigates optimization problems by using ensembles, it suffers from the "structural climbing problem" @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_20" ], "mid": [ "2138784882", "2106390255" ], "abstract": [ "Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.", "Presents a constructive algorithm for training cooperative neural-network ensembles (CNNEs). CNNE combines ensemble architecture design with cooperative training for individual neural networks (NNs) in ensembles. Unlike most previous studies on training ensembles, CNNE puts emphasis on both accuracy and diversity among individual NNs in an ensemble. In order to maintain accuracy among individual NNs, the number of hidden nodes in individual NNs are also determined by a constructive approach. Incremental training based on negative correlation is used in CNNE to train individual NNs for different numbers of training epochs. The use of negative correlation learning and different training epochs for training individual NNs reflect CNNE's emphasis on diversity among individual NNs in an ensemble. 
CNNE has been tested extensively on a number of benchmark problems in machine learning and neural networks, including Australian credit card assessment, breast cancer, diabetes, glass, heart disease, letter recognition, soybean, and Mackey-Glass time series prediction problems. The experimental results show that CNNE can produce NN ensembles with good generalization ability." ] }
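CNNE's reliance on diverse ensemble members has a simple arithmetic core: averaging members whose errors point in different directions is never worse than the average member. The sketch below illustrates only that combination step — the biased linear "members" are illustrative stand-ins, not CNNE's constructively trained networks or its negative-correlation training:

```python
def ensemble_predict(members, x):
    # ensembles like CNNE combine member outputs; simple averaging shown here
    outs = [m(x) for m in members]
    return sum(outs) / len(outs)

def make_member(bias):
    # hypothetical "trained" member: the target f(x) = 2x plus a fixed bias,
    # standing in for networks that err in different directions (diversity)
    return lambda x: 2 * x + bias

members = [make_member(b) for b in (-0.3, 0.1, 0.25)]
xs = [i / 10 for i in range(10)]

def mse(pred):
    # error against the true target f(x) = 2x on a small grid of inputs
    return sum((pred(x) - 2 * x) ** 2 for x in xs) / len(xs)

avg_member_mse = sum(mse(m) for m in members) / len(members)
ensemble_mse = mse(lambda x: ensemble_predict(members, x))
```

Because the members' biases partly cancel, the averaged prediction is far closer to the target than a typical member, which is exactly the payoff CNNE's diversity-enforcing training is after.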
1502.00193
2163178248
Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transition of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks.
S. He proposed a Group Search Optimizer-based ANN (GSOANN) @cite_7 , which uses Group Search Optimizer, a population-based optimization algorithm inspired by animal social foraging behavior, to train the networks with the least-squared error function as the fitness function.
{ "cite_N": [ "@cite_7" ], "mid": [ "2125281549" ], "abstract": [ "Nature-inspired optimization algorithms, notably evolutionary algorithms (EAs), have been widely used to solve various scientific and engineering problems because of their simplicity and flexibility. Here we report a novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. Based on this framework, concepts from animal searching behavior, e.g., animal scanning mechanisms, are employed metaphorically to design optimum searching strategies for solving continuous optimization problems. When tested against benchmark functions, in low and high dimensions, the GSO algorithm has competitive performance to other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems. The GSO algorithm is also applied to train artificial neural networks. The promising results on three real-world benchmark problems show the applicability of GSO for problem solving." ] }
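GSO's producer–scrounger framework described above can be caricatured in a few lines. Everything below — the 20-member group, the 0.2/0.7/0.1 role split, the step sizes, and the sphere benchmark — is an illustrative assumption rather than the paper's algorithm, which uses animal-scanning mechanics this sketch omits:

```python
import random

random.seed(1)

def sphere(x):
    # toy benchmark objective: minimum value 0 at the origin
    return sum(xi * xi for xi in x)

def gso_sketch(dim=5, members=20, iters=200):
    group = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(members)]
    best = min(group, key=sphere)
    for _ in range(iters):
        producer = min(group, key=sphere)  # best member leads the search
        new_group = []
        for member in group:
            r = random.random()
            if r < 0.2:
                # producer-like scan: small random probe near the leader
                cand = [p + random.gauss(0, 0.3) for p in producer]
            elif r < 0.9:
                # scrounger: move a random fraction of the way to the leader
                step = random.random()
                cand = [m + step * (p - m) for m, p in zip(member, producer)]
            else:
                # ranger: free random walk to keep exploring
                cand = [m + random.gauss(0, 1.0) for m in member]
            new_group.append(cand)
        group = new_group
        best = min(best, min(group, key=sphere), key=sphere)
    return best
```

The scroungers contract the group around the current best member while rangers preserve exploration; for GSOANN-style training one would replace `sphere` with a network's least-squared error over the training set.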
1502.00193
2163178248
Evolutionary algorithms (EAs) are very popular tools to design and evolve artificial neural networks (ANNs), especially to train them. These methods have advantages over the conventional backpropagation (BP) method because of their low computational requirement when searching in a large solution space. In this paper, we employ Chemical Reaction Optimization (CRO), a newly developed global optimization method, to replace BP in training neural networks. CRO is a population-based metaheuristic mimicking the transition of molecules and their interactions in a chemical reaction. Simulation results show that CRO outperforms many EA strategies commonly used to train neural networks.
Paulito P. Palmes proposed a mutation-based genetic neural network (MGNN) employing a specially designed mutation strategy to perturb the chromosomes representing neural networks @cite_16 . MGNN is very similar to GNARL except that it implements selection, encoding, mutation, the fitness function, and the stopping criteria differently. MGNN's encoding scheme allows a flexible formulation of the fitness function and adopts the local-adaptation mutation strategy of evolutionary programming, and it implements a stopping criterion using a "sliding window" to track the state of overfitness.
{ "cite_N": [ "@cite_16" ], "mid": [ "2112061072" ], "abstract": [ "Evolving gradient-learning artificial neural networks (ANNs) using an evolutionary algorithm (EA) is a popular approach to address the local optima and design problems of ANN. The typical approach is to combine the strength of backpropagation (BP) in weight learning and EA's capability of searching the architecture space. However, the BP's \"gradient descent\" approach requires a highly computer-intensive operation that relatively restricts the search coverage of EA by compelling it to use a small population size. To address this problem, we utilized mutation-based genetic neural network (MGNN) to replace BP by using the mutation strategy of local adaptation of evolutionary programming (EP) to effect weight learning. The MGNN's mutation enables the network to dynamically evolve its structure and adapt its weights at the same time. Moreover, MGNN's EP-based encoding scheme allows for a flexible and less restricted formulation of the fitness function and makes fitness computation fast and efficient. This makes it feasible to use larger population sizes and allows MGNN to have a relatively wide search coverage of the architecture space. MGNN implements a stopping criterion where overfitness occurrences are monitored through \"sliding-windows\" to avoid premature learning and overlearning. Statistical analysis of its performance to some well-known classification problems demonstrate its good generalization capability. It also reveals that locally adapting or scheduling the strategy parameters embedded in each individual network may provide a proper balance between the local and global searching capabilities of MGNN." ] }
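The EP-style local adaptation that MGNN borrows — each individual carries its own mutation step sizes, which are perturbed log-normally before mutating the weights — can be sketched on a toy objective. The quadratic stand-in, the value of tau, and the (1+1) selection are illustrative assumptions, not MGNN's actual network-level setup or its sliding-window stopping rule:

```python
import math
import random

random.seed(2)

def fitness(w):
    # toy stand-in for a network's validation error; minimum 0 at all-ones
    return sum((wi - 1.0) ** 2 for wi in w)

def ep_step(w, sigma, tau=0.3):
    # EP local adaptation: the step sizes themselves mutate log-normally,
    # then the self-adapted step sizes mutate the weights
    new_sigma = [s * math.exp(tau * random.gauss(0, 1)) for s in sigma]
    new_w = [wi + s * random.gauss(0, 1) for wi, s in zip(w, new_sigma)]
    return new_w, new_sigma

def evolve(dim=4, iters=500):
    w = [random.uniform(-2, 2) for _ in range(dim)]
    sigma = [0.5] * dim  # one self-adapted step size per weight
    for _ in range(iters):
        cand_w, cand_sigma = ep_step(w, sigma)
        if fitness(cand_w) <= fitness(w):  # (1+1) survivor selection
            w, sigma = cand_w, cand_sigma
    return w
```

Because the step sizes ride along with the individual, successful individuals implicitly carry step sizes suited to the current region of the search space, which is the balance between local and global search the MGNN abstract alludes to.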