Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict)
1601.00062
2229399188
We propose two practical non-convex approaches for learning near-isometric, linear embeddings of finite sets of data points. Given a set of training points @math , we consider the secant set @math that consists of all pairwise difference vectors of @math , normalized to lie on the unit sphere. The problem can be formulated as finding a symmetric and positive semi-definite matrix @math that preserves the norms of all the vectors in @math up to a distortion parameter @math . Motivated by non-negative matrix factorization, we reformulate our problem as a Frobenius norm minimization, solve it via the Alternating Direction Method of Multipliers (ADMM), and develop an algorithm, FroMax. Another method solves for a projection matrix @math by minimizing the restricted isometry property (RIP) directly over the set of symmetric, positive semi-definite matrices. Applying ADMM and a Moreau decomposition on a proximal mapping, we develop another algorithm, NILE-Pro, for dimensionality reduction. FroMax is shown to converge faster for smaller @math , while NILE-Pro converges faster for larger @math . Both non-convex approaches are then empirically demonstrated to be more computationally efficient than prior convex approaches for a number of applications in machine learning and signal processing.
Using the geometric structure of the data, Hegde et al. developed a new deterministic approach, NuMax, to construct a near-isometric, linear embedding @cite_17 . Given a training set @math , the secant set is constructed by taking all pairwise difference vectors of @math , which are then normalized to lie on the unit sphere. Hegde et al. formulated a rank minimization problem with affine constraints to construct a @math that preserves norms of all vectors in @math up to a distortion parameter @math . They then relax this problem to a convex program that can be solved using a tractable semidefinite program (SDP), with column generation for large data sets, and develop NuMax based on the Alternating Direction Method of Multipliers (ADMM). This framework deterministically produces a near-isometric linear embedding. Other algorithmic approaches for finding near-isometric linear embeddings are described in @cite_10 @cite_2 @cite_11 .
{ "cite_N": [ "@cite_2", "@cite_11", "@cite_10", "@cite_17" ], "mid": [ "2060783161", "1822613633", "2140589466", "2137825973" ], "abstract": [ "We propose a new method for linear dimensionality reduction of manifold-modeled data. Given a training set X of Q points belonging to a manifold M ⊂ ℝ^N, we construct a linear operator P : ℝ^N → ℝ^M that approximately preserves the norms of all (Q choose 2) pairwise difference vectors (or secants) of X. We design the matrix P via a trace-norm minimization that can be efficiently solved as a semi-definite program (SDP). When X comprises a sufficiently dense sampling of M, we prove that the optimal matrix P preserves all pairs of secants over M. We numerically demonstrate the considerable gains using our SDP-based approach over existing linear dimensionality reduction methods, such as principal components analysis (PCA) and random projections.", "We propose a dimensionality reducing matrix design based on training data with constraints on its Frobenius norm and number of rows. Our design criteria is aimed at preserving the distances between the data points in the dimensionality reduced space as much as possible relative to their distances in original data space. This approach can be considered as a deterministic Bi-Lipschitz embedding of the data points. We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game theoretic tools. We also provide a generalization characterization of our matrix based on our sample data. We use compressive sensing problems as an example application of our problem, where the Frobenius norm design constraint translates into the sensing energy.", "We propose algorithms for constructing linear embeddings of a finite dataset V ⊂ ℝ^d into a k-dimensional subspace with provable, nearly optimal distortions. 
First, we propose an exhaustive-search-based algorithm that yields a k-dimensional linear embedding with distortion at most eopt(k)+δ, for any δ > 0, where eopt(k) is the smallest achievable distortion over all possible orthonormal embeddings. This algorithm is space-efficient and can be achieved by a single pass over the data V. However, the runtime of this algorithm is exponential in k. Second, we propose a convex-programming-based algorithm that yields an O(k/δ)-dimensional orthonormal embedding with distortion at most (1 + δ)eopt(k). The runtime of this algorithm is polynomial in d and independent of k. Several experiments demonstrate the benefits of our approach over conventional linear embedding techniques, such as principal components analysis (PCA) or random projections.", "We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points @math , we consider the secant set @math that consists of all pairwise difference vectors of @math , normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix @math that preserves the norms of all the vectors in @math up to a distortion parameter @math . While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, in order to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM) that we call Nuclear norm minimization with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. 
We demonstrate that our framework is useful for a number of signal processing applications via a range of experiments on large-scale synthetic and real datasets." ] }
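The secant-set construction that both abstracts above rely on is easy to sketch. The following toy illustration (random data, hypothetical function names; it uses a PCA-style projection as a stand-in, not the FroMax, NILE-Pro, or NuMax solvers) builds the normalized secant set of a small training set and measures the distortion δ achieved by a linear embedding:

```python
import numpy as np

def secant_set(X):
    """All pairwise difference vectors of the rows of X, normalized to the unit sphere."""
    diffs = X[:, None, :] - X[None, :, :]              # (Q, Q, d) difference tensor
    i, j = np.triu_indices(len(X), k=1)                # one secant per unordered pair
    S = diffs[i, j]
    return S / np.linalg.norm(S, axis=1, keepdims=True)

def max_distortion(P, S):
    """Smallest delta with | ||P v||^2 - 1 | <= delta over all secants v in S."""
    return np.abs(np.linalg.norm(S @ P.T, axis=1) ** 2 - 1).max()

# Toy data and a PCA-style baseline: project onto the top-k right singular
# vectors of the secant matrix. The solvers in the papers above search for a
# better P at the same embedding dimension k.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))                          # Q = 20 points in d = 10
S = secant_set(X)                                      # 190 unit-norm secants
_, _, Vt = np.linalg.svd(S, full_matrices=False)
P = Vt[:8]                                             # embed into k = 8 dimensions
print(max_distortion(P, S))
```

With orthonormal projection rows, ||Pv||² ≤ ||v||² = 1, so the reported distortion always lies in [0, 1]; the optimization-based methods trade embedding dimension k against this δ.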
1601.00400
1907729166
This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.
An attribute is a visual property that appears or disappears in an image. If this property can be expressed in human language, we call it a semantic property. Different properties may describe different image features such as colors, patterns, and shapes @cite_75 . Some recent studies concentrate on how to link human-interaction applications through these mid-level attributes, where a consistent alignment is needed between human query expressions and the computer's interpretation of query attribute phrases.
{ "cite_N": [ "@cite_75" ], "mid": [ "2098411764" ], "abstract": [ "We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework." ] }
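The abstract two entries above decomposes the overall model's parameters into a latent task matrix and a combination matrix. That factorization step, in isolation, can be sketched with plain alternating least squares; this is a hedged toy (random data, hypothetical names, no sparsity or grouping regularizers), not the paper's training procedure:

```python
import numpy as np

def decompose_tasks(W, k, iters=50, seed=0):
    """Factor a (d x T) parameter matrix W into a latent task matrix L (d x k)
    and a combination matrix C (k x T) by alternating least squares."""
    rng = np.random.default_rng(seed)
    d, T = W.shape
    L = rng.normal(size=(d, k))
    for _ in range(iters):
        C, *_ = np.linalg.lstsq(L, W, rcond=None)       # fix L, solve for C
        Lt, *_ = np.linalg.lstsq(C.T, W.T, rcond=None)  # fix C, solve for L
        L = Lt.T
    return L, C

# Toy check: a rank-3 parameter matrix (16 features, 8 tasks) is recovered
# almost exactly with k = 3 latent tasks.
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 3)) @ rng.normal(size=(3, 8))
L, C = decompose_tasks(W, k=3)
print(np.linalg.norm(W - L @ C))  # near zero
```

In the multi-task setting, sharing happens because every task's classifier is a combination of the same few latent columns of L; the paper additionally encourages related attributes to reuse the same latent tasks.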
1601.00400
1907729166
This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.
Image multi-labeling is simply learning to assign multiple labels to an image @cite_12 @cite_48 . If the problem is tackled as is, a challenge arises as the number of labels grows and the set of potential output label combinations becomes intractable @cite_12 . To mitigate this, a common transformation is to split the problem into a set of single binary classifiers @cite_33 . Predicting co-occurring attributes can be seen as multi-label learning. On the other hand, most of the related works @cite_44 @cite_9 apply multi-task learning to allow sharing, or use label-relationship heuristics known a priori @cite_63 . Another work applies ranking functions with a deep CNN to rank label scores @cite_55 .
{ "cite_N": [ "@cite_33", "@cite_48", "@cite_9", "@cite_55", "@cite_44", "@cite_63", "@cite_12" ], "mid": [ "1999954155", "", "2050818842", "1514027499", "2040171755", "64813323", "2146241755" ], "abstract": [ "The widely known binary relevance method for multi-label classification, which considers each label as an independent binary problem, has often been overlooked in the literature due to the perceived inadequacy of not directly modelling label correlations. Most current methods invest considerable complexity to model interdependencies between labels. This paper shows that binary relevance-based methods have much to offer, and that high predictive performance can be obtained without impeding scalability to large datasets. We exemplify this with a novel classifier chains method that can model label correlations while maintaining acceptable computational complexity. We extend this approach further in an ensemble framework. An extensive empirical evaluation covers a broad range of multi-label datasets with a variety of evaluation metrics. The results illustrate the competitiveness of the chaining method against related and state-of-the-art methods, both in terms of predictive performance and time complexity.", "", "The notion of relative attributes as introduced by Parikh and Grauman (ICCV, 2011) provides an appealing way of comparing two images based on their visual properties (or attributes) such as \"smiling\" for face images, \"naturalness\" for outdoor images, etc. For learning such attributes, a Ranking SVM based formulation was proposed that uses globally represented pairs of annotated images. In this paper, we extend this idea towards learning relative attributes using local parts that are shared across categories. First, instead of using a global representation, we introduce a part-based representation combining a pair of images that specifically compares corresponding parts. 
Then, with each part we associate a locally adaptive "significance-coefficient" that represents its discriminative ability with respect to a particular attribute. For each attribute, the significance-coefficients are learned simultaneously with a max-margin ranking model in an iterative manner. Compared to the baseline method, the new method is shown to achieve significant improvement in relative attribute prediction accuracy. Additionally, it is also shown to improve relative feedback based interactive image search.", "Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually use conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as they naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperforms the conventional visual features by about 10%, obtaining the best reported performance in the literature.", "Existing methods to learn visual attributes are prone to learning the wrong thing -- namely, properties that are correlated with the attribute of interest among training samples. Yet, many proposed applications of attributes rely on being able to learn the correct semantic concept corresponding to each attribute. We propose to resolve such confusions by jointly learning decorrelated, discriminative attribute models. Leveraging side information about semantic relatedness, we develop a multi-task learning approach that uses structured sparsity to encourage feature competition among unrelated attributes and feature sharing among related attributes. 
On three challenging datasets, we show that accounting for structure in the visual attribute space is key to learning attribute models that preserve semantics, yielding improved generalizability that helps in the recognition and discovery of unseen object categories.", "In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations.", "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and performs comparative experimental results of certain multilabel classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set." ] }
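The binary-relevance transformation described in the related-work paragraph above (one independent binary classifier per label) is straightforward to sketch. The least-squares scorers and toy data below are illustrative assumptions, not the classifier-chains method of @cite_33:

```python
import numpy as np

def fit_binary_relevance(X, Y):
    """Fit one independent linear scorer per label column of Y (binary relevance)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)       # (d+1) x n_labels weights
    return W

def predict_binary_relevance(W, X, threshold=0.5):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W >= threshold).astype(int)

# Toy data: three labels, each a thresholded linear function of the features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = (X @ rng.normal(size=(5, 3)) >= 0).astype(int)
W = fit_binary_relevance(X, Y)
acc = (predict_binary_relevance(W, X) == Y).mean()
print(acc)
```

Classifier chains, the method of @cite_33, extend this scheme by appending earlier labels' predictions to the feature vector of later classifiers, recovering some label correlations while keeping per-label training.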
1601.00372
2222235228
Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., @math , an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German-English and French-English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.
Sequence-to-sequence models map source sequences to vector space representations, from which a target sequence is then generated. They yield good performance in a variety of NLP generation tasks, including conversational response generation @cite_13 @cite_9 @cite_28 and parsing @cite_25 @cite_6 .
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_6", "@cite_13", "@cite_25" ], "mid": [ "1958706068", "889023230", "", "1591706642", "1869752048" ], "abstract": [ "Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., \"I don't know\") regardless of the input. We suggest that the traditional objective function, i.e., the likelihood of output (response) given input (message) is unsuited to response generation tasks. Instead we propose using Maximum Mutual Information (MMI) as the objective function in neural models. Experimental results demonstrate that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.", "We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.", "", "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. 
Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge both from a domain-specific dataset and from a large, noisy, general-domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.", "Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation." ] }
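The re-ranking implementation of the MMI objective described in this row's abstract can be sketched as follows: each candidate in an N-best list is re-scored as log p(T|S) − λ·log p(T), so candidates that are merely high-frequency under the language model are demoted. The candidates and probabilities below are made up for illustration:

```python
import math

def mmi_rerank(candidates, lam=0.5):
    """Re-rank an N-best list by log p(T|S) - lam * log p(T).

    candidates: list of (text, logp_t_given_s, logp_t) tuples.
    Penalizing log p(T) demotes generic, high-frequency targets.
    """
    return sorted(candidates,
                  key=lambda c: c[1] - lam * c[2],
                  reverse=True)

# Hypothetical N-best list: the generic candidate has the best forward score
# but also a very high unconditional LM probability.
nbest = [
    ("i don't know",        math.log(0.40), math.log(0.30)),
    ("the talks stalled",   math.log(0.35), math.log(0.02)),
    ("negotiations failed", math.log(0.25), math.log(0.01)),
]
print(mmi_rerank(nbest)[0][0])  # a specific candidate wins after re-ranking
```

With λ = 0 the ranking reduces to the standard likelihood objective and the generic candidate wins; λ controls how strongly unconditional frequency is penalized.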
1601.00372
2222235228
Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., @math , an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German-English and French-English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.
A neural machine translation system uses distributed representations to model the conditional probability of targets given sources with two components, an encoder and a decoder. Kalchbrenner and Blunsom used a model akin to convolutional networks for encoding and standard hidden-unit recurrent nets for decoding; similar convolutional networks are used in @cite_19 for encoding. Sutskever et al. employed a stacking LSTM model for both encoding and decoding, and Bahdanau et al. adopted bi-directional recurrent nets for the encoder.
{ "cite_N": [ "@cite_19" ], "mid": [ "2132043663" ], "abstract": [ "The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.08 BLEU points on average" ] }
1601.00372
2222235228
Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., @math , an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German-English and French-English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.
Maximum Mutual Information (MMI) was introduced in speech recognition @cite_17 as a way of measuring the mutual dependence between inputs (acoustic feature vectors) and outputs (words) and improving discriminative training @cite_10 . Li et al. show that MMI could solve an important problem in conversational response generation: prior models tended to generate highly generic, dull responses (e.g., "I don't know") regardless of the input @cite_31 @cite_13 @cite_26 . They show that modeling the mutual dependency between messages and responses promotes the diversity of response outputs.
{ "cite_N": [ "@cite_26", "@cite_10", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "2220374841", "2003123121", "2951580200", "1591706642", "1877570817" ], "abstract": [ "During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a wide survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets and how they can be used to learn diverse dialogue strategies. We also describe other potential uses of these datasets, such as methods for transfer learning between datasets and the use of external knowledge, and discuss appropriate choice of evaluation metrics for the learning objective.", "This paper describes, and evaluates on a large scale, the lattice based framework for discriminative training of large vocabulary speech recognition systems based on Gaussian mixture hidden Markov models (HMMs). This paper concentrates on the maximum mutual information estimation (MMIE) criterion which has been used to train HMM systems for conversational telephone speech transcription using up to 265 hours of training data. These experiments represent the largest-scale application of discriminative training techniques for speech recognition of which the authors are aware. Details are given of the MMIE lattice-based implementation used with the extended Baum-Welch algorithm, which makes training of such large systems computationally feasible. Techniques for improving generalization using acoustic scaling and weakened language models are discussed. 
The overall technique has allowed the estimation of triphone and quinphone HMM parameters which has led to significant reductions in word error rate for the transcription of conversational telephone speech relative to our best systems trained using maximum likelihood estimation (MLE). This is in contrast to some previous studies, which have concluded that there is little benefit in using discriminative training for the most difficult large vocabulary speech recognition tasks. The lattice MMIE-based discriminative training scheme is also shown to out-perform the frame discrimination technique. Various properties of the lattice-based MMIE training scheme are investigated including comparisons of different lattice processing strategies (full search and exact-match) and the effect of lattice size on performance. Furthermore a scheme based on the linear interpolation of the MMIE and MLE objective functions is shown to reduce the danger of over-training. It is shown that HMMs trained with MMIE benefit as much as MLE-trained HMMs from applying model adaptation using maximum likelihood linear regression (MLLR). This has allowed the straightforward integration of MMIE-trained HMMs into complex multi-pass systems for transcription of conversational telephone speech and has contributed to our MMIE-trained systems giving the lowest word error rates in both the 2000 and 2001 NIST Hub5 evaluations.", "We present a novel response generation system that can be trained end to end on large quantities of unstructured Twitter conversations. A neural network architecture is used to address sparsity issues that arise when integrating contextual information into classic statistical models, allowing the system to take into account previous dialog utterances. 
Our dynamic-context generative models show consistent gains over both context-sensitive and non-context-sensitive Machine Translation and Information Retrieval baselines.", "Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge both from a domain-specific dataset and from a large, noisy, general-domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.", "A method for estimating the parameters of hidden Markov models of speech is described. Parameter values are chosen to maximize the mutual information between an acoustic observation sequence and the corresponding word sequence. Recognition results are presented comparing this method with maximum likelihood estimation." ] }
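The quantity that MMI training maximizes, the mutual dependence between inputs and outputs discussed above, reduces for discrete variables to the familiar I(X;Y) = Σ p(x,y) log [p(x,y) / (p(x)p(y))]. A minimal sketch computing it from a joint count table (toy numbers, not acoustic data):

```python
import math

def mutual_information(joint_counts):
    """I(X;Y) in nats from a table of joint counts: joint_counts[i][j] = count(x_i, y_j)."""
    total = sum(sum(row) for row in joint_counts)
    px = [sum(row) / total for row in joint_counts]
    py = [sum(col) / total for col in zip(*joint_counts)]
    mi = 0.0
    for i, row in enumerate(joint_counts):
        for j, c in enumerate(row):
            if c:
                pxy = c / total
                mi += pxy * math.log(pxy / (px[i] * py[j]))
    return mi

# Perfectly dependent binary variables: I(X;Y) = H(X) = log 2 nats.
print(mutual_information([[5, 0], [0, 5]]))  # ≈ 0.693
# Independent variables: I(X;Y) = 0.
print(mutual_information([[1, 1], [1, 1]]))  # 0.0
```

Discriminative MMI training of acoustic models maximizes an estimate of this dependence between observation sequences and word sequences rather than the likelihood of the observations alone.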
1601.00372
2222235228
Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., @math , an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German-English and French-English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.
Our goal, distinct from these previous uses of MMI, is to see whether the mutual information objective improves translation by bidirectionally modeling source-target dependencies. In that sense, our work is designed to incorporate into models features that have proved useful in phrase-based MT, like the reverse translation probability or sentence length @cite_0 @cite_8 @cite_5 .
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_8" ], "mid": [ "2154124206", "2251682575", "2060127787" ], "abstract": [ "We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language sentence, the target language sentence and possible hidden variables. This approach allows a baseline machine translation system to be extended easily by adding new feature functions. We show that a baseline statistical machine translation system is significantly improved using this approach.", "Recent work has shown success in using neural network language models (NNLMs) as features in MT systems. Here, we present a novel formulation for a neural network joint model (NNJM), which augments the NNLM with a source context window. Our model is purely lexicalized and can be integrated into any MT decoder. We also present several variations of the NNJM which provide significant additive improvements.", "We propose a novel string-to-dependency algorithm for statistical machine translation. This algorithm employs a target dependency language model during decoding to exploit long distance word relations, which cannot be modeled with a traditional n-gram language model. Experiments show that the algorithm achieves significant improvement in MT performance over a state-of-the-art hierarchical string-to-string system on NIST MT06 and MT08 newswire evaluation sets." ] }
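The mutual-information objective in the record above is implemented as N-best re-ranking: a hypothesis is scored by a weighted combination of the forward score log p(y|x) and the backward score log p(x|y). A minimal sketch of that re-ranking step (the hypotheses, log-probabilities, and the interpolation weight `lam` are illustrative assumptions, not values from the paper):

```python
# Re-rank an N-best list by a mutual-information-style objective:
# score(y) = (1 - lam) * log p(y|x) + lam * log p(x|y).
# The log-probabilities below are hypothetical stand-ins for real model scores.

def mmi_rerank(nbest, lam=0.5):
    """nbest: list of (hypothesis, logp_forward, logp_backward) triples."""
    scored = [(h, (1 - lam) * f + lam * b) for h, f, b in nbest]
    return sorted(scored, key=lambda t: t[1], reverse=True)

nbest = [
    ("hyp_a", -1.0, -9.0),  # likely under p(y|x) but poor under p(x|y)
    ("hyp_b", -2.0, -2.5),  # balanced under both directions
    ("hyp_c", -4.0, -1.0),
]
best, _ = mmi_rerank(nbest, lam=0.5)[0]
print(best)  # hyp_b wins once the backward score is taken into account
```

Setting `lam=0` recovers the standard forward-only ranking, which here prefers the hypothesis that the backward model considers implausible.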
1601.00372
2222235228
Sequence-to-sequence neural translation models learn semantic and syntactic relations between sentence pairs by optimizing the likelihood of the target given the source, i.e., @math , an objective that ignores other potentially useful sources of information. We introduce an alternative objective function for neural MT that maximizes the mutual information between the source and target sentences, modeling the bi-directional dependency of sources and targets. We implement the model with a simple re-ranking method, and also introduce a decoding algorithm that increases diversity in the N-best list produced by the first pass. Applied to the WMT German-English and French-English tasks, the proposed models offer a consistent performance boost on both standard LSTM and attention-based neural MT architectures.
Various algorithms have been proposed for generating diverse translations in phrase-based MT, including compact representations like lattices and hypergraphs @cite_24 @cite_15 @cite_11 , "traits" like translation length @cite_16 , bagging/boosting @cite_3 , or multiple systems @cite_20 . One line of work produces diverse N-best lists by adding a dissimilarity function based on N-gram overlaps, distancing the current translation from already-generated ones by choosing translations that have higher scores but are distinct from previous ones. While we draw on these intuitions, these existing diversity-promoting algorithms are tailored to phrase-based translation frameworks and are not easily transplanted to neural MT decoding, which requires batched computation.
{ "cite_N": [ "@cite_3", "@cite_24", "@cite_15", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "1972567251", "2105891181", "", "125693536", "2251631319", "2136657878" ], "abstract": [ "In this article we address the issue of generating diversified translation systems from a single Statistical Machine Translation (SMT) engine for system combination. Unlike traditional approaches, we do not resort to multiple structurally different SMT systems, but instead directly learn a strong SMT system from a single translation engine in a principled way. Our approach is based on Bagging and Boosting which are two instances of the general framework of ensemble learning. The basic idea is that we first generate an ensemble of weak translation systems using a base learning algorithm, and then learn a strong translation system from the ensemble. One of the advantages of our approach is that it can work with any of current SMT systems and make them stronger almost ''for free''. Beyond this, most system combination methods are directly applicable to the proposed framework for generating the final translation system from the ensemble of weak systems. We evaluate our approach on Chinese-English translation in three state-of-the-art SMT systems, including a phrase-based system, a hierarchical phrase-based system and a syntax-based system. Experimental results on the NIST MT evaluation corpora show that our approach leads to significant improvements in translation accuracy over the baselines. More interestingly, it is observed that our approach is able to improve the existing system combination systems. 
The biggest improvements are obtained by generating weak systems using Bagging Boosting, and learning the strong system using a state-of-the-art system combination method.", "Minimum Error Rate Training (MERT) is an effective means to estimate the feature function weights of a linear model such that an automated evaluation criterion for measuring system performance can directly be optimized in training. To accomplish this, the training procedure determines for each feature function its exact error surface on a given set of candidate translations. The feature function weights are then adjusted by traversing the error surface combined over all sentences and picking those values for which the resulting error count reaches a minimum. Typically, candidates in MERT are represented as N-best lists which contain the N most probable translation hypotheses produced by a decoder. In this paper, we present a novel algorithm that allows for efficiently constructing and representing the exact error surface of all translations that are encoded in a phrase lattice. Compared to N-best MERT, the number of candidate translations thus taken into account increases by several orders of magnitudes. The proposed method is used to train the feature function weights of a phrase-based statistical machine translation system. Experiments conducted on the NIST 2008 translation tasks show significant runtime improvements and moderate BLEU score gains over N-best MERT.", "", "In the area of machine translation (MT) system combination, previous work on generating input hypotheses has focused on varying a core aspect of the MT system, such as the decoding algorithm or alignment algorithm. In this paper, we propose a new method for generating diverse hypotheses from a single MT system using traits. 
These traits are simple properties of the MT output such as \"average output length\" and \"average rule length.\" Our method is designed to select hypotheses which vary in trait value but do not significantly degrade in BLEU score. These hypotheses can be combined using standard system combination techniques to produce a 1.2-1.5 BLEU gain on the Arabic-English NIST MT06 MT08 translation task.", "We present Positive Diversity Tuning, a new method for tuning machine translation models specifically for improved performance during system combination. System combination gains are often limited by the fact that the translations produced by the different component systems are too similar to each other. We propose a method for reducing excess cross-system similarity by optimizing a joint objective that simultaneously rewards models for producing translations that are similar to reference translations, while also punishing them for translations that are too similar to those produced by other systems. The formulation of the Positive Diversity objective is easy to implement and allows for its quick integration with most machine translation tuning pipelines. We find that individual systems tuned on the same data to Positive Diversity can be even more diverse than systems built using different data sets, while still obtaining good BLEU scores. When these individual systems are used together for system combination, our approach allows for significant gains of 0.8 BLEU even when the combination is performed using a small number of otherwise identical individual systems.", "Abstract : We present Minimum Bayes-Risk (MBR) decoding for statistical machine translation. This statistical approach aims to minimize expected loss of translation errors under loss functions that measure translation performance. 
We describe a hierarchy of loss functions that incorporate different levels of linguistic information from word strings, word-to-word alignments from an MT system, and syntactic structure from parse-trees of source and target language sentences. We report the performance of the MBR decoders on a Chinese-to-English translation task. Our results show that MBR decoding can be used to tune statistical MT performance for specific loss functions." ] }
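The diversity-promoting decoders surveyed above share one idea: penalise a candidate's model score by its n-gram overlap with outputs already selected. A hedged sketch of that greedy selection loop (sentences, scores, and the penalty weight are invented for illustration):

```python
# Greedily pick a diverse subset of an N-best list: each candidate's model
# score is penalised by its unigram overlap with already-selected outputs.
# Scores and the penalty weight are illustrative assumptions.

def ngram_overlap(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(1, len(wa | wb))  # Jaccard overlap of unigrams

def select_diverse(candidates, k=2, penalty=1.0):
    """candidates: list of (sentence, score); returns k diverse sentences."""
    chosen = []
    pool = list(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda c: c[1] - penalty * max(
            [ngram_overlap(c[0], s) for s in chosen] or [0.0]))
        chosen.append(best[0])
        pool.remove(best)
    return chosen

cands = [("the cat sat", 0.9), ("the cat sat down", 0.85), ("a dog ran", 0.5)]
print(select_diverse(cands, k=2))
```

With the penalty active, the near-duplicate second hypothesis is skipped in favour of a lower-scoring but distinct one; with `penalty=0.0` the plain top-k list is recovered.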
1601.00182
2952825891
Modern Internet applications often produce a large volume of user activity records. Data analysts are interested in cohort analysis, or finding unusual user behavioral trends, in these large tables of activity records. In a traditional database system, cohort analysis queries are both painful to specify and expensive to evaluate. We propose to extend database systems to support cohort analysis. We do so by extending SQL with three new operators. We devise three different evaluation schemes for cohort query processing. Two of them adopt a non-intrusive approach. The third approach employs a columnar based evaluation scheme with optimizations specifically designed for cohort query processing. Our experimental results confirm the performance benefits of our proposed columnar database system, compared against the two non-intrusive approaches that implement cohort queries on top of regular relational databases.
Work related to ours concerns database support for data analysis and for cohort analysis. The requirement to support data analysis inside a database system has a long history. An early effort is the SQL GROUP BY operator and aggregate functions. These ideas were generalized with the CUBE operator @cite_19 . Traditional row-oriented databases are inefficient for CUBE-style OLAP analysis; hence, columnar databases were proposed to address this efficiency issue @cite_12 @cite_14 @cite_17 . Techniques such as data compression @cite_7 @cite_2 , query processing on compressed data @cite_18 @cite_10 @cite_6 , array-based aggregation @cite_5 @cite_0 , and materialized-view-based approaches @cite_11 have been proposed for speeding up OLAP queries. Although these techniques target OLAP queries, which are defined over relational operators generally not applicable to cohort queries, they can also be used to accelerate cohort query processing, as we have shown in .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_11", "@cite_7", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2123686039", "2104003087", "1497285001", "1993819379", "2055774867", "2044240774", "262463062", "", "2150950420", "1561200998", "2124851765", "" ], "abstract": [ "Column-oriented database system architectures invite a re-evaluation of how and when data in databases is compressed. Storing data in a column-oriented fashion greatly increases the similarity of adjacent records on disk and thus opportunities for compression. The ability to compress many adjacent tuples at once lowers the per-tuple cost of compression, both in terms of CPU and space overheads.In this paper, we discuss how we extended C-Store (a column-oriented DBMS) with a compression sub-system. We show how compression schemes not traditionally used in row-oriented DBMSs can be applied to column-oriented systems. We then evaluate a set of compression schemes and show that the best scheme depends not only on the properties of the data but also on the nature of the query workload.", "In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. 
Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events such as TLB misses and L1 and L2 cache misses by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms makes them perform well, which is confirmed by experimental results.", "With the advent of the Internet, access to database servers from autonomous clients will become more and more popular. In this paper, we propose a monitoring service that could be offered by such database servers, and present algorithms for its implementation. In contrast to published view maintenance algorithms, we do not assume that the server has access to the original materialization when computing differential view changes to be notified. We also do not assume any database capabilities on the client side and therefore compute precisely the required differentials rather than just an approximation, as is done by cache coherence techniques in homogeneous clientserver databases. The method has been implemented in ConceptBase, a meta data management system supporting an Internet-based client-server architecture, and tried out in some cooperative design applications.", "In this paper, we show how compression can be integrated into a relational database system. Specifically, we describe how the storage manager, the query execution engine, and the query optimizer of a database system can be extended to deal with compressed data. Our main result is that compression can significantly improve the response time of queries if very light-weight compression techniques are used. We will present such light-weight compression techniques and give the results of running the TPC-D benchmark on a so compressed database and a non-compressed database using the AODB database system, an experimental database system that was developed at the Universities of Mannheim and Passau. 
Our benchmark results demonstrate that compression indeed offers high performance gains (up to 50%) for IO-intensive queries and moderate gains for CPU-intensive queries. Compression can, however, also increase the running time of certain update operations. In all, we recommend to extend today's database systems with light-weight compression techniques and to make extensive use of this feature.", "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some cases. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.
Recently, [GBLP95] proposed the “Cube” operator, which computes group-by aggregations over all possible subsets of the specified dimensions. The rapid acceptance of the importance of this operator has led to a variant of the Cube being proposed for the SQL standard. Several efficient algorithms for Relational OLAP (ROLAP) have been developed to compute the Cube. However, to our knowledge there is nothing in the literature on how to compute the Cube for Multidimensional OLAP (MOLAP) systems, which store their data in sparse arrays rather than in tables. In this paper, we present a MOLAP algorithm to compute the Cube, and compare it to a leading ROLAP algorithm. The comparison between the two is interesting, since although they are computing the same function, one is value-based (the ROLAP algorithm) whereas the other is position-based (the MOLAP algorithm). Our tests show that, given appropriate compression techniques, the MOLAP algorithm is significantly faster than the ROLAP algorithm. In fact, the difference is so pronounced that this MOLAP algorithm may be useful for ROLAP systems as well as MOLAP systems, since in many cases, instead of cubing a table directly, it is faster to first convert the table to an array, cube the array, then convert the result back to a table.", "", "", "At the heart of all OLAP or multidimensional data analysis applications is the ability to simultaneously aggregate across many sets of dimensions. Computing multidimensional aggregates is a performance bottleneck for these applications. This paper presents fast algorithms for computing a collection of group bys. We focus on a special case of the aggregation problem - computation of the CUBE operator. The CUBE operator requires computing group-bys on all possible combinations of a list of attributes, and is equivalent to the union of a number of standard group-by operations. We show how the structure of CUBE computation can be viewed in terms of a hierarchy of group-by operations. 
Our algorithms extend sort-based and hash-based grouping methods with several optimizations, like combining common operations across multiple group-bys, caching, and using pre-computed group-bys for computing other group-bys. Empirical evaluation shows that the resulting algorithms give much better performance compared to straightforward methods.", "Optimizing Queries On Compressed Bitmaps. Sihem Amer-Yahia (AT&T Labs Research, sihem@research.att.com), Theodore Johnson (johnsont@research.att.com). Abstract: Bitmap indices are used by DBMSs to accelerate decision support queries. A significant advantage of bitmap indices is that complex logical selection operations can be performed very quickly, by performing bit-wise AND, OR, and NOT operators. Although bitmap indices can be space inefficient for high-cardinality attributes, the space use of compressed bitmaps compares well to other indexing methods. Oracle and Sybase IQ are two commercial products that make extensive use of compressed bitmap indices. Our recent research showed that there are several fast algorithms for evaluating Boolean operators on compressed bitmaps. Depending on the nature of the operand bitmaps (their format, density and clusteredness) and the operation to be performed (AND, NOT, ...), these algorithms can have execution times that are orders of magnitude different. Choosing an algorithm for performing a Boolean operation has global effects in the Boolean query expression, requiring global optimization. We present a linear-time dynamic programming search strategy based on a cost model to optimize query expression evaluation plans. We also present rewriting heuristics that rewrite the query expression to an equivalent one to encourage better algorithm assignments. Our performance results show that the optimizer requires a negligible amount of time to execute, and that optimized complex queries can execute up to three times faster than unoptimized queries on real data.
Introduction: A bitmap index is a bit string in which each bit is mapped to a record ID (RID) of a relation. A bit in the bitmap index is set (to 1) if the corresponding RID has property P (i.e., the RID represents a customer that lives in New York), and is reset (to 0) otherwise. In typical usage, the predicate P is true for a record if it has the value a for attribute A. One such predicate is associated with one bitmap index for each unique value of the attribute A. The predicates can be more complex, for example bitslice indices [OQ97] and precomputed complex selection predicates [HEP99]. One advantage of bitmap indices is that complex selection predicates can be computed very quickly, by performing bit-wise AND, OR, and NOT operations on the bitmap indices. Furthermore, the indexable selection predicates can involve many attributes. Consider a customer database with schema Customer(Name, Livesin, Worksin, Car, Numberofchildren, Hascable, Hascellular). Suppose that we want to select all customers who live in New England: the selection condition is a disjunction of Livesin equality predicates, one per New England state. Since a bitmap index is created for each value of the attribute Livesin, the query translates into mapping the attribute to all its possible values.", "Database systems tend to achieve only low IPC (instructions-per-cycle) efficiency on modern CPUs in compute-intensive application areas like decision support, OLAP and multimedia retrieval. This paper starts with an in-depth investigation of the reason why this happens, focusing on the TPC-H benchmark. Our analysis of various relational systems and MonetDB leads us to a new set of guidelines for designing a query processor. The second part of the paper describes the architecture of our new X100 query engine for the MonetDB system that follows these guidelines. On the surface, it resembles a classical Volcano-style engine, but the crucial difference to base all execution on the concept of vector processing makes it highly CPU efficient. 
We evaluate the power of MonetDB X100 on the 100GB version of TPC-H, showing its raw execution power to be between one and two orders of magnitude higher than previous technology.", "" ] }
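The cohort queries discussed in the record above differ from a plain GROUP BY in that records are grouped by a "birth" attribute (the user's first activity) and the metric is aggregated by age, i.e. time elapsed since birth. A small illustration in plain Python (the activity records, time unit, and metric are hypothetical):

```python
# What a cohort query computes, in plain Python: users are grouped into
# cohorts by the week of their first activity ("birth"), then a metric is
# aggregated by age, i.e. weeks elapsed since birth. Records are hypothetical.
from collections import defaultdict

activity = [  # (user, week, spent)
    ("u1", 0, 5), ("u1", 1, 3), ("u2", 0, 2), ("u2", 2, 4), ("u3", 1, 7),
]

# Birth week = first week each user appears.
birth = {}
for user, week, _ in sorted(activity, key=lambda r: r[1]):
    birth.setdefault(user, week)

# Aggregate the metric by (cohort, age) instead of by a stored attribute.
cohort_metric = defaultdict(float)  # (birth_week, age) -> total spent
for user, week, spent in activity:
    cohort_metric[(birth[user], week - birth[user])] += spent

print(dict(cohort_metric))
```

The `(birth_week, age)` key is exactly what makes this awkward in vanilla SQL: the grouping attribute is derived from a per-user aggregate rather than stored in the row.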
1512.09204
2221926851
We consider effort allocation in crowdsourcing, where we wish to assign labeling tasks to imperfect homogeneous crowd workers to maximize overall accuracy in a continuous-time Bayesian setting, subject to budget and time constraints. The Bayes-optimal policy for this problem is the solution to a partially observable Markov decision process, but the curse of dimensionality renders the computation infeasible. Based on the Lagrangian Relaxation technique in Adelman & Mersereau (2008), we provide a computationally tractable instance-specific upper bound on the value of this Bayes-optimal policy, which can in turn be used to bound the optimality gap of any other sub-optimal policy. In an approach similar in spirit to the Whittle index for restless multi-armed bandits, we provide an index policy for effort allocation in crowdsourcing and demonstrate numerically that it outperforms other state-of-the-art policies and performs close to the optimal solution.
The second strand resides in the literature on multi-armed bandits (MAB) and stochastic dynamic programming. The formulation of a Bayes-optimal procedure as a dynamic program is considered in @cite_10 @cite_2 . Our use of Lagrangian relaxation is an application of the relaxation method for weakly coupled dynamic programs discussed in . The setting in this paper differs from previous works in that only one task is assigned when a worker arrives and task completion is not instantaneous. The index-based policy proposed in this paper, which uses Lagrange multipliers to assign indices, draws inspiration from @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_2" ], "mid": [ "2056921512", "2065087844", "" ], "abstract": [ "We consider a population of n projects which in general continue to evolve whether in operation or not (although by different rules). It is desired to choose the projects in operation at each instant of time so as to maximise the expected rate of reward, under a constraint upon the expected number of projects in operation. The Lagrange multiplier associated with this constraint defines an index which reduces to the Gittins index when projects not being operated are static. If one is constrained to operate m projects exactly then arguments are advanced to support the conjecture that, for m and n large in constant ratio, the policy of operating the m projects of largest current index is nearly optimal. The index is evaluated for some particular projects.", "A partially observed Markov decision process (POMDP) is a generalization of a Markov decision process that allows for incomplete information regarding the state of the system. The significant applied potential for such processes remains largely unrealized, due to an historical lack of tractable solution methodologies. This paper reviews some of the current algorithmic alternatives for solving discrete-time, finite POMDPs over both finite and infinite horizons. The major impediment to exact solution is that, even with a finite set of internal system states, the set of possible information states is uncountably infinite. Finite algorithms are theoretically available for exact solution of the finite horizon problem, but these are computationally intractable for even modest-sized problems. Several approximation methodologies are reviewed that have the potential to generate computationally feasible, high precision solutions.", "" ] }
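The index policy described in the record above can be illustrated with a toy Bayesian labeling model: each task keeps a Beta posterior over its label, and the index trades the expected one-step drop in posterior uncertainty against a Lagrange-multiplier cost per label. Everything below (the uncertainty measure, the one-step lookahead, the multiplier value) is an illustrative simplification, not the paper's actual policy:

```python
# A minimal index-policy sketch: each labeling task keeps a Beta posterior
# over its binary label; an arriving worker is assigned to the task with the
# highest index = expected reduction in posterior uncertainty minus a
# Lagrange multiplier pricing the labeling budget. Numbers are illustrative.

def uncertainty(a, b):
    p = a / (a + b)
    return min(p, 1 - p)  # Bayes error of a majority-vote decision

def index(a, b, lam=0.05):
    # One-step lookahead: the next label is drawn from the posterior
    # predictive (an idealised, noiseless worker); lam prices the label.
    p = a / (a + b)
    expected_after = p * uncertainty(a + 1, b) + (1 - p) * uncertainty(a, b + 1)
    return (uncertainty(a, b) - expected_after) - lam

def assign(tasks, lam=0.05):
    """tasks: dict name -> (a, b) Beta counts; returns the task to label next."""
    return max(tasks, key=lambda t: index(*tasks[t], lam=lam))

tasks = {"easy": (9, 1), "hard": (3, 3)}
print(assign(tasks))  # the ambiguous task has the larger marginal value
```

As one would hope, the near-decided task has an index below the cost threshold while the ambiguous task stays worth labeling, which is the qualitative behaviour an index policy is meant to capture.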
1512.09254
2196710378
The paper presents an application of non-linear stacking ensembles for prediction of Go player attributes. An evolutionary algorithm is used to form a diverse ensemble of base learners, which are then aggregated by a stacking ensemble. This methodology allows for an efficient prediction of different attributes of Go players from sets of their games. These attributes can be fairly general; in this work, we used the strength and style of the players.
Another approach to combining different models is boosting @cite_13 , where a (presumably weak) model is iteratively trained to specialize on hard instances. Stacking @cite_2 , on the other hand, uses a two-layered approach, where a model on the second level learns to correct the mistakes that first-level learners make. For classification, various ways of forming the features from the first-level predictions have been proposed ( @cite_14 @cite_0 ); multi-response linear regression has been found to work well as the second-level learner. For regression tasks such as ours, simple linear second-level models have been proposed by Breiman @cite_17 . We are not aware of any prior use of non-linear models as second-level predictors like the ones we use in this work.
{ "cite_N": [ "@cite_14", "@cite_0", "@cite_2", "@cite_13", "@cite_17" ], "mid": [ "1645816215", "1523472376", "28412257", "1988790447", "" ], "abstract": [ "Stacked generalization is a general method of using a high-level model to combine lower-level models to achieve greater predictive accuracy. In this paper we address two crucial issues which have been considered to be a 'black art' in classification tasks ever since the introduction of stacked generalization in 1992 by Wolpert: the type of generalizer that is suitable to derive the higher-level model, and the kind of attributes that should be used as its input. We find that best results are obtained when the higher-level model combines the confidence (and not just the predictions) of the lower-level ones. We demonstrate the effectiveness of stacked generalization for combining three different types of learning algorithms for classification tasks. We also compare the performance of stacked generalization with majority vote and published results of arcing and bagging.", "Much of the research in inductive learning concentrates on problems with relatively small amounts of data. With the coming age of ubiquitous network computing, it is likely that orders of magnitude more data in databases will be available for various learning problems of real world importance. Some learning algorithms assume that the entire data set fits into main memory, which is not feasible for massive amounts of data, especially for applications in data mining. One approach to handling a large data set is to partition the data set into subsets, run the learning algorithm on each of the subsets, and combine the results. Moreover, data can be inherently distributed across multiple sites on the network and merging all the data in one location can be expensive or prohibitive. In this thesis we propose, investigate, and evaluate a meta-learning approach to integrating the results of multiple learning processes. 
Our approach utilizes machine learning to guide the integration. We identified two main meta-learning strategies: combiner and arbiter. Both strategies are independent to the learning algorithms used in generating the classifiers. The combiner strategy attempts to reveal relationships among the learned classifiers' prediction patterns. The arbiter strategy tries to determine the correct prediction when the classifiers have different opinions. Various schemes under these two strategies have been developed. Empirical results show that our schemes can obtain accurate classifiers from inaccurate classifiers trained from data subsets. We also implemented and analyzed the schemes in a parallel and distributed environment to demonstrate their scalability.", "This paper introduces stacked generalization, a scheme for minimizing the generalization error rate of one or more generalizers. Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. When used with multiple generalizers, stacked generalization can be seen as a more sophisticated version of cross-validation, exploiting a strategy more sophisticated than cross-validation's crude winner-takes-all for combining the individual generalizers. When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question. After introducing stacked generalization and justifying its use, this paper presents two numerical experiments. 
The first demonstrates how stacked generalization improves upon a set of separate generalizers for the NETtalk task of translating text to phonemes. The second demonstrates how stacked generalization improves the performance of a single surface-fitter. With the other experimental evidence in the literature, the usual arguments supporting cross-validation, and the abstract justifications presented in this paper, the conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate. This paper ends by discussing some of the variations of stacked generalization, and how it touches on other fields like chaos theory.", "In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone?Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in Rn. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.", "" ] }
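A two-level stacking scheme of the kind described above can be sketched in a few lines: base models are fit, their training-set predictions become features, and a second-level linear model combines them. The toy data and base learners below are assumptions for illustration; note that proper stacking would fit the second level on out-of-fold predictions rather than on the training set itself:

```python
# A two-level stacking sketch in plain Python: two base regressors are fit,
# their predictions become features for a second-level linear combiner fit
# by least squares. Data and base models are toy stand-ins.

def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m  # constant baseline predictor

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return lambda x: my + slope * (x - mx)  # ordinary least-squares line

def fit_stack(xs, ys):
    bases = [fit_mean(xs, ys), fit_linear(xs, ys)]
    # Second level: one weight per base model, found by solving the 2x2
    # normal equations (no intercept, for brevity). Real stacking should
    # use out-of-fold base predictions here.
    f = [[b(x) for b in bases] for x in xs]
    a11 = sum(r[0] * r[0] for r in f)
    a12 = sum(r[0] * r[1] for r in f)
    a22 = sum(r[1] * r[1] for r in f)
    b1 = sum(r[0] * y for r, y in zip(f, ys))
    b2 = sum(r[1] * y for r, y in zip(f, ys))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (b2 * a11 - b1 * a12) / det
    return lambda x: w1 * bases[0](x) + w2 * bases[1](x)

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]  # y = 2x + 1
model = fit_stack(xs, ys)
print(model(4.0))
```

On this linear toy data the combiner learns to put all its weight on the linear base model; the paper's contribution, by contrast, is to make this second-level combiner non-linear.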
1512.09272
2219193941
Recent innovations in training deep convolutional neural network (ConvNet) models have motivated the design of new methods to automatically learn local image descriptors. The latest deep ConvNets proposed for this task consist of a siamese network that is trained by penalising misclassification of pairs of local image patches. Current results from machine learning show that replacing this siamese by a triplet network can improve the classification accuracy in several problems, but this has yet to be demonstrated for local image descriptor learning. Moreover, current siamese and triplet networks have been trained with stochastic gradient descent that computes the gradient from individual pairs or triplets of local image patches, which can make them prone to overfitting. In this paper, we first propose the use of triplet networks for the problem of local image descriptor learning. Furthermore, we also propose the use of a global loss that minimises the overall classification error in the training set, which can improve the generalisation capability of the model. Using the UBC benchmark dataset for comparing local image descriptors, we show that the triplet network produces a more accurate embedding than the siamese network in terms of the UBC dataset errors. Moreover, we also demonstrate that a combination of the triplet and global losses produces the best embedding in the field, using this triplet network. Finally, we also show that the use of the central-surround siamese network trained with the global loss produces the best result of the field on the UBC dataset. Pre-trained models are available online at this https URL
Extending ) to a non-linear transformation can be done by re-formulating @math such that it involves inner products, which can then be kernelised @cite_15 , and the optimisation is again solved with a generalised eigenvalue problem @cite_15 . Alternatively, this non-linear transform can be learned with a ConvNet using a siamese network @cite_22 that minimises a pairwise loss @cite_9 (Fig. -(b)) by reducing the distance between patches (in the embedded space) belonging to the same class and increasing the distance between patches from different classes, similarly to the objective function derived from ). Note that this siamese network can produce either an embedding or a pairwise similarity estimation, depending on the architecture and loss function. This siamese network has been extended to a triplet network that uses a triplet loss @cite_4 @cite_5 @cite_20 @cite_24 (Fig. -(d)), which has been shown not only to produce the best classification results in several problems (e.g., STL10 @cite_18 , LineMOD @cite_33 , Labelled Faces in the Wild), but also to produce effective feature embeddings.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_33", "@cite_9", "@cite_24", "@cite_5", "@cite_15", "@cite_20" ], "mid": [ "", "1975517671", "2171590421", "2101199297", "", "1909903157", "1839408879", "2109531142", "2096733369" ], "abstract": [ "", "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models.", "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. 
System performance is illustrated with experiments performed in the laboratory.", "We present a method for real-time 3D object instance detection that does not require a time-consuming training stage, and can handle untextured objects. At its core, our approach is a novel image representation for template matching designed to be robust to small image transformations. This robustness is based on spread image gradient orientations and allows us to test only a small subset of all possible pixel locations when parsing the image, and to represent a 3D object with a limited set of templates. In addition, we demonstrate that if a dense depth sensor is available we can extend our approach for an even better performance also taking 3D surface normal orientations into account. We show how to take advantage of the architecture of modern computers to build an efficient but very discriminant representation of the input images that can be used to consider thousands of templates in real time. We demonstrate in many experiments on real data that our method is much faster and more robust with respect to background clutter than current state-of-the-art methods.", "", "Detecting poorly textured objects and estimating their 3D pose reliably is still a very challenging problem. We introduce a simple but powerful approach to computing descriptors for object views that efficiently capture both the object identity and 3D pose. By contrast with previous manifold-based approaches, we can rely on the Euclidean distance to evaluate the similarity between descriptors, and therefore use scalable Nearest Neighbor search methods to efficiently handle a large number of objects under a large range of poses. To achieve this, we train a Convolutional Neural Network to compute these descriptors by enforcing simple similarity and dissimilarity constraints between the descriptors. 
We show that our constraints nicely untangle the images from different objects and different views into clusters that are not only well-separated but also structured as the corresponding sets of poses: The Euclidean distance between descriptors is large when the descriptors are from different objects, and directly related to the distance between the poses when the descriptors are from the same object. These important properties allow us to outperform state-of-the-art object views representations on challenging RGB and RGB-D data.", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. 
In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors." ] }
1512.09041
2264423349
Actor-action semantic segmentation made an important step toward advanced video understanding problems: what action is happening; who is performing the action; and where is the action in space-time. Current models for this problem are local, based on layered CRFs, and are unable to capture long-ranging interaction of video parts. We propose a new model that combines these local labeling CRFs with a hierarchical supervoxel decomposition. The supervoxels provide cues for possible groupings of nodes, at various scales, in the CRFs to encourage adaptive, high-order groups for more effective labeling. Our model is dynamic and continuously exchanges information during inference: the local CRFs influence what supervoxels in the hierarchy are active, and these active nodes influence the connectivity in the CRF; we hence call it a grouping process model. The experimental results on a recent large-scale video dataset show a large margin of 60 relative improvement over the state of the art, which demonstrates the effectiveness of the dynamic, bidirectional flow between labeling and grouping.
Our paper is closely related to @cite_55 , where the actor-action semantic segmentation problem is first proposed. Their paper demonstrates that joint inference over actors and actions outperforms inference over each independently. They propose a trilayer model that achieves state-of-the-art performance on the actor-action semantic segmentation problem. However, their model only captures the interactions of actors and actions within a local CRF pairwise neighborhood, whereas our method considers their interplay at various levels of granularity in space and time, as introduced by a supervoxel hierarchy.
{ "cite_N": [ "@cite_55" ], "mid": [ "24089286" ], "abstract": [ "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips." ] }
1512.09041
2264423349
Actor-action semantic segmentation made an important step toward advanced video understanding problems: what action is happening; who is performing the action; and where is the action in space-time. Current models for this problem are local, based on layered CRFs, and are unable to capture long-ranging interaction of video parts. We propose a new model that combines these local labeling CRFs with a hierarchical supervoxel decomposition. The supervoxels provide cues for possible groupings of nodes, at various scales, in the CRFs to encourage adaptive, high-order groups for more effective labeling. Our model is dynamic and continuously exchanges information during inference: the local CRFs influence what supervoxels in the hierarchy are active, and these active nodes influence the connectivity in the CRF; we hence call it a grouping process model. The experimental results on a recent large-scale video dataset show a large margin of 60 relative improvement over the state of the art, which demonstrates the effectiveness of the dynamic, bidirectional flow between labeling and grouping.
Supervoxels have demonstrated potential to capture object boundaries, follow object parts over time @cite_49 , and localize objects and actions @cite_53 @cite_6 . Supervoxels are used as higher-order potentials for human action segmentation @cite_2 and video object segmentation @cite_56 . Different from the above works, we use a supervoxel hierarchy to connect bottom-up pixel labeling and top-down recognition, where supervoxels contain clear actor-action semantic meaning. We also use the tree slice concept for selecting supervoxels in a hierarchy as in @cite_4 , but the difference is that our model selects the tree slices in an iterative fashion, where the tree slice also modifies the pixel-level groupings.
{ "cite_N": [ "@cite_4", "@cite_53", "@cite_6", "@cite_56", "@cite_49", "@cite_2" ], "mid": [ "2068649797", "2018068650", "1920142129", "589665618", "2081432165", "1912148408" ], "abstract": [ "Supervoxel hierarchies provide a rich multiscale decomposition of a given video suitable for subsequent processing in video analysis. The hierarchies are typically computed by an unsupervised process that is susceptible to under-segmentation at coarse levels and over-segmentation at fine levels, which make it a challenge to adopt the hierarchies for later use. In this paper, we propose the first method to overcome this limitation and flatten the hierarchy into a single segmentation. Our method, called the uniform entropy slice, seeks a selection of supervoxels that balances the relative level of information in the selected supervoxels based on some post hoc feature criterion such as object-ness. For example, with this criterion, in regions nearby objects, our method prefers finer supervoxels to capture the local details, but in regions away from any objects we prefer coarser supervoxels. We formulate the uniform entropy slice as a binary quadratic program and implement four different feature criteria, both unsupervised and supervised, to drive the flattening. Although we apply it only to supervoxel hierarchies in this paper, our method is generally applicable to segmentation tree hierarchies. Our experiments demonstrate both strong qualitative performance and superior quantitative performance to state of the art baselines on benchmark internet videos.", "This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. 
Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.", "Semantic object segmentation in video is an important step for large-scale multimedia analysis. In many cases, however, semantic objects are only tagged at video-level, making them difficult to be located and segmented. To address this problem, this paper proposes an approach to segment semantic objects in weakly labeled video via object detection. In our approach, a novel video segmentation-by-detection framework is proposed, which first incorporates object and region detectors pre-trained on still images to generate a set of detection and segmentation proposals. Based on the noisy proposals, several object tracks are then initialized by solving a joint binary optimization problem with min-cost flow. As such tracks actually provide rough configurations of semantic objects, we thus refine the object segmentation while preserving the spatiotemporal consistency by inferring the shape likelihoods of pixels from the statistical information of tracks. Experimental results on Youtube-Objects dataset and SegTrack v2 dataset demonstrate that our method outperforms state-of-the-arts and shows impressive results.", "A major challenge in video segmentation is that the foreground object may move quickly in the scene at the same time its appearance and shape evolves over time. 
While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.", "Supervoxel segmentation has strong potential to be incorporated into early video analysis as superpixel segmentation has in image analysis. However, there are many plausible supervoxel methods and little understanding as to when and where each is most appropriate. Indeed, we are not aware of a single comparative study on supervoxel segmentation. To that end, we study five supervoxel algorithms in the context of what we consider to be a good supervoxel: namely, spatiotemporal uniformity, object region boundary detection, region compression and parsimony. For the evaluation we propose a comprehensive suite of 3D volumetric quality metrics to measure these desirable supervoxel characteristics. We use three benchmark video data sets with a variety of content-types and varying amounts of human annotations. 
Our findings have led us to conclusive evidence that the hierarchical graph-based and segmentation by weighted aggregation methods perform best and almost equally-well on nearly all the metrics and are the methods of choice given our proposed assumptions.", "Detailed analysis of human action, such as action classification, detection and localization has received increasing attention from the community; datasets like JHMDB have made it plausible to conduct studies analyzing the impact that such deeper information has on the greater action understanding problem. However, detailed automatic segmentation of human action has comparatively been unexplored. In this paper, we take a step in that direction and propose a hierarchical MRF model to bridge low-level video fragments with high-level human motion and appearance; novel higher-order potentials connect different levels of the supervoxel hierarchy to enforce the consistency of the human segmentation by pulling from different segment-scales. Our single layer model significantly outperforms the current state-of-the-art on actionness, and our full model improves upon the single layer baselines in action segmentation." ] }
1512.08899
2470660316
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to three objective functions: cardinality minimality, coherence, and weighted abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP. Realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, because the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We identify term equivalence as a main instantiation bottleneck, and improve the efficiency of our approach with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers. We evaluate our approach experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark.
Abduction was realized in Markov Logic @cite_51 in the Alchemy system @cite_24 , although without value invention @cite_31 @cite_16 , i.e., existential variables in rule heads are naively instantiated with all ground terms in the program. A corresponding ASP encoding exists for the non-probabilistic case [Schuller2015rcra]; however, it shows prohibitively bad performance.
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_16", "@cite_51" ], "mid": [ "166955936", "", "2206942178", "1977970897" ], "abstract": [ "A cushioning apparatus operable to be mounted within a railway unit center sill assembly. The apparatus includes a cushioning housing having mechanical and hydraulic cushioning cavities and being operable for translation between buff and draft stop members within the sill structure. A mechanical cushioning assembly is positioned within the mechanical cushioning cavity and includes a high capacity of elastomeric cushioning pad. A follower abuts against the elastomeric pad and carries spacer arms which at least partially surround the elastomeric pad. The spacer arms have an axial extent less than the axial dimension of the cushioning unit, whereby the elastomeric pad is operable to accommodate compression between the follower and the cushioning housing until the spacer members go solid between the cushioning housing and the follower. A hydraulic cushioning assembly is positioned within the hydraulic cushioning cavity and includes a coaxial cylinder which divides the cushioning cavity into a high pressure inner chamber and a surrounding low pressure chamber. A piston is mounted for reciprocation within the interior of the high pressure chamber and port and valve means are provided to permit the flow of fluid from the high pressure fluid chamber with a first impedance in response to coupling force induced relative movement of the piston within the chamber and with a second greater impedance in response to run-in train action force induced relative movement of the piston within the chamber. A coupler bar extends within the draft end of the sill and is connected to the cushioning housing to impart buff and draft forces to the cushioning housing during coupling and train action events.", "", "Plan recognition is a form of abductive reasoning that involves inferring plans that best explain sets of observed actions. 
Most existing approaches to plan recognition and other abductive tasks employ either purely logical methods that do not handle uncertainty, or purely probabilistic methods that do not handle structured representations. To overcome these limitations, this paper introduces an approach to abductive reasoning using a first-order probabilistic logic, specifically Markov Logic Networks (MLNs). It introduces several novel techniques for making MLNs efficient and effective for abduction. Experiments on three plan recognition datasets show the benefit of our approach over existing methods.", "We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach." ] }
1512.08899
2470660316
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to three objective functions: cardinality minimality, coherence, and weighted abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP. Realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, because the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We identify term equivalence as a main instantiation bottleneck, and improve the efficiency of our approach with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers. We evaluate our approach experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark.
The termination proofs we give are related to the notion of Liberal Safety in programs @cite_17 ; however, Liberal Safety requires either specific acyclicity conditions (which are absent in our encodings) or conditions on the finiteness of the domain of certain attributes of the external atom (which our Skolemization atoms do not fulfill). Hence we had to prove termination without using Liberal Safety.
{ "cite_N": [ "@cite_17" ], "mid": [ "2193726304" ], "abstract": [ "Answer set programs with external source access may introduce new constants that are not present in the program, which is known as value invention. As naive value invention leads to programs with infinite grounding and answer sets, syntactic safety criteria are imposed on programs. However, traditional criteria are in many cases unnecessarily strong and limit expressiveness. We present liberal domain-expansion (de-) safe programs, a novel generic class of answer set programs with external source access that has a finite grounding and allows for value invention. De-safe programs use so-called term bounding functions as a parameter for modular instantiation with concrete--e.g., syntactic or semantic or both--safety criteria. This ensures extensibility of the approach in the future. We provide concrete instances of the framework and develop an operator that can be used for computing a finite grounding. Finally, we discuss related notions of safety from the literature, and show that our approach is strictly more expressive." ] }
1512.08899
2470660316
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to three objective functions: cardinality minimality, coherence, and weighted abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP. Realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, because the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We identify term equivalence as a main instantiation bottleneck, and improve the efficiency of our approach with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers. We evaluate our approach experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark.
In the area of Automated Theorem Proving, algorithms search for finite models (or theorems, or unsatisfiability proofs) in full first order logic without enforcing the UNA and with native support for Skolemization (cf. @cite_4 ). These algorithms focus on finding a feasible solution and do not support preferences (optimization criteria). However, the main emphasis of our abduction problems is to find solutions with optimal cost (recall that our problems always have the trivial solution of abducing all input atoms). To tackle our abduction problem with such theorem provers, it would be necessary to transform the optimization problem into a decision problem and perform a search over the optimization criterion, calling the prover several times. Related to theorem proving, a hypertableaux algorithm for coreference resolution is described in @cite_30 . This algorithm is inspired by weighted abduction, but it does not use preferences and relies solely on inconsistency for eliminating undesired solutions.
{ "cite_N": [ "@cite_30", "@cite_4" ], "mid": [ "1547478246", "1573992413" ], "abstract": [ "In this paper, we argue that the resolution of anaphoric expressions in an utterance is essentially an abductive task following [12] who use a weighted abduction scheme on horn clauses to deal with reference. We give a semantic representation for utterances containing anaphora that enables us to compute possible antecedents by abductive inference. We extend the disjunctive model construction procedure of hyper tableaux [3, 14] with a clause transformation turning the abductive task into a model generation problem and show the completeness of this transformation with respect to the computation of abductive explanations. This abductive inference is applied to the resolution of anaphoric expressions in our general model constructing framework for incremental discourse representation which we argue to be useful for computing information updates from natural language utterances.", "This paper describes the First-Order Form (FOF) and Clause Normal Form (CNF) parts of the TPTP problem library, and the associated infrastructure. TPTP v3.5.0 was the last release containing only FOF and CNF problems, and thus serves as the exemplar. This paper summarizes the history and development of the TPTP, describes the structure and contents of the TPTP, and gives an overview of TPTP related projects and tools." ] }
1512.08899
2470660316
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to three objective functions: cardinality minimality, coherence, and weighted abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP. Realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, because the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We identify term equivalence as a main instantiation bottleneck, and improve the efficiency of our approach with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers. We evaluate our approach experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark.
The complexity of abduction in the presence of an optimality objective has been analyzed in @cite_13 @cite_3 , and in @cite_40 the propositional case of abduction in logic programs is studied and extended to function-free logic programming abduction (Sec. 6), under the restriction that only constants from observations and knowledge base (there called manifestations and program ) are used and that the UNA holds for all terms. However, in our variant of abduction the optimal solution may use a set (of unspecified size) of constants that are not present in the input, and there is potential equality among certain input constants and constants originating from value invention. Hence, existing results can be seen as lower bounds for hardness but do not directly carry over to our scenario.
{ "cite_N": [ "@cite_40", "@cite_13", "@cite_3" ], "mid": [ "", "2037481186", "2100657934" ], "abstract": [ "", "Abstract The problem of abduction can be characterized as finding the best explanation of a set of data. In this paper we focus on one type of abduction in which the best explanation is the most plausible combination of hypotheses that explains all the data. We then present several computational complexity results demonstrating that this type of abduction is intractable (NP-hard) in general. In particular, choosing between incompatible hypotheses, reasoning about cancellation effects among hypotheses, and satisfying the maximum plausibility requirement are major factors leading to intractability. We also identify a tractable, but restricted, class of abduction problems.", "Abduction is an important form of nonmonotonic reasoning allowing one to find explanations for certain symptoms or manifestations. When the application domain is described by a logical theory, we speak about logic-based abduction . Candidates for abductive explanations are usually subjected to minimality criteria such as subset-minimality, minimal cardinality, minimal weight, or minimality under prioritization of individual hypotheses. This paper presents a comprehensive complexity analysis of relevant decision and search problems related to abduction on propositional theories. Our results indicate that abduction is harder than deduction. In particular, we show that with the most basic forms of abduction the relevant decision problems are complete for complexity classes at the second level of the polynomial hierarchy, while the use of prioritization raises the complexity to the third level in certain cases." ] }
1512.08899
2470660316
We study abduction in First Order Horn logic theories where all atoms can be abduced and we are looking for preferred solutions with respect to three objective functions: cardinality minimality, coherence, and weighted abduction. We represent this reasoning problem in Answer Set Programming (ASP), in order to obtain a flexible framework for experimenting with global constraints and objective functions, and to test the boundaries of what is possible with ASP. Realizing this problem in ASP is challenging as it requires value invention and equivalence between certain constants, because the Unique Names Assumption does not hold in general. To permit reasoning in cyclic theories, we formally describe fine-grained variations of limiting Skolemization. We identify term equivalence as a main instantiation bottleneck, and improve the efficiency of our approach with on-demand constraints that were used to eliminate the same bottleneck in state-of-the-art solvers. We evaluate our approach experimentally on the ACCEL benchmark for plan recognition in Natural Language Understanding. Our encodings are publicly available, modular, and our approach is more efficient than state-of-the-art solvers on the ACCEL benchmark.
In an acyclic theory, our reasoning problem is related to non-recursive negation-free Datalog theories and non-recursive logic programming with equality, which have been studied (although not with respect to abductive reasoning) in @cite_15 .
{ "cite_N": [ "@cite_15" ], "mid": [ "1969965298" ], "abstract": [ "This article surveys various complexity and expressiveness results on different forms of logic programming. The main focus is on decidable forms of logic programming, in particular, propositional logic programming and datalog, but we also mention general logic programming with function symbols. Next to classical results on plain logic programming (pure Horn clause programs), more recent results on various important extensions of logic programming are surveyed. These include logic programming with different forms of negation, disjunctive logic programming, logic programming with equality, and constraint logic programming." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
Locating a string in a larger text has applications in text editing, compiler front-ends, web browsers, internet search engines, computer security, and DNA sequence analysis. Early string searching algorithms such as Aho--Corasick @cite_10 , Boyer--Moore @cite_1 and Rabin--Karp @cite_13 efficiently match a finite set of input strings against an input text.
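As a sketch of the fingerprinting idea behind Rabin--Karp @cite_13 , the following minimal Python function (names and parameter choices are illustrative, not taken from the cited work) rolls a modular hash over the text and verifies a candidate only on a fingerprint hit:

```python
def rabin_karp(pattern: str, text: str, base: int = 256,
               mod: int = (1 << 61) - 1) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    msb = pow(base, m - 1, mod)          # weight of the char leaving the window
    p_hash = t_hash = 0
    for i in range(m):                   # fingerprints of pattern and first window
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # fingerprints match -> compare directly to rule out a hash collision
        if p_hash == t_hash and text[i:i + m] == pattern:
            return i
        if i < n - m:                    # roll the window one character right
            t_hash = ((t_hash - ord(text[i]) * msb) * base
                      + ord(text[i + m])) % mod
    return -1
```

The modulus `(1 << 61) - 1` is a Mersenne prime chosen here only to make collisions unlikely; real implementations typically use randomized moduli.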
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_13" ], "mid": [ "2134826720", "2099964107", "1972418517" ], "abstract": [ "An algorithm is presented that searches for the location, “ i l” of the first occurrence of a character string, “ pat ,” in another string, “ string .” During the search operation, the characters of pat are matched starting with the last character of pat . The information gained by starting the match at the end of the pattern often allows the algorithm to proceed in large jumps through the text being searched. Thus the algorithm has the unusual property that, in most cases, not all of the first i characters of string are inspected. The number of characters actually inspected (on the average) decreases as a function of the length of pat . For a random English pattern of length 5, the algorithm will typically inspect i 4 characters of string before finding a match at i . Furthermore, the algorithm has been implemented so that (on the average) fewer than i + patlen machine instructions are executed. These conclusions are supported with empirical evidence and a theoretical analysis of the average behavior of the algorithm. The worst case behavior of the algorithm is linear in i + patlen , assuming the availability of array space for tables linear in patlen plus the size of the alphabet. 3", "This paper describes a simple, efficient algorithm to locate all occurrences of any of a finite number of keywords in a string of text. The algorithm consists of constructing a finite state pattern matching machine from the keywords and then using the pattern matching machine to process the text string in a single pass. Construction of the pattern matching machine takes time proportional to the sum of the lengths of the keywords. The number of state transitions made by the pattern matching machine in processing the text string is independent of the number of keywords. 
The algorithm has been used to improve the speed of a library bibliographic search program by a factor of 5 to 10.", "We present randomized algorithms to solve the following string-matching problem and some of its generalizations: Given a string X of length n (the pattern) and a string Y (the text), find the first occurrence of X as a consecutive block within Y. The algorithms represent strings of length n by much shorter strings called fingerprints, and achieve their efficiency by manipulating fingerprints instead of longer strings. The algorithms require a constant number of storage locations, and essentially run in real time. They are conceptually simple and easy to implement. The method readily generalizes to higher-dimensional patternmatching problems." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
Regular expressions allow the specification of infinite sets of input strings. Converting a regular expression to a DFA for DFA membership tests is a standard technique to perform regular expression matching. The specification of virus signatures in intrusion prevention systems @cite_32 @cite_9 @cite_20 and the specification of DNA sequences @cite_12 @cite_3 constitute recent applications of regular expression matching with DFAs.
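The DFA membership test itself is a plain table walk. The sketch below is illustrative (the DFA, which accepts strings over {a, b} containing the substring "ab", is hand-built rather than produced by a regular expression compiler):

```python
def dfa_accepts(delta, start, accepting, s):
    """Run the DFA transition table over s and test the final state."""
    state = start
    for ch in s:
        state = delta[state][ch]
    return state in accepting

# DFA for (a|b)* a b (a|b)*: state 2 is reached once "ab" has been seen
DELTA = {
    0: {'a': 1, 'b': 0},   # nothing matched yet
    1: {'a': 1, 'b': 2},   # just read an 'a'
    2: {'a': 2, 'b': 2},   # accepting sink: "ab" already seen
}
```

Matching is linear in the input length with one table lookup per character, which is why DFAs are the workhorse for high-throughput signature matching.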
{ "cite_N": [ "@cite_9", "@cite_32", "@cite_3", "@cite_20", "@cite_12" ], "mid": [ "2006508099", "2100583963", "", "1674877186", "2155744824" ], "abstract": [ "Many network intrusion detection systems (NIDS) use byte sequences as signatures to detect malicious activity. While being highly efficient, they tend to suffer from a high false-positive rate. We develop the concept of contextual signatures as an improvement of string-based signature-matching. Rather than matching fixed strings in isolation, we augment the matching process with additional context. When designing an efficient signature engine for the NIDS bro, we provide low-level context by using regular expressions for matching, and high-level context by taking advantage of the semantic information made available by bro's protocol analysis and scripting language. Therewith, we greatly enhance the signature's expressiveness and hence the ability to reduce false positives. We present several examples such as matching requests with replies, using knowledge of the environment, defining dependencies between signatures to model step-wise attacks, and recognizing exploit scans.To leverage existing efforts, we convert the comprehensive signature set of the popular freeware NIDS snort into bro's language. While this does not provide us with improved signatures by itself, we reap an established base to build upon. Consequently, we evaluate our work by comparing to snort, discussing in the process several general problems of comparing different NIDSs.", "In this paper we explore the problem of creating vulnerability signatures. A vulnerability signature matches all exploits of a given vulnerability, even polymorphic or metamorphic variants. Our work departs from previous approaches by focusing on the semantics of the program and vulnerability exercised by a sample exploit instead of the semantics or syntax of the exploit itself. 
We show the semantics of a vulnerability define a language which contains all and only those inputs that exploit the vulnerability. A vulnerability signature is a representation (e.g., a regular expression) of the vulnerability language. Unlike exploit-based signatures whose error rate can only be empirically measured for known test cases, the quality of a vulnerability signature can be formally quantified for all possible inputs. We provide a formal definition of a vulnerability signature and investigate the computational complexity of creating and matching vulnerability signatures. We also systematically explore the design space of vulnerability signatures. We identify three central issues in vulnerability-signature creation: how a vulnerability signature represents the set of inputs that may exercise a vulnerability, the vulnerability coverage (i.e., number of vulnerable program paths) that is subject to our analysis during signature creation, and how a vulnerability signature is then created for a given representation and coverage. We propose new data-flow analysis and novel adoption of existing techniques such as constraint solving for automatically generating vulnerability signatures. We have built a prototype system to test our techniques. Our experiments show that we can automatically generate a vulnerability signature using a single exploit which is of much higher quality than previous exploit-based signatures. In addition, our techniques have several other security applications, and thus may be of independent interest.", "", "Network intrusion detection systems (NIDS) are an important part of any network security architecture. They provide a layer of defense which monitors network traffic for predefined suspicious activity or patterns, and alert system administrators when potential hostile traffic is detected. 
Commercial NIDS have many differences, but Information Systems departments must face the commonalities that they share such as significant system footprint, complex deployment and high monetary cost. Snort was designed to address these issues.", "PROSITE consists of documentation entries describing protein domains, families and functional sites, as well as associated patterns and profiles to identify them. It is complemented by ProRule, a collection of rules based on profiles and patterns, which increases the discriminatory power of these profiles and patterns by providing additional information about functionally and or structurally critical amino acids. PROSITE is largely used for the annotation of domain features of UniProtKB Swiss-Prot entries. Among the 983 (DNA-binding) domains, repeats and zinc fingers present in Swiss-Prot (release 57.8 of 22 September 2009), 696 (70 ) are annotated with PROSITE descriptors using information from ProRule. In order to allow better functional characterization of domains, PROSITE developments focus on subfamily specific profiles and a new profile building method giving more weight to functionally important residues. Here, we describe AMSA, an annotated multiple sequence alignment format used to build a new generation of generalized profiles, the migration of ScanProsite to Vital-IT, a cluster of 633 CPUs, and the adoption of the Distributed Annotation System (DAS) to facilitate PROSITE data integration and interchange with other sources. The latest version of PROSITE (release 20.54, of 22 September 2009) contains 1308 patterns, 863 profiles and 869 ProRules. PROSITE is accessible at: http: www.expasy.org prosite ." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
A straightforward way to exploit parallelism with DFA membership tests is to run a single DFA on multiple input streams in parallel, or to run multiple DFAs in parallel. This approach has been taken by @cite_24 with a DFA-based string matching system for network security on the IBM Cell BE processor. Similarly, @cite_16 investigated parallel architectures for packet inspection based on DFAs. Both approaches assume multiple input streams and a vast number of patterns (i.e., virus signatures), which is common in network security applications. However, neither approach parallelizes the DFA membership algorithm itself, which is required to improve applications with single, long-running membership tests such as DNA sequence analysis.
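The stream-level parallelism described above can be sketched with a thread pool, one membership test per input stream. This is illustrative only: under CPython's GIL it shows the structure rather than a real speedup, whereas the cited Cell BE implementations run the per-stream loops truly in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def dfa_accepts(delta, start, accepting, stream):
    state = start
    for ch in stream:
        state = delta[state][ch]
    return state in accepting

# Example DFA over {a, b} accepting strings that contain "ab"
DELTA = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 2}, 2: {'a': 2, 'b': 2}}

def match_streams(streams):
    """One membership test per stream; the tests run concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda s: dfa_accepts(DELTA, 0, {2}, s), streams))
```

Note that each individual stream is still matched sequentially, which is exactly the limitation pointed out above.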
{ "cite_N": [ "@cite_24", "@cite_16" ], "mid": [ "2157059791", "1999856755" ], "abstract": [ "The security of your data and of your network is in the hands of intrusion detection systems, virus scanners and spam filters, which are all critically based on string matching. But network links are getting faster and faster, and string matching is getting more and more difficult to perform in real time. Traditional processors are not keeping up with the performance demands, whereas specialized hardware will never be able to compete with commodity hardware in terms of cost effectiveness, reusability and ease of programming. Advanced multi-core architectures like the IBM Cell Broadband Engine promise unprecedented performance at a low cost, thanks to their popularity and production volume. Nevertheless, the suitability of the cell processor to string matching has not been investigated so far. In this paper we investigate the performance attainable by the cell processor when employed for string matching algorithms based on deterministic finite-state automata (DFA). Our findings show that the cell is an ideal candidate to tackle modern security needs: two processing elements alone, out of the eight available on one cell processor provide sufficient computational power to filter a network link with bit rates in excess of 10 Gbps.", "Multi-pattern matching is a key technique for implementing network security applications such as Network Intrusion Detection Protection Systems (NIDS NIPSes) where every packet is inspected against predefined attack signatures written in regular expressions (regexes). To this end, Deterministic Finite Automaton (DFA) is widely used for multi-regex matching, but existing DFAbased researches have claimed high throughput at an expenses of extremely high memory cost. In this paper, we propose a parallel architecture of DFA called Parallel DFA (PDFA), using multiple flow aggregations to increase the throughput with nearly no extra memory cost. 
The basic idea is to selectively store the DFA in multiple memory modules which can be accessed in parallel and to explore the potential parallelism. The memory cost of our system in both the average cases and the worst cases is analyzed, optimized and evaluated by numerical results. The evaluation shows that we obtain an average speedup of about 0.5k to 0.7k where k is the number of parallel memory modules under our synthetic trace and compressed real trace in a statistical average case, compared with the traditional DFA-based matching approaches." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
@cite_6 reported that with the IE 8 and Firefox web browsers, 3--40% of browsing time is spent in the browser front-end; they employ speculation to parallelize token detection (lexing) of HTML language front-ends. Similar to Holub and Štekr's @math -local automata, they use the preceding @math characters of a chunk to synchronize a DFA to a particular state. Unlike @math -locality, which is a static DFA property, they speculate the DFA to be in a particular, frequently occurring DFA state at the beginning of a chunk. Speculation fails if the DFA turns out to be in a different state, in which case the chunk needs to be re-matched. Lexing HTML documents results in frequent matches, and the structure of the regular expressions is reported to be simpler than that of, e.g., virus signatures @cite_2 . Speculation is facilitated by the fact that the state at the beginning of a token is always the same, regardless of where lexing started. A prototype implementation is reported to scale up to six of the eight synergistic processing units of the Cell BE.
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "1855338384", "2110199304" ], "abstract": [ "We argue that the transition from laptops to handheld computers will happen only if we rethink the design of web browsers. Web browsers are an indispensable part of the end-user software stack but they are too inefficient for handhelds. While the laptop reused the software stack of its desktop ancestor, solid-state device trends suggest that today's browser designs will not become sufficiently (1) responsive and (2) energy-efficient. We argue that browser improvements must go beyond JavaScript JIT compilation and discuss how parallelism may help achieve these two goals. Motivated by a future browser-based application, we describe the preliminary design of our parallel browser, its work-efficient parallel algorithms, and an actor-based scripting language.", "Intrusion prevention systems (IPSs) determine whether incoming traffic matches a database of signatures, where each signature is a regular expression and represents an attack or a vulnerability. IPSs need to keep up with ever-increasing line speeds, which has lead to the use of custom hardware. A major bottleneck that IPSs face is that they scan incoming packets one byte at a time, which limits their throughput and latency. In this paper, we present a method to search for arbitrary regular expressions by scanning multiple bytes in parallel using speculation. We break the packet in several chunks, opportunistically scan them in parallel, and if the speculation is wrong, correct it later. We present algorithms that apply speculation in single-threaded software running on commodity processors as well as algorithms for parallel hardware. Experimental results show that speculation leads to improvements in latency and throughput in both cases." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
The speculative parallel pattern matching (SPPM) approach by @cite_2 @cite_33 uses speculation to keep up with the increasing network line-speeds faced by intrusion prevention systems. SPPM DFAs represent virus signatures. As in the approach above, DFAs are speculated to be in a particular, frequently occurring DFA state at the beginning of a chunk. SPPM starts the speculative matching at the beginning of each chunk. With every input character, a speculative matching process stores the encountered DFA state for subsequent reference. Speculation fails if the DFA turns out to be in a different state at the beginning of a speculatively matched chunk. In this case, re-matching continues until the DFA synchronizes with the saved history state (in the worst case, the whole chunk needs to be re-matched). A single-threaded SPPM version is proposed to improve performance by issuing multiple independent memory accesses in parallel. Such pipelining (or interleaving) of DFA matches is orthogonal to our approach, which focuses on latency rather than throughput.
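A minimal reconstruction of this speculate-and-rematch scheme (illustrative, not the cited implementation; it assumes nonempty chunks and a caller-supplied guessed start state):

```python
def _match_chunk(delta, state, chunk):
    """Run the DFA over one chunk, recording the state after every character."""
    history = []
    for ch in chunk:
        state = delta[state][ch]
        history.append(state)
    return history

def speculative_match(delta, start, chunks, guess):
    """Final DFA state over the concatenated chunks.

    Pass 1 (parallelizable): every chunk but the first is matched from the
    guessed state.  Pass 2 (sequential): verify guesses; on a misprediction,
    re-match until the DFA synchronizes with the saved history.
    """
    histories = [_match_chunk(delta, start if i == 0 else guess, c)
                 for i, c in enumerate(chunks)]
    state = start
    for i, chunk in enumerate(chunks):
        expected = start if i == 0 else guess
        if state == expected:
            state = histories[i][-1]          # speculation was right
            continue
        for j, ch in enumerate(chunk):        # speculation failed: re-match
            state = delta[state][ch]
            if state == histories[i][j]:      # synchronized with saved history
                state = histories[i][-1]      # rest of the saved run is valid
                break
    return state

# Example DFA over {a, b} accepting strings that contain "ab"
DELTA = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 2}, 2: {'a': 2, 'b': 2}}
```

If the re-match loop never synchronizes, the whole chunk is re-matched, which is exactly the worst case described above.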
{ "cite_N": [ "@cite_33", "@cite_2" ], "mid": [ "1592471300", "2110199304" ], "abstract": [ "Intrusion prevention systems determine whether incoming traffic matches a database of signatures, where each signature in the database represents an attack or a vulnerability. IPSs need to keep up with ever-increasing line speeds, which leads to the use of custom hardware. A major bottleneck that IPSs face is that they scan incoming packets one byte at a time, which limits their throughput and latency. In this paper, we present a method for scanning multiple bytes in parallel using speculation. We break the packet in several chunks, opportunistically scan them in parallel and if the speculation is wrong, correct it later. We present algorithms that apply speculation in single-threaded software running on commodity processors as well as algorithms for parallel hardware. Experimental results show that speculation leads to improvements in latency and throughput in both cases.", "Intrusion prevention systems (IPSs) determine whether incoming traffic matches a database of signatures, where each signature is a regular expression and represents an attack or a vulnerability. IPSs need to keep up with ever-increasing line speeds, which has lead to the use of custom hardware. A major bottleneck that IPSs face is that they scan incoming packets one byte at a time, which limits their throughput and latency. In this paper, we present a method to search for arbitrary regular expressions by scanning multiple bytes in parallel using speculation. We break the packet in several chunks, opportunistically scan them in parallel, and if the speculation is wrong, correct it later. We present algorithms that apply speculation in single-threaded software running on commodity processors as well as algorithms for parallel hardware. Experimental results show that speculation leads to improvements in latency and throughput in both cases." ] }
1512.09228
2201352388
In this paper, we propose several optimizations for the SFA construction algorithm, which greatly reduce the in-memory footprint and the processing steps required to construct an SFA. We introduce fingerprints as a space- and time-efficient way to represent SFA states. To compute fingerprints, we apply the Barrett reduction algorithm and accelerate it using recent additions to the x86 instruction set architecture. We exploit fingerprints to introduce hashing for further optimizations. Our parallel SFA construction algorithm is nonblocking and utilizes instruction-level, data-level, and task-level parallelism of coarse-, medium- and fine-grained granularity. We adapt static workload distributions and align the SFA data-structures with the constraints of multicore memory hierarchies, to increase the locality of memory accesses and facilitate HW prefetching. We conduct experiments on the PROSITE protein database for FAs of up to 702 FA states to evaluate performance and effectiveness of our proposed optimizations. Evaluations have been conducted on a 4 CPU (64 cores) AMD Opteron 6378 system and a 2 CPU (28 cores, 2 hyperthreads per core) Intel Xeon E5-2697 v3 system. The observed speedups over the sequential baseline algorithm are up to 118541x on the AMD system and 2113968x on the Intel system.
The speculative parallel DFA membership test by @cite_5 parallelizes DFA membership tests for multicore, SIMD, and cloud computing environments. It is a speculative parallel matching method that handles arbitrary regular expressions: the input string is divided into chunks, the chunks are matched in parallel, and the matching results are combined. When the input string is partitioned, the algorithm chooses the number of characters per chunk depending on the number of possible start states. Unlike previous approaches to parallel membership tests, it is failure-free : sequential semantics are maintained, so speed-downs never happen.
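The failure-free flavor can be sketched by enumeration: each chunk worker computes a start-state-to-end-state map for its chunk, and the maps are folded sequentially. This is an illustrative reconstruction, not the cited implementation; per-chunk work grows with the number of DFA states, which is why the cited approach exploits DFA structure to shrink the set of possible start states.

```python
def _run(delta, state, chunk):
    for ch in chunk:
        state = delta[state][ch]
    return state

def chunk_map(delta, chunk):
    """End state of this chunk for every possible start state."""
    return {q0: _run(delta, q0, chunk) for q0 in delta}

def parallel_accepts(delta, start, accepting, chunks):
    maps = [chunk_map(delta, c) for c in chunks]   # embarrassingly parallel
    state = start
    for m in maps:                                 # cheap sequential combine
        state = m[state]
    return state in accepting

# Example DFA over {a, b} accepting strings that contain "ab"
DELTA = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 2}, 2: {'a': 2, 'b': 2}}
```

Because every possible start state is accounted for, no speculation can fail and no chunk is ever re-matched.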
{ "cite_N": [ "@cite_5" ], "mid": [ "2169680978" ], "abstract": [ "We present techniques to parallelize membership tests for Deterministic Finite Automata (DFAs). Our method searches arbitrary regular expressions by matching multiple bytes in parallel using speculation. We partition the input string into chunks, match chunks in parallel, and combine the matching results. Our parallel matching algorithm exploits structural DFA properties to minimize the speculative overhead. Unlike previous approaches, our speculation is failure-free, i.e., (1) sequential semantics are maintained, and (2) speed-downs are avoided altogether. On architectures with a SIMD gather-operation for indexed memory loads, our matching operation is fully vectorized. The proposed load-balancing scheme uses an off-line profiling step to determine the matching capacity of each participating processor. Based on matching capacities, DFA matches are load-balanced on inhomogeneous parallel architectures such as cloud computing environments. We evaluated our speculative DFA membership test for a representative set of benchmarks from the Perl-compatible Regular Expression (PCRE) library and the PROSITE protein database. Evaluation was conducted on a 4 CPU (40 cores) shared-memory node of the Intel Academic Program Manycore Testing Lab (Intel MTL), on the Intel AVX2 SDE simulator for 8-way fully vectorized SIMD execution, and on a 20-node (288 cores) cluster on the Amazon EC2 computing cloud. Obtained speedups are on the order of @math O 1 + | P | - 1 | Q | · ? , where @math | P | denotes the number of processors or SIMD units, @math | Q | denotes the number of DFA states, and @math 0 < ? ≤ 1 represents a statically computed DFA property. For all observed cases, we found that @math 0.02 < ? < 0.47 . Actual speedups range from 2.3 @math × to 38.8 @math × for up to 512 DFA states for PCRE, and between 1.3 @math × and 19.9 @math × for up to 1,288 DFA states for PROSITE on a 40-core MTL node. 
Speedups on the EC2 computing cloud range from 5.0 @math × to 65.8 @math × for PCRE, and from 5.0 @math × to 138.5 @math × for PROSITE. Speedups of our C-based DFA matcher over the Perl-based ScanProsite scan tool range from 559.3 @math × to 15079.7 @math × on a 40-core MTL node. We show the scalability of our approach for input-sizes of up to 10 GB." ] }
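The chunked speculative matching described above can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the toy DFA is invented, every chunk speculates over all states rather than a pruned start-state set, and the per-chunk work that would run in parallel is shown sequentially.

```python
# Sketch of speculative chunked DFA matching. Each chunk precomputes, for
# every possible entry state, the state the DFA would end in after consuming
# the chunk; composing these per-chunk maps in order recovers the exact
# sequential result, which is why the scheme is failure-free.

# Toy DFA over {'a', 'b'} that accepts strings ending in "ab".
STATES = [0, 1, 2]
START, ACCEPT = 0, {2}
DELTA = {
    (0, 'a'): 1, (0, 'b'): 0,
    (1, 'a'): 1, (1, 'b'): 2,
    (2, 'a'): 1, (2, 'b'): 0,
}

def run_chunk(chunk):
    """Map every possible entry state to the state reached after the chunk."""
    mapping = {}
    for s in STATES:
        q = s
        for c in chunk:
            q = DELTA[(q, c)]
        mapping[s] = q
    return mapping

def speculative_match(text, n_chunks=4):
    size = max(1, -(-len(text) // n_chunks))  # ceil division
    chunks = [text[i:i + size] for i in range(0, len(text), size)]
    # In a real implementation, each run_chunk call runs on its own core.
    maps = [run_chunk(ch) for ch in chunks]
    q = START
    for m in maps:  # cheap sequential composition of the chunk maps
        q = m[q]
    return q in ACCEPT

def sequential_match(text):
    q = START
    for c in text:
        q = DELTA[(q, c)]
    return q in ACCEPT
```

The speculative overhead is the |Q| factor inside `run_chunk`; the paper's contribution includes bounding this by exploiting structural DFA properties so that only a small set of plausible start states is tried per chunk.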
1512.08493
2267524689
With recent advances in radio-frequency identification (RFID), wireless sensor networks, and Web services, physical things are becoming an integral part of the emerging ubiquitous Web. Finding correlations of ubiquitous things is a crucial prerequisite for many important applications such as things search, discovery, classification, recommendation, and composition. This article presents DisCor-T, a novel graph-based method for discovering underlying connections of things via mining the rich content embodied in human-thing interactions in terms of user, temporal and spatial information. We model these various information using two graphs, namely spatio-temporal graph and social graph. Then, random walk with restart (RWR) is applied to find proximities among things, and a relational graph of things (RGT) indicating implicit correlations of things is learned. The correlation analysis lays a solid foundation contributing to improved effectiveness in things management. To demonstrate the utility, we develop a flexible feature-based classification framework on top of RGT and perform a systematic case study. Our evaluation exhibits the strength and feasibility of the proposed approach.
Some improvements to semi-supervised learning algorithms focused on the dependency between labels @cite_36 , while other work tried to capture the long-distance relevance of nodes. For example, @cite_8 proposed a nonparametric latent feature model for link prediction. In @cite_42 , Neville and Jensen used a clustering algorithm to find cluster memberships and fix the latent group variables for inference.
{ "cite_N": [ "@cite_36", "@cite_42", "@cite_8" ], "mid": [ "2056974656", "2605441573", "2158535911" ], "abstract": [ "We present a novel framework for multi-label learning that explicitly addresses the challenge arising from the large number of classes and a small size of training data. The key assumption behind this work is that two examples tend to have large overlap in their assigned class memberships if they share high similarity in their input patterns. We capitalize this assumption by first computing two sets of similarities, one based on the input patterns of examples, and the other based on the class memberships of the examples. We then search for the optimal assignment of class memberships to the unlabeled data that minimizes the difference between these two sets of similarities. The optimization problem is formulated as a constrained Non-negative Matrix Factorization (NMF) problem, and an algorithm is presented to efficiently find the solution. Compared to the existing approaches for multi-label learning, the proposed approach is advantageous in that it is able to explore both the unlabeled data and the correlation among different classes simultaneously. Experiments with text categorization show that our approach performs significantly better than several state-of-the-art classification techniques when the number of classes is large and the size of training data is small.", "The presence of autocorrelation provides strong motivation for using relational techniques for learning and inference. Autocorrelation is a statistical dependency between the values of the same variable on related entities and is a nearly ubiquitous characteristic of relational data sets. Recent research has explored the use of collective inference techniques to exploit this phenomenon. 
These techniques achieve significant performance gains by modeling observed correlations among class labels of related instances, but the models fail to capture a frequent cause of autocorrelation---the presence of underlying groups that influence the attributes on a set of entities. We propose a latent group model (LGM) for relational data, which discovers and exploits the hidden structures responsible for the observed autocorrelation among class labels. Modeling the latent group structure improves model performance, increases inference efficiency, and enhances our understanding of the datasets. We evaluate performance on three relational classification tasks and show that LGM outperforms models that ignore latent group structure, particularly when there is little information with which to seed inference.", "As the availability and importance of relational data—such as the friendships summarized on a social networking website—increases, it becomes increasingly important to have good models for such data. The kinds of latent structure that have been considered for use in predicting links in such networks have been relatively limited. In particular, the machine learning community has focused on latent class models, adapting Bayesian nonparametric methods to jointly infer how many latent classes there are while learning which entities belong to each class. We pursue a similar approach with a richer kind of latent variable—latent features—using a Bayesian nonparametric approach to simultaneously infer the number of features at the same time we learn which entities have each feature. Our model combines these inferred features with known covariates in order to perform link prediction. We demonstrate that the greater expressiveness of this approach allows us to improve performance on three datasets." ] }
1512.08422
2469057590
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences. In our model, a tree-based convolutional neural network (TBCNN) captures sentence-level semantics; then heuristic matching layers like concatenation, element-wise product difference combine the information in individual sentences. Experimental results show that our model outperforms existing sentence encoding-based approaches by a large margin.
Entailment recognition can be viewed as a task of sentence pair modeling. Most neural networks in this field involve a sentence-level model, followed by one or a few matching layers. They are sometimes called "Siamese" architectures @cite_18 .
{ "cite_N": [ "@cite_18" ], "mid": [ "2171590421" ], "abstract": [ "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory." ] }
1512.08422
2469057590
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences. In our model, a tree-based convolutional neural network (TBCNN) captures sentence-level semantics; then heuristic matching layers like concatenation, element-wise product difference combine the information in individual sentences. Experimental results show that our model outperforms existing sentence encoding-based approaches by a large margin.
The simplest approach to matching two sentences is perhaps to concatenate their vector representations [Arc-I] DRR,CNN:NIPS . Concatenation is also applied in our previous work on matching the subject and object in relation classification @cite_1 @cite_0 . Others apply additional heuristics, namely Euclidean distance, cosine measure, and element-wise absolute difference. The above methods operate on a fixed-size vector representation of a sentence and are categorized as encoding-based approaches. Thus the matching complexity is @math , i.e., independent of the sentence length. Word-by-word similarity matrices are introduced to enhance interaction. To obtain the similarity matrix, existing models (Arc-II) concatenate two words' vectors (after convolution), compute Euclidean distance, or apply a tensor product. In this way, the complexity is @math , where @math is the length of a sentence; hence similarity matrices are difficult to scale and less efficient for large datasets.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2236688737", "1750263989" ], "abstract": [ "Nowadays, neural networks play an important role in the task of relation classification. By designing different neural architectures, researchers have improved the performance to a large extent in comparison with traditional methods. However, existing neural networks for relation classification are usually of shallow architectures (e.g., one-layer convolutional neural networks or recurrent networks). They may fail to explore the potential representation space in different abstraction levels. In this paper, we propose deep recurrent neural networks (DRNNs) for relation classification to tackle this challenge. Further, we propose a data augmentation method by leveraging the directionality of relations. We evaluated our DRNNs on the SemEval-2010 Task 8, and achieve an F1-score of 86.1 , outperforming previous state-of-the-art recorded results.", "Relation classification is an important research arena in the field of natural language processing (NLP). In this paper, we present SDP-LSTM, a novel neural network to classify the relation of two entities in a sentence. Our neural architecture leverages the shortest dependency path (SDP) between two entities; multichannel recurrent neural networks, with long short term memory (LSTM) units, pick up heterogeneous information along the SDP. Our proposed model has several distinct features: (1) The shortest dependency paths retain most relevant information (to relation classification), while eliminating irrelevant words in the sentence. (2) The multichannel LSTM networks allow effective information integration from heterogeneous sources over the dependency paths. (3) A customized dropout strategy regularizes the neural network to alleviate overfitting. We test our model on the SemEval 2010 relation classification task, and achieve an @math -score of 83.7 , higher than competing methods in the literature." ] }
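The fixed-size, vector-based heuristics listed above (concatenation, element-wise product and absolute difference, cosine measure, Euclidean distance) can be sketched in a few lines. The vectors here are placeholders; in the actual models they would come from a sentence encoder, and which heuristics are combined into the matching layer is a per-model design choice.

```python
# Sketch of encoding-based sentence matching heuristics. Both inputs are
# fixed-size sentence encodings of the same dimension, so the cost is
# independent of sentence length.
import math

def match_features(v1, v2):
    """Combine two sentence vectors into one matching feature vector."""
    concat = v1 + v2                                  # concatenation
    product = [a * b for a, b in zip(v1, v2)]         # element-wise product
    abs_diff = [abs(a - b) for a, b in zip(v1, v2)]   # element-wise |difference|
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosine = dot / (n1 * n2) if n1 and n2 else 0.0    # cosine measure
    euclidean = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    return concat + product + abs_diff + [cosine, euclidean]
```

A word-by-word similarity matrix, by contrast, would compare every word vector of one sentence against every word vector of the other, which is where the quadratic cost in sentence length comes from.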
1512.08422
2469057590
In this paper, we propose the TBCNN-pair model to recognize entailment and contradiction between two sentences. In our model, a tree-based convolutional neural network (TBCNN) captures sentence-level semantics; then heuristic matching layers like concatenation, element-wise product difference combine the information in individual sentences. Experimental results show that our model outperforms existing sentence encoding-based approaches by a large margin.
Recently, several context-aware methods for sentence matching have been introduced. It is reported that RNNs over a single chain of two sentences are more informative than separate RNNs; a static attention over the first sentence is also useful when modeling the second one. Such context-awareness interweaves the sentence modeling and matching steps. In some scenarios like sentence pair re-ranking @cite_10 , it is not feasible to pre-calculate the vector representations of sentences, so the matching complexity is @math . Later work further develops a word-by-word attention mechanism and obtains a higher accuracy with a complexity order of @math .
{ "cite_N": [ "@cite_10" ], "mid": [ "2339852062" ], "abstract": [ "To establish an automatic conversation system between humans and computers is regarded as one of the most hardcore problems in computer science, which involves interdisciplinary techniques in information retrieval, natural language processing, artificial intelligence, etc. The challenges lie in how to respond so as to maintain a relevant and continuous conversation with humans. Along with the prosperity of Web 2.0, we are now able to collect extremely massive conversational data, which are publicly available. It casts a great opportunity to launch automatic conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will be able to find at least some responses from the massive repository for any user inputs. Given a human issued message, i.e., query, our system would provide a reply after adequate training and learning of how to respond. In this paper, we propose a retrieval-based conversation system with the deep learning-to-respond schema through a deep neural network framework driven by web data. The proposed model is general and unified for different conversation scenarios in open domain. We incorporate the impact of multiple data inputs, and formulate various features and factors with optimization into the deep learning framework. In the experiments, we investigate the effectiveness of the proposed deep neural network structures with better combinations of all different evidence. We demonstrate significant performance improvement against a series of standard and state-of-art baselines in terms of p@1, MAP, nDCG, and MRR for conversational purposes." ] }
1512.08240
2277747675
Abstract We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
Many diverse approaches to semi-supervised learning have been proposed @cite_1 @cite_35 . While semi-supervised techniques have shown promise in some applications, such as document classification @cite_27 , peptide identification @cite_43 , and cancer recurrence prediction @cite_8 , it has also been observed that these techniques may perform worse than their supervised counterparts. See, for instance, @cite_37 @cite_42 for an analysis of this problem, and @cite_26 for a practical example in part-of-speech tagging. In these cases, disregarding the unlabeled data would lead to better performance.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_26", "@cite_8", "@cite_42", "@cite_1", "@cite_43", "@cite_27" ], "mid": [ "1990334093", "182780697", "1968953480", "2151801481", "2170569305", "", "2053943711", "2097089247" ], "abstract": [ "Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data is labeled.The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. 
Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field.", "There is disclosed a liquid flow meter comprising a transparent glass tube upon which is positioned a pair of spaced photodetectors. The liquid to be measured is passed through the tube and through the detectors. A supply tank filled with gas at a suitable pressure is connected to a valve which, when actuated, injects into the liquid stream a bubble of predetermined size. Passage of the bubble through the two photodetectors actuates a timing circuit which displays the elapsed time, thereby giving an accurate measurement of flow rate.", "In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using an Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch reestimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. 
The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.", "Motivation: Gene expression profiling has shown great potential in outcome prediction for different types of cancers. Nevertheless, small sample size remains a bottleneck in obtaining robust and accurate classifiers. Traditional supervised learning techniques can only work with labeled data. Consequently, a large number of microarray data that do not have sufficient follow-up information are disregarded. To fully leverage all of the precious data in public databases, we turned to a semi-supervised learning technique, low density separation (LDS). Results: Using a clinically important question of predicting recurrence risk in colorectal cancer patients, we demonstrated that (i) semi-supervised classification improved prediction accuracy as compared with the state of the art supervised method SVM, (ii) performance gain increased with the number of unlabeled samples, (iii) unlabeled data from different institutes could be employed after appropriate processing and (iv) the LDS method is robust with regard to the number of input features. To test the general applicability of this semi-supervised method, we further applied LDS on human breast cancer datasets and also observed superior performance. Our results demonstrated great potential of semi-supervised learning in gene expression-based outcome prediction for cancer patients. Contact: ude.tlibrednav@gnahz.gnib Supplementary Information: Supplementary data are available at Bioinformatics online.", "This paper analyzes the performance of semi-supervised learning of mixture models. We show that unlabeled data can lead to an increase in classification error even in situations where additional labeled data would decrease classification error. 
We present a mathematical analysis of this \"degradation\" phenomenon and show that it is due to the fact that bias may be adversely affected by unlabeled data. We discuss the impact of these theoretical results to practical situations.", "", "Shotgun proteomics uses liquid chromatography-tandem mass spectrometry to identify proteins in complex biological samples. We describe an algorithm, called Percolator, for improving the rate of confident peptide identifications from a collection of tandem mass spectra. Percolator uses semi-supervised machine learning to discriminate between correct and decoy spectrum identifications, correctly assigning peptides to 17 more spectra from a tryptic Saccharomyces cerevisiae dataset, and up to 77 more spectra from non-tryptic digests, relative to a fully supervised approach.", "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. 
We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30 ." ] }
1512.08240
2277747675
Abstract We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
* Self-Learning A simple approach to semi-supervised learning is offered by the self-learning procedure @cite_23 , also known as Yarowsky's algorithm @cite_29 @cite_5 or retagging @cite_26 . Taking any classifier, we first estimate its parameters on only the labeled data. Using this trained classifier, we label the unlabeled objects and add them (or potentially only those we are most confident about) with their predicted labels to the labeled training set. The classifier parameters are then re-estimated using this enlarged training set to obtain a new classifier. This procedure is applied iteratively until the predicted labels of the unlabeled data no longer change.
{ "cite_N": [ "@cite_5", "@cite_29", "@cite_26", "@cite_23" ], "mid": [ "2101210369", "2152005244", "1968953480", "1975165783" ], "abstract": [ "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 .", "Many problems in computational linguistics are well suited for bootstrapping (semisupervised learning) techniques. The Yarowsky algorithm is a well-known bootstrapping algorithm, but it is not mathematically well understood. This article analyzes it as optimizing an objective function. More specifically, a number of variants of the Yarowsky algorithm (though not the original algorithm itself) are shown to optimize either likelihood or a closely related objective function K.", "In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using an Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch reestimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. 
The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.", "Abstract The construction of a suitable rule of allocation in the two-population discrimination problem is considered in the case where there are initially available from the populations II1, II2, n 1, n 2 observations and M unclassified observations. An iterative reclassification procedure based on the n 1 + n 3 + M observations is proposed and found asymptotically optimal when M → ∞ and n 1 and n 2 are moderately large. The case of finite M is evaluated by a Monte Carlo experiment which suggests that the proposed procedure, after only one iteration, gives a rule with smaller average risk than the usual rule based on just the n 1 + n 2 classified observations." ] }
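The self-learning loop described above works with any supervised base classifier. As a minimal, hypothetical illustration, the sketch below uses a nearest-centroid classifier on one-dimensional data and adds all pseudo-labeled points at once rather than only the most confident ones; both choices are simplifications.

```python
# Minimal self-learning (self-training) sketch with a nearest-centroid base
# classifier on 1-D data.

def fit_centroids(xs, ys):
    """Train the base classifier: one centroid (mean) per class."""
    cents = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(pts) / len(pts)
    return cents

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda label: abs(x - cents[label]))

def self_train(labeled_x, labeled_y, unlabeled_x, max_iter=100):
    pseudo = [None] * len(unlabeled_x)   # pseudo-labels, initially unknown
    for _ in range(max_iter):
        # Train on labeled data plus currently pseudo-labeled data.
        xs = labeled_x + [x for x, y in zip(unlabeled_x, pseudo) if y is not None]
        ys = labeled_y + [y for y in pseudo if y is not None]
        cents = fit_centroids(xs, ys)
        # Re-label the unlabeled objects with the new classifier.
        new_pseudo = [predict(cents, x) for x in unlabeled_x]
        if new_pseudo == pseudo:         # predicted labels no longer change
            break
        pseudo = new_pseudo
    return cents, pseudo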
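The self-learning loop described above works with any supervised base classifier. As a minimal, hypothetical illustration, the sketch below uses a nearest-centroid classifier on one-dimensional data and adds all pseudo-labeled points at once rather than only the most confident ones; both choices are simplifications.

```python
# Minimal self-learning (self-training) sketch with a nearest-centroid base
# classifier on 1-D data.

def fit_centroids(xs, ys):
    """Train the base classifier: one centroid (mean) per class."""
    cents = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(pts) / len(pts)
    return cents

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda label: abs(x - cents[label]))

def self_train(labeled_x, labeled_y, unlabeled_x, max_iter=100):
    pseudo = [None] * len(unlabeled_x)   # pseudo-labels, initially unknown
    for _ in range(max_iter):
        # Train on labeled data plus currently pseudo-labeled data.
        xs = labeled_x + [x for x, y in zip(unlabeled_x, pseudo) if y is not None]
        ys = labeled_y + [y for y in pseudo if y is not None]
        cents = fit_centroids(xs, ys)
        # Re-label the unlabeled objects with the new classifier.
        new_pseudo = [predict(cents, x) for x in unlabeled_x]
        if new_pseudo == pseudo:         # predicted labels no longer change
            break
        pseudo = new_pseudo
    return cents, pseudo
```

On well-separated data the pseudo-labels stabilize after a couple of iterations; when the base classifier's assumptions are violated, the same loop can reinforce early mistakes, which is the known failure mode of self-learning.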
1512.08240
2277747675
Abstract We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
One of the advantages of this procedure is that it can be applied to any supervised classifier. It has also shown practical success in some application domains, particularly document classification @cite_27 @cite_5 . Unfortunately, the process of self-training can also lead to severely decreased performance compared to the supervised solution @cite_37 @cite_42 . One can imagine that once an object is incorrectly labeled and added to the training set, its incorrect label may be reinforced, leading the solution away from the optimum. Self-learning is closely related to expectation maximization (EM) based approaches @cite_29 . Indeed, expectation maximization suffers from the same issues as self-learning @cite_35 . We compare the proposed approach to self-learning for the least squares classifier.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_29", "@cite_42", "@cite_27", "@cite_5" ], "mid": [ "1990334093", "182780697", "2152005244", "2170569305", "2097089247", "2101210369" ], "abstract": [ "Semi-supervised learning is a learning paradigm concerned with the study of how computers and natural systems such as humans learn in the presence of both labeled and unlabeled data. Traditionally, learning has been studied either in the unsupervised paradigm (e.g., clustering, outlier detection) where all the data is unlabeled, or in the supervised paradigm (e.g., classification, regression) where all the data is labeled.The goal of semi-supervised learning is to understand how combining labeled and unlabeled data may change the learning behavior, and design algorithms that take advantage of such a combination. Semi-supervised learning is of great interest in machine learning and data mining because it can use readily available unlabeled data to improve supervised learning tasks when the labeled data is scarce or expensive. Semi-supervised learning also shows potential as a quantitative tool to understand human category learning, where most of the input is self-evidently unlabeled. In this introductory book, we present some popular semi-supervised learning models, including self-training, mixture models, co-training and multiview learning, graph-based methods, and semi-supervised support vector machines. For each model, we discuss its basic mathematical formulation. The success of semi-supervised learning depends critically on some underlying assumptions. We emphasize the assumptions made by each model and give counterexamples when appropriate to demonstrate the limitations of the different models. In addition, we discuss semi-supervised learning for cognitive psychology. 
Finally, we give a computational learning theoretic perspective on semi-supervised learning, and we conclude the book with a brief discussion of open questions in the field.", "There is disclosed a liquid flow meter comprising a transparent glass tube upon which is positioned a pair of spaced photodetectors. The liquid to be measured is passed through the tube and through the detectors. A supply tank filled with gas at a suitable pressure is connected to a valve which, when actuated, injects into the liquid stream a bubble of predetermined size. Passage of the bubble through the two photodetectors actuates a timing circuit which displays the elapsed time, thereby giving an accurate measurement of flow rate.", "Many problems in computational linguistics are well suited for bootstrapping (semisupervised learning) techniques. The Yarowsky algorithm is a well-known bootstrapping algorithm, but it is not mathematically well understood. This article analyzes it as optimizing an objective function. More specifically, a number of variants of the Yarowsky algorithm (though not the original algorithm itself) are shown to optimize either likelihood or a closely related objective function K.", "This paper analyzes the performance of semi-supervised learning of mixture models. We show that unlabeled data can lead to an increase in classification error even in situations where additional labeled data would decrease classification error. We present a mathematical analysis of this \"degradation\" phenomenon and show that it is due to the fact that bias may be adversely affected by unlabeled data. We discuss the impact of these theoretical results to practical situations.", "This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. 
This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30 .", "This paper presents an unsupervised learning algorithm for sense disambiguation that, when trained on unannotated English text, rivals the performance of supervised techniques that require time-consuming hand annotations. The algorithm is based on two powerful constraints---that words tend to have one sense per discourse and one sense per collocation---exploited in an iterative bootstrapping procedure. Tested accuracy exceeds 96 ." ] }
1512.08240
2277747675
We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
The low-density assumption is used in entropy regularization @cite_7 as well as for support vector classification in the transductive support vector machine (TSVM) @cite_9 and the closely related semi-supervised SVM (S @math VM) @cite_40 @cite_12 . In these approaches, an additional term is added to the objective function to push the decision boundary away from regions of high density. Several approaches have been put forth to minimize the resulting non-convex objective function, such as the concave-convex procedure @cite_32 and difference convex programming @cite_12 @cite_41 .
{ "cite_N": [ "@cite_7", "@cite_41", "@cite_9", "@cite_32", "@cite_40", "@cite_12" ], "mid": [ "2145494108", "2158522957", "2107008379", "2142114717", "2107968230", "2128097790" ], "abstract": [ "We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables to incorporate unlabeled data in the standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. The performances are definitely in favor of minimum entropy regularization when generative models are misspecified, and the weighting of unlabeled data provides robustness to the violation of the \"cluster assumption\". Finally, we also illustrate that the method can also be far superior to manifold learning in high dimension spaces.", "In classification, semi-supervised learning occurs when a large amount of unlabeled data is available with only a small number of labeled data. In such a situation, how to enhance predictability of classification through unlabeled data is the focus. In this article, we introduce a novel large margin semi-supervised learning methodology, using grouping information from unlabeled data, together with the concept of margins, in a form of regularization controlling the interplay between labeled and unlabeled data. Based on this methodology, we develop two specific machines involving support vector machines and ψ-learning, denoted as SSVM and SPSI, through difference convex programming. In addition, we estimate the generalization error using both labeled and unlabeled data, for tuning regularizers. 
Finally, our theoretical and numerical analyses indicate that the proposed methodology achieves the desired objective of delivering high performance in generalization, particularly against some strong performers.", "", "We show how the concave-convex procedure can be applied to transductive SVMs, which traditionally require solving a combinatorial search problem. This provides for the first time a highly scalable algorithm in the nonlinear case. Detailed experiments verify the utility of our approach. Software is available at http: www.kyb.tuebingen.mpg.de bs people fabee transduction.html .", "We introduce a semi-supervised support vector machine (S3VM) method. Given a training set of labeled data and a working set of unlabeled data, S3VM constructs a support vector machine using both the training and working sets. We use S3VM to solve the transduction problem using overall risk minimization (ORM) posed by Vapnik. The transduction problem is to estimate the value of a classification function at the given points in the working set. This contrasts with the standard inductive learning problem of estimating the classification function at all possible values and then using the fixed function to deduce the classes of the working set data. We propose a general S3VM model that minimizes both the misclassification error and the function capacity based on all the available data. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program and then solved exactly using integer programming. Results of S3VM and the standard 1-norm support vector machine approach are compared on ten data sets. Our computational results support the statistical learning theory results showing that incorporating working data improves generalization when insufficient training information is available. 
In every case, S3VM either improved or showed no significant difference in generalization compared to the traditional approach.", "Large scale learning is often realistic only in a semi-supervised setting where a small set of labeled examples is available together with a large collection of unlabeled data. In many information retrieval and data mining applications, linear classifiers are strongly preferred because of their ease of implementation, interpretability and empirical performance. In this work, we present a family of semi-supervised linear support vector classifiers that are designed to handle partially-labeled sparse datasets with possibly very large number of examples and features. At their core, our algorithms employ recently developed modified finite Newton techniques. Our contributions in this paper are as follows: (a) We provide an implementation of Transductive SVM (TSVM) that is significantly more efficient and scalable than currently used dual techniques, for linear classification problems involving large, sparse datasets. (b) We propose a variant of TSVM that involves multiple switching of labels. Experimental results show that this variant provides an order of magnitude further improvement in training efficiency. (c) We present a new algorithm for semi-supervised learning based on a Deterministic Annealing (DA) approach. This algorithm alleviates the problem of local minimum in the TSVM optimization procedure while also being computationally attractive. We conduct an empirical study on several document classification tasks which confirms the value of our methods in large scale semi-supervised settings." ] }
1512.08240
2277747675
We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
* Safe semi-supervised learning. @cite_18 @cite_34 attempt to guard against the possibility of deterioration in performance by not introducing additional assumptions, but instead leveraging implicit assumptions already present in the choice of the supervised classifier. These assumptions link parameter estimates that depend on labeled data to parameter estimates that rely on all data. By exploiting these links, semi-supervised versions of the nearest mean classifier and the linear discriminant are derived. Because these links are unique to each classifier, the approach does not generalize directly to other classifiers. The method presented here is similar in spirit, but unlike @cite_18 @cite_34 , no explicit equations have to be formulated to link parameter estimates using only labeled data to parameter estimates based on all data. Moreover, our approach allows for theoretical analysis of the non-deterioration of the performance of the procedure.
{ "cite_N": [ "@cite_18", "@cite_34" ], "mid": [ "1781870695", "2123610365" ], "abstract": [ "A rather simple semi-supervised version of the equally simple nearest mean classifier is presented. However simple, the proposed approach is of practical interest as the nearest mean classifier remains a relevant tool in biomedical applications or other areas dealing with relatively high-dimensional feature spaces or small sample sizes. More importantly, the performance of our semi-supervised nearest mean classifier is typically expected to improve over that of its standard supervised counterpart and typically does not deteriorate with increasing numbers of unlabeled data. This behavior is achieved by constraining the parameters that are estimated to comply with relevant information in the unlabeled data, which leads, in expectation, to a more rapid convergence to the large-sample solution because the variance of the estimate is reduced. In a sense, our proposal demonstrates that it may be possible to properly train a known classification scheme such that it can benefit from unlabeled data, while avoiding the additional assumptions typically made in semi-supervised learning.", "We cast a semi-supervised nearest mean classifier, previously introduced by the first author, in a more principled log-likelihood formulation that is subject to constraints. This, in turn, leads us to make the important suggestion to not only investigate error rates of semi-supervised learners but also consider the risk they originally aim to optimize. We demonstrate empirically that in terms of classification error, mixed results are obtained when comparing supervised to semi-supervised nearest mean classification, while in terms of log-likelihood on the test set, the semi-supervised method consistently outperforms its supervised counterpart. 
Comparisons to self-learning, a standard approach in semi-supervised learning, are included to further clarify the way, in which our constrained nearest mean classifier improves over regular, supervised nearest mean classification." ] }
1512.08240
2277747675
We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assumptions into the objective function, but leverages implicit assumptions already present in the choice of the supervised least squares classifier. This method can be formulated as a quadratic programming problem and its solution can be found using a simple gradient descent procedure. We prove that, in a limited 1-dimensional setting, this approach never leads to performance worse than the supervised classifier. Experimental results show that also in the general multidimensional case performance improvements can be expected, both in terms of the squared loss that is intrinsic to the classifier and in terms of the expected classification error.
Aside from the work by @cite_18 @cite_34 , another attempt to construct a robust semi-supervised version of a supervised classifier has been made in @cite_31 , which introduces the safe semi-supervised support vector machine (S @math VM). This method is an extension of S @math VM @cite_40 which constructs a set of low-density decision boundaries with the help of the additional unlabeled data, and chooses the decision boundary, which, even in the worst-case, gives the highest gain in performance over the supervised solution. If the low-density assumption holds, this procedure provably increases classification accuracy over the supervised solution. The main difference with the method considered in this paper, however, is that we make no such additional assumptions. We show that even without these assumptions, safe improvements are possible for the least squares classifier.
{ "cite_N": [ "@cite_40", "@cite_18", "@cite_34", "@cite_31" ], "mid": [ "2107968230", "1781870695", "2123610365", "2183800414" ], "abstract": [ "We introduce a semi-supervised support vector machine (S3VM) method. Given a training set of labeled data and a working set of unlabeled data, S3VM constructs a support vector machine using both the training and working sets. We use S3VM to solve the transduction problem using overall risk minimization (ORM) posed by Vapnik. The transduction problem is to estimate the value of a classification function at the given points in the working set. This contrasts with the standard inductive learning problem of estimating the classification function at all possible values and then using the fixed function to deduce the classes of the working set data. We propose a general S3VM model that minimizes both the misclassification error and the function capacity based on all the available data. We show how the S3VM model for 1-norm linear support vector machines can be converted to a mixed-integer program and then solved exactly using integer programming. Results of S3VM and the standard 1-norm support vector machine approach are compared on ten data sets. Our computational results support the statistical learning theory results showing that incorporating working data improves generalization when insufficient training information is available. In every case, S3VM either improved or showed no significant difference in generalization compared to the traditional approach.", "A rather simple semi-supervised version of the equally simple nearest mean classifier is presented. However simple, the proposed approach is of practical interest as the nearest mean classifier remains a relevant tool in biomedical applications or other areas dealing with relatively high-dimensional feature spaces or small sample sizes. 
More importantly, the performance of our semi-supervised nearest mean classifier is typically expected to improve over that of its standard supervised counterpart and typically does not deteriorate with increasing numbers of unlabeled data. This behavior is achieved by constraining the parameters that are estimated to comply with relevant information in the unlabeled data, which leads, in expectation, to a more rapid convergence to the large-sample solution because the variance of the estimate is reduced. In a sense, our proposal demonstrates that it may be possible to properly train a known classification scheme such that it can benefit from unlabeled data, while avoiding the additional assumptions typically made in semi-supervised learning.", "We cast a semi-supervised nearest mean classifier, previously introduced by the first author, in a more principled log-likelihood formulation that is subject to constraints. This, in turn, leads us to make the important suggestion to not only investigate error rates of semi-supervised learners but also consider the risk they originally aim to optimize. We demonstrate empirically that in terms of classification error, mixed results are obtained when comparing supervised to semi-supervised nearest mean classification, while in terms of log-likelihood on the test set, the semi-supervised method consistently outperforms its supervised counterpart. Comparisons to self-learning, a standard approach in semi-supervised learning, are included to further clarify the way, in which our constrained nearest mean classifier improves over regular, supervised nearest mean classification.", "It is usually expected that, when labeled data are limited, the learning performance can be improved by exploiting unlabeled data. In many cases, however, the performances of current semi-supervised learning approaches may be even worse than purely using the limited labeled data. 
It is desired to have safe semi-supervised learning approaches which never degenerate learning performance by using unlabeled data. In this paper, we focus on semi-supervised support vector machines (S3VMs) and propose S4VMs, i.e., safe S3VMs. Unlike S3VMs which typically aim at approaching an optimal low-density separator, S4VMs try to exploit the candidate low-density separators simultaneously to reduce the risk of identifying a poor separator with unlabeled data. We describe two implementations of S4VMs, and our comprehensive experiments show that the overall performance of S4VMs are highly competitive to S3VMs, while in contrast to S3VMs which degenerate performance in many cases, S4VMs are never significantly inferior to inductive SVMs." ] }
1512.08269
2285691875
The Hidden Markov Model (HMM) is one of the mainstays of statistical modeling of discrete time series, with applications including speech recognition, computational biology, computer vision and econometrics. Estimating an HMM from its observation process is often addressed via the Baum-Welch algorithm, which is known to be susceptible to local optima. In this paper, we first give a general characterization of the basin of attraction associated with any global optimum of the population likelihood. By exploiting this characterization, we provide non-asymptotic finite sample guarantees on the Baum-Welch updates, guaranteeing geometric convergence to a small ball of radius on the order of the minimax rate around a global optimum. As a concrete example, we prove a linear rate of convergence for a hidden Markov mixture of two isotropic Gaussians given a suitable mean separation and an initialization within a ball of large radius around (one of) the true parameters. To our knowledge, these are the first rigorous local convergence guarantees to global optima for the Baum-Welch algorithm in a setting where the likelihood function is nonconvex. We complement our theoretical results with thorough numerical simulations studying the convergence of the Baum-Welch algorithm and illustrating the accuracy of our predictions.
Our work builds upon a framework for analysis of EM, as previously introduced by a subset of the current authors @cite_8 ; see also the follow-up work on regularized EM algorithms @cite_28 @cite_26 . All of this past work applies to models based on i.i.d. samples, and as we show in this paper, there are a number of non-trivial steps required to derive analogous theory for the dependent variables that arise for HMMs. Before doing so, let us put the results of this paper in context relative to older and more classical work on Baum-Welch and related algorithms.
{ "cite_N": [ "@cite_28", "@cite_26", "@cite_8" ], "mid": [ "2184753682", "", "2962737134" ], "abstract": [ "Latent variable models are a fundamental modeling tool in machine learning applications, but they present significant computational and analytical challenges. The popular EM algorithm and its variants, is a much used algorithmic tool; yet our rigorous understanding of its performance is highly incomplete. Recently, work in (2014) has demonstrated that for an important class of problems, EM exhibits linear local convergence. In the high-dimensional setting, however, the M-step may not be well defined. We address precisely this setting through a unified treatment using regularization. While regularization for high-dimensional problems is by now well understood, the iterative EM algorithm requires a careful balancing of making progress towards the solution while identifying the right structure (e.g., sparsity or low-rank). In particular, regularizing the M-step using the state-of-the-art high-dimensional prescriptions (e.g., Wainwright (2014)) is not guaranteed to provide this balance. Our algorithm and analysis are linked in a way that reveals the balance between optimization and statistical errors. We specialize our general framework to sparse gaussian mixture models, high-dimensional mixed regression, and regression with missing variables, obtaining statistical guarantees for each of these examples.", "", "We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM. Our analysis is divided into two parts: a treatment of these algorithms at the population level (in the limit of infinite data), followed by results that apply to updates based on a finite set of samples. First, we characterize the domain of attraction of any global maximizer of the population likelihood. 
This characterization is based on a novel view of the EM updates as a perturbed form of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed form of standard gradient ascent. Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples. We develop consequences of our general theory for three canonical examples of incompletedata problems: mixture of Gaussians, mixture of regressions, and linear regression with covariates missing completely at random. In each case, our theory guarantees that with a suitable initialization, a relatively small number of EM (or gradient EM) steps will yield (with high probability) an estimate that is within statistical error of the MLE. We provide simulations to confirm this theoretically predicted behavior." ] }
1512.08269
2285691875
The Hidden Markov Model (HMM) is one of the mainstays of statistical modeling of discrete time series, with applications including speech recognition, computational biology, computer vision and econometrics. Estimating an HMM from its observation process is often addressed via the Baum-Welch algorithm, which is known to be susceptible to local optima. In this paper, we first give a general characterization of the basin of attraction associated with any global optimum of the population likelihood. By exploiting this characterization, we provide non-asymptotic finite sample guarantees on the Baum-Welch updates, guaranteeing geometric convergence to a small ball of radius on the order of the minimax rate around a global optimum. As a concrete example, we prove a linear rate of convergence for a hidden Markov mixture of two isotropic Gaussians given a suitable mean separation and an initialization within a ball of large radius around (one of) the true parameters. To our knowledge, these are the first rigorous local convergence guarantees to global optima for the Baum-Welch algorithm in a setting where the likelihood function is nonconvex. We complement our theoretical results with thorough numerical simulations studying the convergence of the Baum-Welch algorithm and illustrating the accuracy of our predictions.
These latter two results are abstract, applicable to a broad class of HMMs. We then specialize them to the case of a hidden Markov mixture consisting of two isotropic components, with means separated by a constant distance, and obtain concrete guarantees for this model. It is worth comparing these results to past work in the i.i.d. setting, for which the problem of Gaussian mixture estimation under various separation assumptions has been extensively studied (e.g., @cite_23 @cite_4 @cite_36 @cite_31 ). The constant distance separation required in our work is much weaker than the separation assumptions imposed in papers that focus on correctly labeling samples in a mixture model. Our separation condition is related to, but in general incomparable with the non-degeneracy requirements in other work @cite_12 @cite_30 @cite_7 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_36", "@cite_23", "@cite_31", "@cite_12" ], "mid": [ "", "1980018091", "", "1907927509", "1956647075", "2026302946", "1825640910" ], "abstract": [ "", "We show that a simple spectral algorithm for learning a mixture of k spherical Gaussians in Rn works remarkably well--it succeeds in identifying the Gaussians assuming essentially the minimum possible separation between their centers that keeps them unique (solving an open problem of Arora and Kannan (Proceedings of the 33rd ACM STOC, 2001). The sample complexity and running time are polynomial in both n and k. The algorithm can be applied to the more general problem of learning a mixture of \"weakly isotropic\" distributions (e.g. a mixture of uniform distributions on cubes).", "", "In recent years analysis of complexity of learning Gaussian mixture models from sampled data has received significant attention in computational machine learning and theory communities. In this paper we present the first result showing that polynomial time learning of multidimensional Gaussian Mixture distributions is possible when the separation between the component means is arbitrarily small. Specifically, we present an algorithm for learning the parameters of a mixture of k identical spherical Gaussians in n-dimensional space with an arbitrarily small separation between the components, which is polynomial in dimension, inverse component separation and other input parameters for a fixed number of components k. The algorithm uses a projection to k dimensions and then a reduction to the 1-dimensional case. It relies on a theoretical analysis showing that two 1-dimensional mixtures whose densities are close in the L norm must have similar means and mixing coefficients. 
To produce the necessary lower bound for the L norm in terms of the distances between the corresponding means, we analyze the behavior of the Fourier transform of a mixture of Gaussians in one dimension around the origin, which turns out to be closely related to the properties of the Vandermonde matrix obtained from the component means. Analysis of minors of the Vandermonde matrix together with basic function approximation results allows us to provide a lower bound for the norm of the mixture in the Fourier domain and hence a bound in the original space. Additionally, we present a separate argument for reconstructing variance.", "Mixtures of Gaussians are among the most fundamental and widely used statistical models. Current techniques for learning such mixtures from data are local search heuristics with weak performance guarantees. We present the first provably correct algorithm for learning a mixture of Gaussians. This algorithm is very simple and returns the true centers of the Gaussians to within the precision specified by the user with high probability. It runs in time only linear in the dimension of the data and polynomial in the number of Gaussians.", "Given data drawn from a mixture of multivariate Gaussians, a basic problem is to accurately estimate the mixture parameters. We give an algorithm for this problem that has running time and data requirements polynomial in the dimension and the inverse of the desired accuracy, with provably minimal assumptions on the Gaussians. As a simple consequence of our learning algorithm, we we give the first polynomial time algorithm for proper density estimation for mixtures of k Gaussians that needs no assumptions on the mixture. It was open whether proper density estimation was even statistically possible (with no assumptions) given only polynomially many samples, let alone whether it could be computationally efficient. 
The building blocks of our algorithm are based on the work (Kalai , STOC 2010) that gives an efficient algorithm for learning mixtures of two Gaussians by considering a series of projections down to one dimension, and applying the method of moments to each univariate projection. A major technical hurdle in the previous work is showing that one can efficiently learn univariate mixtures of two Gaussians. In contrast, because pathological scenarios can arise when considering projections of mixtures of more than two Gaussians, the bulk of the work in this paper concerns how to leverage a weaker algorithm for learning univariate mixtures (of many Gaussians) to learn in high dimensions. Our algorithm employs hierarchical clustering and rescaling, together with methods for backtracking and recovering from the failures that can occur in our univariate algorithm. Finally, while the running time and data requirements of our algorithm depend exponentially on the number of Gaussians in the mixture, we prove that such a dependence is necessary.", "Hidden Markov Models (HMMs) are one of the most fundamental and widely used statistical tools for modeling discrete time series. In general, learning HMMs from data is computationally hard (under cryptographic assumptions), and practitioners typically resort to search heuristics which suffer from the usual local optima issues. We prove that under a natural separation condition (bounds on the smallest singular value of the HMM parameters), there is an efficient and provably correct algorithm for learning HMMs. The sample complexity of the algorithm does not explicitly depend on the number of distinct (discrete) observations-it implicitly depends on this quantity through spectral properties of the underlying HMM. This makes the algorithm particularly applicable to settings with a large number of observations, such as those in natural language processing where the space of observation is sometimes the words in a language. 
The algorithm is also simple, employing only a singular value decomposition and matrix multiplications." ] }
1512.08299
2955348018
Time-varying graphs are a useful model for networks with dynamic connectivity such as vehicular networks, yet, despite their great modeling power, many important features of time-varying graphs are still poorly understood. In this paper, we study the survivability properties of time-varying networks against unpredictable interruptions. We first show that the traditional definition of survivability is not effective in time-varying networks, and propose a new survivability framework. To evaluate the survivability of time-varying networks under the new framework, we propose two metrics that are analogous to MaxFlow and MinCut in static networks. We show that some fundamental survivability-related results such as Menger's Theorem only conditionally hold in time-varying networks. Then we analyze the complexity of computing the proposed metrics and develop several approximation algorithms. Finally, we conduct trace-driven simulations to demonstrate the application of our survivability framework to the robust design of a real-world bus communication network.
Despite the extensive research on time-varying graphs, there is very little literature on the survivability of time-varying networks. The work closest to ours was done by Berman @cite_18 and Kleinberg @cite_9 , who discussed vulnerability in so-called "edge-scheduled networks" or "temporal networks", where each link is active for exactly one slot and only permanent failures happen. Our work considers a more general graph model while leveraging the temporal features of failures, thus generalizing their results. Scellato @cite_13 investigated a similar problem in random time-varying graphs and proposed a metric called "temporal robustness". By comparison, our framework is deterministic and therefore guarantees worst-case survivability. Li @cite_32 studied a related but different problem in time-varying networks; specifically, they proposed heuristic algorithms to find the min-cost subgraph of a probabilistic time-varying graph such that the probability that the subgraph is temporally connected exceeds a given threshold.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_13", "@cite_32" ], "mid": [ "", "2049706966", "2093109628", "2066466703" ], "abstract": [ "", "An edge-scheduled network N is a multigraph G = (V, E), where each edge e ϵ E has been assigned two real weights: a start time α(e) and a finish time β(e). Such a multigraph models a communication or transportation network. A multiedge joining vertices u and v represents a direct communication (transportation) link between u and v, and the edges of the multiedge represent potential communications (transportations) between u and v over a fixed period of time. For a, b ϵ V, and k a nonnegative integer, we say that N is k-failure ab-invulnerable for the time period [0, t] if information can be relayed from a to b within that time period, even if up to k edges are deleted, i.e., “fail.” The k-failure ab-vulnerability threshold νab(k) is the earliest time t such that N is k-failure ab-invulnerable for the time period [0, t] [where νab(k) = ∞ if no such t exists]. Let κ denote the smallest k such that νab(k) = ∞. In this paper, we present an O(κ|E|) algorithm for computing νab(i), i = 0, …, κ −1. The latter algorithm constructs a set of κ pairwise edge-disjoint schedule-conforming paths P0, …, Pκ −1 such that the finish time of Pi is νab(i), i = 0, 1, …, κ −1. (A path P = a e1 u1 e2 ··· up−1 ep b is schedule-conforming if the finish time of edge ei is no greater than the start time of the next edge ei + 1.) The existence of such paths when α(e) = β(e) = 0, for all e ϵ E, implies Menger's Theorem. In this paper, we also show that the obvious analogs of these results for either multiedge deletions or vertex deletions do not hold. In fact, we show that the problem of finding k schedule-conforming paths such that no two paths pass through the same vertex (multiedge) is NP-complete, even for k = 2.
© 1996 John Wiley & Sons, Inc.", "The application of complex network models to communication systems has led to several important results: nonetheless, previous research has often neglected to take into account their temporal properties, which in many real scenarios play a pivotal role. At the same time, network robustness has come extensively under scrutiny. Understanding whether networked systems can undergo structural damage and yet perform efficiently is crucial to both their protection against failures and to the design of new applications. In spite of this, it is still unclear what type of resilience we may expect in a network which continuously changes over time. In this work, we present the first attempt to define the concept of temporal network robustness: we describe a measure of network robustness for time-varying networks and we show how it performs on different classes of random models by means of analytical and numerical evaluation. Finally, we report a case study on a real-world scenario, an opportunistic vehicular system of about 500 taxicabs, highlighting the importance of time in the evaluation of robustness. Particularly, we show how static approximation can wrongly indicate high robustness of fragile networks when adopted in mobile time-varying networks, while a temporal approach captures more accurately the system performance.", "Delay tolerant networks (DTNs) recently have drawn much attention from researchers due to their wide applications in various challenging environments. Previous DTN research mainly concentrates on information propagation and packet delivery. However, with possible participation of a large number of mobile devices, how to maintain efficient and dynamic topology becomes crucial. In this paper, we study the topology design problem in a predictable DTN where the time-evolving topology is known a priori or can be predicted. 
We model such a time-evolving network as a weighted directed space-time graph which includes both spacial and temporal information. Links inside the space-time graph are unreliable due to either the dynamic nature of wireless communications or the rough prediction of underlying human device mobility. The purpose of our reliable topology design problem is to build a sparse structure from the original space-time graph such that (1) for any pair of devices, there is a space-time path connecting them with a reliability higher than the required threshold; (2) the total cost of the structure is minimized. Such an optimization problem is NP-hard, thus we propose several heuristics which can significantly reduce the total cost of the topology while maintain the “reliable” connectivity over time. In this paper, we consider both unicast and broadcast reliability of a topology. Finally, extensive simulations are conducted on random DTNs, a synthetic space DTN, and a real-world DTN tracing data. Results demonstrate the efficiency of the proposed methods." ] }
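The schedule-conforming paths of @cite_18 admit a short algorithmic sketch: an edge (u, v, start, finish) may extend a path only if its start time is no earlier than our current arrival time at u, and a Dijkstra-style label-setting search over finish times then yields the earliest arrival, i.e. νab(0) in the notation above (the zero-failure threshold). Function and variable names are illustrative.

```python
import heapq

def earliest_arrival(edges, src, dst):
    """Earliest time dst is reachable from src via a schedule-conforming
    path: finish(e_i) <= start(e_{i+1}) along the path, starting at time 0.

    edges: iterable of (u, v, start, finish) with start <= finish.
    Returns float('inf') if no schedule-conforming path exists.
    """
    out = {}
    for u, v, start, finish in edges:
        out.setdefault(u, []).append((start, finish, v))
    best = {src: 0}            # best known arrival time per node
    pq = [(0, src)]            # (arrival time, node), label-setting
    while pq:
        t, u = heapq.heappop(pq)
        if t > best.get(u, float("inf")):
            continue           # stale entry
        if u == dst:
            return t
        for start, finish, v in out.get(u, []):
            # the edge is usable only if it starts after we arrive at u
            if start >= t and finish < best.get(v, float("inf")):
                best[v] = finish
                heapq.heappush(pq, (finish, v))
    return float("inf")
```

For example, with edges [("a","b",0,1), ("b","c",2,3), ("a","c",5,6)], the search takes a→b (arrive 1), then b→c (start 2 ≥ 1), arriving at time 3 rather than waiting for the direct a→c edge. Computing νab(k) for k > 0 additionally requires the edge-disjoint path construction of @cite_18 , which this sketch omits.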
1512.08260
2283184565
In this paper, we provide an elementary, unified treatment of two distinct blue-shift instabilities for the scalar wave equation on a fixed Kerr black hole background: the celebrated blue-shift at the Cauchy horizon (familiar from the strong cosmic censorship conjecture) and the time-reversed red-shift at the event horizon (relevant in classical scattering theory). Our first theorem concerns the latter and constructs solutions to the wave equation on Kerr spacetimes such that the radiation field along the future event horizon vanishes and the radiation field along future null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the future event horizon. Our second theorem constructs solutions to the wave equation on rotating Kerr spacetimes such that the radiation field along the past event horizon (extended into the black hole) vanishes and the radiation field along past null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the Cauchy horizon. The results make essential use of the scattering theory developed in Dafermos, Rodnianski and Shlapentokh-Rothman (A scattering theory for the wave equation on Kerr black hole exteriors (2014). arXiv:1412.8379) and exploit directly the time-translation invariance of the scattering map and the non-triviality of the transmission map.
Theorem above concerning the event horizon @math is related to a similar result proven in @cite_23 for the Schwarzschild @math case using certain monotonicity properties of the wave equation in spherical symmetry (Theorem 11.1 of @cite_23 ). In particular, the above generalisation shows that Theorem 2 of @cite_23 holds for all @math .
{ "cite_N": [ "@cite_23" ], "mid": [ "326163211" ], "abstract": [ "We develop a definitive physical-space scattering theory for the scalar wave equation on Kerr exterior backgrounds in the general subextremal case |a|<M. In particular, we prove results corresponding to \"existence and uniqueness of scattering states\" and \"asymptotic completeness\" and we show moreover that the resulting \"scattering matrix\" mapping radiation fields on the past horizon and past null infinity to radiation fields on the future horizon and future null infinity is a bounded operator. The latter allows us to give a time-domain theory of superradiant reflection. The boundedness of the scattering matrix shows in particular that the maximal amplification of solutions associated to ingoing finite-energy wave packets on past null infinity is bounded. On the frequency side, this corresponds to the novel statement that the suitably normalised reflection and transmission coefficients are uniformly bounded independently of the frequency parameters. We further complement this with a demonstration that superradiant reflection indeed amplifies the energy radiated to future null infinity of suitable wave-packets as above. The results make essential use of a refinement of our recent proof [M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman, Decay for solutions of the wave equation on Kerr exterior spacetimes III: the full subextremal case |a|<M, arXiv:1402.6034] of boundedness and decay for solutions of the Cauchy problem so as to apply in the class of solutions where only a degenerate energy is assumed finite. We show in contrast that the analogous scattering maps cannot be defined for the class of finite non-degenerate energy solutions. This is due to the fact that the celebrated horizon red-shift effect acts as a blue-shift instability when solving the wave equation backwards." ] }
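For reference, the scalar wave equation discussed throughout these records can be written in local coordinates on a Lorentzian manifold (M, g) as:

```latex
\Box_g \psi
  \;=\;
  \frac{1}{\sqrt{-\det g}}\,
  \partial_\mu \!\left( \sqrt{-\det g}\;, g^{\mu\nu}\, \partial_\nu \psi \right)
  \;=\; 0 ,
```

with the radiation fields along the horizons and null infinity obtained as suitably rescaled limits of ψ along the corresponding null hypersurfaces.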
1512.08260
2283184565
In this paper, we provide an elementary, unified treatment of two distinct blue-shift instabilities for the scalar wave equation on a fixed Kerr black hole background: the celebrated blue-shift at the Cauchy horizon (familiar from the strong cosmic censorship conjecture) and the time-reversed red-shift at the event horizon (relevant in classical scattering theory). Our first theorem concerns the latter and constructs solutions to the wave equation on Kerr spacetimes such that the radiation field along the future event horizon vanishes and the radiation field along future null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the future event horizon. Our second theorem constructs solutions to the wave equation on rotating Kerr spacetimes such that the radiation field along the past event horizon (extended into the black hole) vanishes and the radiation field along past null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the Cauchy horizon. The results make essential use of the scattering theory developed in Dafermos, Rodnianski and Shlapentokh-Rothman (A scattering theory for the wave equation on Kerr black hole exteriors (2014). arXiv:1412.8379) and exploit directly the time-translation invariance of the scattering map and the non-triviality of the transmission map.
Our Theorem above concerning the Cauchy horizon @math can be thought of as completing a paper of McNamara @cite_33 , where a conditional proof of a slightly weaker statement was given, showing that @math failed to be @math at the horizon, subject however to verifying certain statements concerning "non-zero transmission" to the Cauchy horizon (these needed statements in fact follow from our Theorem and could be used to complete his proof). Our proof (see below) will, however, be different from that of McNamara.
{ "cite_N": [ "@cite_33" ], "mid": [ "2104249908" ], "abstract": [ "Linear perturbations of black hole models by a variety of fields are considered. Perturbing fields include the zero rest mass scalar field in the case of Reissner-Nordstrom, and gravitational, electromagnetic and zero rest mass scalar perturbation in the case of the Kerr model. The analysis deals with the Ψ 0 components (in the Newman-Penrose (1962) formalism) of non-zero spin fields. The symmetry properties of the models are used to derive the crucial condition th at the field be singular on the inner horizon. This condition is independent of the field propagation equation. Initial data are then given in terms of incoming radiation from f - is shown that there exist wellbehaved initial data sets for which the resultant fields are singular on the inner horizon. It is emphasized that this instability result is dependent only on the global symmetries and causal structure of the models considered, and is independent of the precise nature of the perturbing field." ] }
1512.08260
2283184565
In this paper, we provide an elementary, unified treatment of two distinct blue-shift instabilities for the scalar wave equation on a fixed Kerr black hole background: the celebrated blue-shift at the Cauchy horizon (familiar from the strong cosmic censorship conjecture) and the time-reversed red-shift at the event horizon (relevant in classical scattering theory). Our first theorem concerns the latter and constructs solutions to the wave equation on Kerr spacetimes such that the radiation field along the future event horizon vanishes and the radiation field along future null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the future event horizon. Our second theorem constructs solutions to the wave equation on rotating Kerr spacetimes such that the radiation field along the past event horizon (extended into the black hole) vanishes and the radiation field along past null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the Cauchy horizon. The results make essential use of the scattering theory developed in Dafermos, Rodnianski and Shlapentokh-Rothman (A scattering theory for the wave equation on Kerr black hole exteriors (2014). arXiv:1412.8379) and exploit directly the time-translation invariance of the scattering map and the non-triviality of the transmission map.
Another approach to capturing the blue-shift instability at the Cauchy horizon @math is to identify a condition on the solution which ensures blow-up at @math . Such a condition was given by Luk--Oh @cite_32 in the Reissner--Nordström case, who moreover showed that their condition indeed holds for solutions arising from compactly supported data posed on an asymptotically flat hypersurface. (For some partial results concerning self-gravitating spherically symmetric scalar fields see @cite_18 @cite_27 .) In addition, the work @cite_32 gives an explicit characterization of the genericity assumption in terms of the asymptotics along future null infinity @math . In parallel with the present paper, Luk--Sbierski @cite_15 have obtained a Kerr analogue of the result of @cite_32 relating a polynomial lower bound along @math to infinite local energy at @math . Obtaining a characterization of spacelike initial data for which this lower bound holds remains an open problem. In broad terms, one expects that the strategy of @cite_15 will be applicable in the study of the instability properties of the Cauchy horizon in the full non-linear theory governed by the Einstein vacuum equations (cf. @cite_20 ).
{ "cite_N": [ "@cite_18", "@cite_32", "@cite_27", "@cite_15", "@cite_20" ], "mid": [ "2130710172", "2200779991", "2004482079", "2964048390", "" ], "abstract": [ "This paper considers a trapped characteristic initial value problem for the spherically symmetric Einstein-Maxwell-scalar field equations. For an open set of initial data whose closure contains in particular Reissner-Nordström data, the future boundary of the maximal domain of development is found to be a light-like surface along which the curvature blows up, and yet the metric can be continuously extended beyond it. This result is related to the strong cosmic censorship conjecture of Roger Penrose. The principle of determinism in classical physics is expressed mathematically by the uniqueness of solutions to the initial value problem for certain equations of evolution. Indeed, in the context of the Einstein equations of general relativity, where the unknown is the very structure of space and time, uniqueness is equivalent on a fundamental level to the validity of this principle. The question of uniqueness may thus be termed the issue of the predictability of the equation. The present paper explores the issue of predictability in general relativity. Since the work of Leray, it has been known that for the Einstein equations, contrary to common experience, uniqueness for the Cauchy problem in the large does not generally hold even within the class of smooth solutions. In other words, uniqueness may fail without any loss in regularity; such failure is thus a global phenomenon. The central question is whether this violation of predictability may occur in solutions representing actual physical processes. Physical phenomena and concepts related to the general theory of relativity, namely gravitational collapse, black holes, angular momentum, etc., must certainly come into play in the study of this problem.
Unfortunately, the mathematical analysis of this exciting problem is very difficult, at present beyond reach for the vacuum Einstein equations in the physical dimension. Conse-", "Adapting and extending the techniques developed in recent work with Vasy for the study of the Cauchy horizon of cosmological spacetimes, we obtain boundedness, regularity and decay of linear scalar waves on subextremal Reissner-Nordström and (slowly rotating) Kerr spacetimes, without any symmetry assumptions; in particular, we provide simple microlocal and scattering theoretic proofs of analogous results by Franzen. We show polynomial decay of linear waves relative to a Sobolev space of order slightly above @math . This complements the generic @math blow-up result of Luk and Oh.", "We consider a spherically symmetric, double characteristic initial value problem for the (real) Einstein-Maxwell-scalar field equations. On the initial outgoing characteristic, the data is assumed to satisfy the Price law decay widely believed to hold on an event horizon arising from the collapse of an asymptotically flat Cauchy surface. We establish that the heuristic mass inflation scenario put forth by Israel and Poisson is mathematically correct in the context of this initial value problem. In particular, the maximal future development has a future boundary over which the space-time is extendible as a C0 metric but along which the Hawking mass blows up identically; thus, the space-time is inextendible as a C1 metric. In view of recent results of the author in collaboration with I. Rodnianski, which rigorously establish the validity of Price's law as an upper bound for the decay of scalar field hair, the C0 extendibility result applies to the collapse of complete, asymptotically flat, spacelike initial data where the scalar field is compactly supported. This shows that under Christodoulou's C0 formulation, the strong cosmic censorship conjecture is false for this system.
© 2005 Wiley Periodicals, Inc.", "Abstract We prove that a large class of smooth solutions ψ to the linear wave equation □ g ψ = 0 on subextremal rotating Kerr spacetimes which are regular and decaying along the event horizon become singular at the Cauchy horizon. More precisely, we show that assuming appropriate upper and lower bounds on the energy along the event horizon, the solution has infinite (non-degenerate) energy on any spacelike hypersurfaces intersecting the Cauchy horizon transversally. Extrapolating from known results in the Reissner–Nordstrom case, the assumed upper and lower bounds required for our theorem are conjectured to hold for solutions arising from generic smooth and compactly supported initial data on a Cauchy hypersurface. This result is motivated by the strong cosmic censorship conjecture in general relativity.", "" ] }
1512.08260
2283184565
In this paper, we provide an elementary, unified treatment of two distinct blue-shift instabilities for the scalar wave equation on a fixed Kerr black hole background: the celebrated blue-shift at the Cauchy horizon (familiar from the strong cosmic censorship conjecture) and the time-reversed red-shift at the event horizon (relevant in classical scattering theory). Our first theorem concerns the latter and constructs solutions to the wave equation on Kerr spacetimes such that the radiation field along the future event horizon vanishes and the radiation field along future null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the future event horizon. Our second theorem constructs solutions to the wave equation on rotating Kerr spacetimes such that the radiation field along the past event horizon (extended into the black hole) vanishes and the radiation field along past null infinity decays at an arbitrarily fast polynomial rate, yet, the local energy of the solution is infinite near any point on the Cauchy horizon. The results make essential use of the scattering theory developed in Dafermos, Rodnianski and Shlapentokh-Rothman (A scattering theory for the wave equation on Kerr black hole exteriors (2014). arXiv:1412.8379) and exploit directly the time-translation invariance of the scattering map and the non-triviality of the transmission map.
Let us also note several other classical attempts in the physics literature to understand the blue-shift instability at the Cauchy horizon @cite_34 @cite_4 @cite_9 . In the extremal Reissner--Nordström and Kerr cases, the local red-shift along @math and the local blue-shift along @math both vanish. This leads to fundamentally different expectations for the qualitative behavior of waves in the black hole exterior and interior; see @cite_35 @cite_10 @cite_28 @cite_6 @cite_0 @cite_12 for the current state of the art. We note, however, that even the question of boundedness for general solutions to the wave equation on extremal Kerr exteriors remains an open problem (see the discussion in @cite_25 ).
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_28", "@cite_9", "@cite_6", "@cite_0", "@cite_34", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2100194997", "2076262004", "2964046786", "2028523941", "2963014429", "2963324175", "2136187123", "2593560900", "2963397643", "" ], "abstract": [ "We study the problem of stability and instability of extreme Reissner-Nordstrom spacetimes for linear scalar perturbations. Specifically, we consider solutions to the linear wave equation ( □_g ψ = 0 ) on a suitable globally hyperbolic subset of such a spacetime, arising from regular initial data prescribed on a Cauchy hypersurface Σ0 crossing the future event horizon ( H^+ ). We obtain boundedness, decay and non-decay results. Our estimates hold up to and including the horizon ( H^+ ). The fundamental new aspect of this problem is the degeneracy of the redshift on ( H^+ ). Several new analytical features of degenerate horizons are also presented.", "We describe the evolution of a scalar test field on the interior of a Reissner-Nordstroem black hole. For a wide variety of initial field configurations the energy density in the scalar field is shown to develop singularities in a neighborhood of the geometry's Cauchy horizon, suggesting that for a stellar collapse curvature singularities will develop prior to encountering the Cauchy horizon. The extension to the interior of stationary perturbations due to exterior sources is shown not to disrupt the Cauchy horizon.", "Abstract We study the Cauchy problem for the wave equation □_g ψ = 0 on extreme Kerr backgrounds. Specifically, we consider regular axisymmetric initial data prescribed on a Cauchy hypersurface Σ0 which connects the future event horizon with spacelike or null infinity, and we solve the linear wave equation on the domain of dependence of Σ0 .
We show that the spacetime integral of an energy-type density is bounded by the initial conserved flux corresponding to the stationary Killing field T , and we derive boundedness of the non-degenerate energy flux corresponding to a globally timelike vector field N . Finally, we prove uniform pointwise boundedness and power-law decay for ψ up to and including the event horizon H^+ .", "The stability of the inner Reissner-Nordstroem geometry is studied with test massless integer-spin fields. In contrast to previous mathematical treatments we present physical arguments for the processes involved and show that ray tracing and simple first-order scattering suffice to elucidate most of the results. Monochromatic waves which are of small amplitude and ingoing near the outer horizon develop infinite energy densities near the inner Cauchy horizon (as measured by a freely falling observer). Previous work has shown that certain derivatives of the field in a general (nonmonochromatic) disturbance must fall off exponentially near the inner (Cauchy) horizon (r = r_-) if energy densities are to remain finite. Thus the solution is unstable to physically reasonable perturbations which arise outside the black hole because such perturbations, if localized near past null infinity (I^-), cannot be localized near r_+ , the outer horizon. The mass-energy of an infalling disturbance would generate multipole moments on the black hole. Price, Sibgatullin, and Alekseev have shown that such moments are radiated away as ''tails'' which travel outward and are rescattered inward yielding a wave field with a time dependence t^{-p} , p > 0. This decay in time is sufficiently slow that the tails yield infinite energy densities on the Cauchy horizon. (The amplification of the low-frequency tails upon interacting with the time-dependent potential between the horizons is an important feature guaranteeing the infinite energy density.)
The interior structure of the analytically extended solution is thus disrupted by finite external disturbances. It has further been shown that even perturbations which are localized as they cross the outer horizon produce singularities at the inner horizon. It is shown that this singularity arises when the incoming radiation is first scattered just inside the outer horizon.", "", "We consider solutions to the linear wave equation in the interior region of extremal Reissner–Nordstrom black holes. We show that, under suitable assumptions on the initial data, the solutions can be extended continuously beyond the Cauchy horizon and, moreover, that their local energy is finite. This result is in contrast with previously established results for subextremal Reissner–Nordstrom black holes, where the local energy was shown to generically blow up at the Cauchy horizon.", "The behaviour, on the Cauchy horizon, of a flux of gravitational and or electromagnetic radiation crossing the event horizon of a Reissner-Nordstrom black-hole is investigated as a problem in the theory of one-dimensional potential-scattering. It is shown that the flux of radiation received by an observer crossing the Cauchy horizon, along a radial time-like geodesic, diverges for all physically reasonable perturbations crossing the event horizon, even including those with compact support.", "This paper contains the second part of a two-part series on the stability and instability of extreme Reissner–Nordstrom spacetimes for linear scalar perturbations. We continue our study of solutions to the linear wave equation ( □_g ψ = 0 ) on a suitable globally hyperbolic subset of such a spacetime, arising from regular initial data prescribed on a Cauchy hypersurface Σ0 crossing the future event horizon ( H^+ ). We here obtain definitive energy and pointwise decay, non-decay and blow-up results. Our estimates hold up to and including the horizon ( H^+ ).
A hierarchy of conservation laws on degenerate horizons is also derived.", "This paper concludes the series begun in [M. Dafermos and I. Rodnianski, Decay for solutions of the wave equation on Kerr exterior spacetimes I-II: the cases |a| ≪ M or axisymmetry, arXiv:1010.5132], providing the complete proof of definitive boundedness and decay results for the scalar wave equation on Kerr backgrounds in the general subextremal |a| < M case without symmetry assumptions. The essential ideas of the proof (together with explicit constructions of the most difficult mul", "" ] }
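The red-shift and blue-shift factors discussed in these records are governed by the surface gravities of the two horizons. For a subextremal Kerr black hole of mass M and angular momentum parameter a, with horizon radii r_± = M ± √(M² − a²), the surface gravities are (up to sign conventions for the inner horizon)

```latex
\kappa_\pm \;=\; \frac{r_+ - r_-}{2\,\left(r_\pm^2 + a^2\right)} ,
```

so that both vanish precisely in the extremal limit |a| = M (where r_+ = r_-), which is why the local red-shift at the event horizon and the local blue-shift at the Cauchy horizon degenerate in the extremal case.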
1512.08048
2276974676
Vehicles are becoming more and more connected, this opens up a larger attack surface which not only affects the passengers inside vehicles, but also people around them. These vulnerabilities exist because modern systems are built on the comparatively less secure and old CAN bus framework which lacks even basic authentication. Since a new protocol can only help future vehicles and not older vehicles, our approach tries to solve the issue as a data analytics problem and use machine learning techniques to secure cars. We develop a Hidden Markov Model to detect anomalous states from real data collected from vehicles. Using this model, while a vehicle is in operation, we are able to detect and issue alerts. Our model could be integrated as a plug-n-play device in all new and old cars.
There are two ways to address the current security problem: prevent attacks from happening, or detect them and mitigate the resulting risk. One of the main reasons attackers are able to inject potentially malicious messages on the CAN bus is that the protocol lacks authentication mechanisms @cite_10 . Researchers have tried to address this problem with cryptography: @cite_1 examined the requirements of cryptographic functions for car security and proposed embedded solutions that add cryptographic functions to different ECUs, providing protection against malicious manipulation.
{ "cite_N": [ "@cite_1", "@cite_10" ], "mid": [ "2153861733", "2116520617" ], "abstract": [ "For new automotive applications and services, information technology (IT) has gained central importance. IT-related costs in car manufacturing are already high and they will increase dramatically in the future. Yet whereas safety and reliability have become a relatively well-established field, the protection of vehicular IT systems against systematic manipulation or intrusion has only recently started to emerge. Nevertheless, IT security is already the base of some vehicular applications such as immobilizers or digital tachographs. To securely enable future automotive applications and business models, IT security will be one of the central technologies for the next generation of vehicles. After a state-of-the-art overview of IT security in vehicles, we give a short introduction into cryptographic terminology and functionality. This contribution will then identify the need for automotive IT security while presenting typical attacks, resulting security objectives, and characteristic constraints within the automotive area. We will introduce core security technologies and relevant security mechanisms followed by a detailed description of critical vehicular applications, business models, and components relying on IT security. We conclude our contribution with a detailed statement about challenges and opportunities for the automotive IT community for embedding IT security in vehicles.", "Modern automobiles are no longer mere mechanical devices; they are pervasively monitored and controlled by dozens of digital computers coordinated via internal vehicular networks. While this transformation has driven major advancements in efficiency and safety, it has also introduced a range of new potential risks. In this paper we experimentally evaluate these issues on a modern automobile and demonstrate the fragility of the underlying system structure. 
We demonstrate that an attacker who is able to infiltrate virtually any Electronic Control Unit (ECU) can leverage this ability to completely circumvent a broad array of safety-critical systems. Over a range of experiments, both in the lab and in road tests, we demonstrate the ability to adversarially control a wide range of automotive functions and completely ignore driver input including disabling the brakes, selectively braking individual wheels on demand, stopping the engine, and so on. We find that it is possible to bypass rudimentary network security protections within the car, such as maliciously bridging between our car's two internal subnets. We also present composite attacks that leverage individual weaknesses, including an attack that embeds malicious code in a car's telematics unit and that will completely erase any evidence of its presence after a crash. Looking forward, we discuss the complex challenges in addressing these vulnerabilities while considering the existing automotive ecosystem." ] }
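The detection step described in this record can be sketched as follows: score sliding windows of discretised CAN symbols under a hidden Markov model with the forward algorithm, and flag windows whose log-likelihood is unusually low. The parameters (pi, A, B) would in practice be learned from normal driving data (e.g. with Baum–Welch), and everything here, including the window size and threshold, is an illustrative stand-in rather than the authors' implementation.

```python
import numpy as np

def window_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation window under an HMM,
    via the forward algorithm with per-step scaling.

    pi: (n,) initial state distribution; A: (n, n) transitions;
    B: (n, m) emission probabilities for m discrete symbols.
    """
    alpha = pi * B[:, obs[0]]
    s = alpha.sum()
    ll = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        ll += np.log(s)
        alpha /= s
    return ll

def flag_anomalies(stream, pi, A, B, win=10, thresh=-25.0):
    """Slide a window over a stream of discretised CAN symbols and
    return the start indices of windows scoring below the threshold
    (the threshold itself would be calibrated on normal traffic)."""
    return [i for i in range(len(stream) - win + 1)
            if window_loglik(stream[i:i + win], pi, A, B) < thresh]
```

In a deployment along the lines the record describes, the symbol stream would come from quantised CAN message IDs or payload features, and an alert would be issued whenever `flag_anomalies` returns a hit while the vehicle is in operation.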
1512.08048
2276974676
Vehicles are becoming more and more connected, which opens up a larger attack surface that affects not only the passengers inside vehicles but also the people around them. These vulnerabilities exist because modern systems are built on the old and comparatively insecure CAN bus framework, which lacks even basic authentication. Since a new protocol can only help future vehicles and not older ones, our approach treats the issue as a data analytics problem and uses machine learning techniques to secure cars. We develop a Hidden Markov Model to detect anomalous states from real data collected from vehicles. Using this model, we are able to detect anomalies and issue alerts while a vehicle is in operation. Our model could be integrated as a plug-n-play device in both new and old cars.
An interesting paper @cite_3 that is quite relevant to our study uses accelerometer and GPS data to develop a movement and behavior model for cattle using hidden Markov models. The authors collect real data for individual cows in the herd and then try to predict their movements using machine learning models. They develop a 3-state model that is able to describe animal movement and state-transition behavior accurately.
{ "cite_N": [ "@cite_3" ], "mid": [ "2042690051" ], "abstract": [ "The study described in this paper developed a model of animal movement, which explicitly recognised each individual as the central unit of measure. The model was developed by learning from a real dataset that measured and calculated, for individual cows in a herd, their linear and angular positions and directional and angular speeds. Two learning algorithms were implemented: a Hidden Markov model (HMM) and a long-term prediction algorithm. It is shown that a HMM can be used to describe the animal's movement and state transition behaviour within several “stay” areas where cows remained for long periods. Model parameters were estimated for hidden behaviour states such as relocating, foraging and bedding. For cows’ movement between the “stay” areas a long-term prediction algorithm was implemented. By combining these two algorithms it was possible to develop a successful model, which achieved similar results to the animal behaviour data collected. This modelling methodology could easily be applied to interactions of other animal species." ] }
1512.08086
2949334740
In the context of fine-grained visual categorization, the ability to interpret models as human-understandable visual manuals is sometimes as important as achieving high classification accuracy. In this paper, we propose a novel Part-Stacked CNN architecture that explicitly explains the fine-grained recognition process by modeling subtle differences from object parts. Based on manually-labeled strong part annotations, the proposed architecture consists of a fully convolutional network to locate multiple object parts and a two-stream classification network that encodes object-level and part-level cues simultaneously. By adopting a set of sharing strategies between the computation of multiple object parts, the proposed architecture is very efficient, running at 20 frames/sec during inference. Experimental results on the CUB-200-2011 dataset reveal the effectiveness of the proposed architecture, from both the perspective of classification accuracy and model interpretability.
A number of methods have been developed to classify object categories at the subordinate level. Recently, the best performing methods have mostly sought improvement along three aspects: more discriminative features, including deep CNNs, for better visual representation @cite_34 @cite_5 @cite_1 @cite_43 @cite_15 , explicit alignment approaches to eliminate pose displacements @cite_22 @cite_0 , and part-based methods to study the impact of object parts @cite_2 @cite_24 @cite_33 @cite_14 @cite_4 . Another line of research explored human-in-the-loop methods @cite_10 @cite_46 @cite_3 to identify the most discriminative regions for classifying fine-grained categories. Although such methods provide direct references for how people perform fine-grained recognition in real life, they are difficult to scale to large systems due to the need for human interaction at test time.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_22", "@cite_33", "@cite_46", "@cite_1", "@cite_3", "@cite_0", "@cite_43", "@cite_24", "@cite_2", "@cite_5", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2729172879", "2950918464", "1616462885", "2035818039", "", "", "", "", "2950179405", "2147414309", "", "", "1686810756", "2103444992", "2103490241" ], "abstract": [ "We present a simple and effective architecture for fine-grained visual recognition called Bilinear Convolutional Neural Networks (B-CNNs). These networks represent an image as a pooled outer product of features derived from two CNNs and capture localized feature interactions in a translationally invariant manner. B-CNNs belong to the class of orderless texture representations but unlike prior work they can be trained in an end-to-end manner. Our most accurate model obtains 84.1%, 79.4%, 86.9% and 91.3% per-image accuracy on the Caltech-UCSD birds [67], NABirds [64], FGVC aircraft [42], and Stanford cars [33] dataset respectively and runs at 30 frames-per-second on a NVIDIA Titan X GPU. We then present a systematic analysis of these networks and show that (1) the bilinear features are highly redundant and can be reduced by an order of magnitude in size without significant loss in accuracy, (2) are also effective for other image classification tasks such as texture and scene recognition, and (3) can be trained from scratch on the ImageNet dataset offering consistent improvements over the baseline architecture. Finally, we present visualizations of these models on various datasets using top activations of neural units and gradient-based inversion techniques. The source code for the complete system is available at this http URL.", "We investigate the importance of parts for the tasks of action and attribute classification. We develop a part-based approach by leveraging convolutional network features inspired by recent advances in computer vision. Our part detectors are a deep version of poselets and capture parts of the human body under a distinct set of poses. For the tasks of action and attribute classification, we train holistic convolutional neural networks and show that adding parts leads to top-performing results for both tasks. In addition, we demonstrate the effectiveness of our approach when we replace an oracle person detector, as is the default in the current evaluation protocol for both tasks, with a state-of-the-art person detection system.", "We propose an architecture for fine-grained visual categorization that approaches expert human performance in the classification of bird species. Our architecture first computes an estimate of the object's pose; this is used to compute local image features which are, in turn, used for classification. The features are computed by applying deep convolutional nets to image patches that are located and normalized by the pose. We perform an empirical study of a number of pose normalization schemes, including an investigation of higher order geometric warping functions. We propose a novel graph-based clustering algorithm for learning a compact pose normalization space. We perform a detailed investigation of state-of-the-art deep convolutional feature implementations and fine-tuning feature learning for fine-grained classification. We observe that a model that integrates lower-level feature layers with pose-normalized extraction routines and higher-level feature layers with unaligned image features works best. Our experiments advance state-of-the-art performance on bird species recognition, with a large improvement of correct classification rates over previous methods (75% vs. 55-65%).", "Part and attribute based representations are widely used to support high-level search and retrieval applications. However, learning computer vision models for automatically extracting these from images requires significant effort in the form of part and attribute labels and annotations. We propose an annotation framework based on comparisons between pairs of instances within a set, which aims to reduce the overhead in manually specifying the set of part and attribute labels. Our comparisons are based on intuitive properties such as correspondences and differences, which are applicable to a wide range of categories. Moreover, they require few category specific instructions and lead to simple annotation interfaces compared to traditional approaches. On a number of visual categories we show that our framework can use noisy annotations collected via \"crowdsourcing\" to discover semantic parts useful for detection and parsing, as well as attributes suitable for fine-grained recognition.", "", "", "", "", "We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.", "", "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "The design of low-level image features is critical for computer vision algorithms. Orientation histograms, such as those in SIFT [16] and HOG [3], are the most successful and popular features for visual object and scene recognition. We highlight the kernel view of orientation histograms, and show that they are equivalent to a certain type of match kernels over image patches. This novel view allows us to design a family of kernel descriptors which provide a unified and principled framework to turn pixel attributes (gradient, color, local binary pattern, etc.) into compact patch-level features. In particular, we introduce three types of match kernels to measure similarities between image patches, and construct compact low-dimensional kernel descriptors from these match kernels using kernel principal component analysis (KPCA) [23]. Kernel descriptors are easy to design and can turn any type of pixel attribute into patch-level features. They outperform carefully tuned and sophisticated features including SIFT and deep belief networks. We report superior performance on standard image classification benchmarks: Scene-15, Caltech-101, CIFAR10 and CIFAR10-ImageNet.", "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required." ] }
1512.08086
2949334740
In the context of fine-grained visual categorization, the ability to interpret models as human-understandable visual manuals is sometimes as important as achieving high classification accuracy. In this paper, we propose a novel Part-Stacked CNN architecture that explicitly explains the fine-grained recognition process by modeling subtle differences from object parts. Based on manually-labeled strong part annotations, the proposed architecture consists of a fully convolutional network to locate multiple object parts and a two-stream classification network that encodes object-level and part-level cues simultaneously. By adopting a set of sharing strategies between the computation of multiple object parts, the proposed architecture is very efficient, running at 20 frames/sec during inference. Experimental results on the CUB-200-2011 dataset reveal the effectiveness of the proposed architecture, from both the perspective of classification accuracy and model interpretability.
The fully convolutional network (FCN) is a fast and effective approach to producing dense predictions with convolutional networks. Successful examples can be found on tasks including sliding window detection @cite_19 , semantic segmentation @cite_30 , and human pose estimation @cite_17 . We find the problem of part landmark localization in fine-grained recognition closely related to human pose estimation, in which a critical step is to detect a set of key points indicating multiple components of the human body.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_17" ], "mid": [ "2952632681", "1487583988", "2952422028" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques." ] }
1512.08168
2285278765
Given a finite alphabet and a deterministic finite automaton over it, the problem of determining whether the language recognized by the automaton contains any pangram is NP-complete. Various other language classes and problems around pangrams are analyzed.
The All Colors Shortest Path problem (ACSP) @cite_5 asks for a shortest path in an undirected graph with colored nodes, under the constraint that the path must visit all colors. The pangram problem can be regarded as an edge-colored, directed version of the All Colors Path problem. Our result shows that the problem remains NP-complete even if the shortestness condition is dropped and only the existence of such a path is asked.
{ "cite_N": [ "@cite_5" ], "mid": [ "2240749562" ], "abstract": [ "All Colors Shortest Path problem defined on an undirected graph aims at finding a shortest, possibly non-simple, path where every color occurs at least once, assuming that each vertex in the graph is associated with a color known in advance. To the best of our knowledge, this paper is the first to define and investigate this problem. Even though the problem is computationally similar to generalized minimum spanning tree, and the generalized traveling salesman problems, allowing for non-simple paths where a node may be visited multiple times makes All Colors Shortest Path problem novel and computationally unique. In this paper we prove that All Colors Shortest Path problem is NP-hard, and does not lend itself to a constant factor approximation. We also propose several heuristic solutions for this problem based on LP-relaxation, simulated annealing, ant colony optimization, and genetic algorithm, and provide extensive simulations for a comparative analysis of them. The heuristics presented are not the standard implementations of the well known heuristic algorithms, but rather sophisticated models tailored for the problem in hand. This fact is acknowledged by the very promising results reported." ] }
1512.07972
2951875877
The power side channel is an important category of side channels that can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to obtain various kinds of confidential information about mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploiting a power side channel on an Android smartphone, demonstrating that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
Most smartphones are power-hungry devices, yet many batteries last less than a day under typical usage. The most power-consuming components are the CPU, network, screen, and various sensors such as GPS and the camera @cite_26 . For most Android smartphones, the screen consumes a majority of the total power. Screen power is significantly affected by brightness and pixel color @cite_18 , so it is possible to reduce the power consumption of an app by adjusting its color scheme @cite_45 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_45" ], "mid": [ "1965638809", "1768994003", "" ], "abstract": [ "Emerging organic light-emitting diode (OLED)-based displays obviate external lighting, and consume drastically different power when displaying different colors, due to their emissive nature. This creates a pressing need for OLED display power models for system energy management, optimization as well as energy-efficient GUI design, given the display content or even the graphical-user interface (GUI) code. In this work, we study this opportunity using commercial QVGA OLED displays and user studies. We first present a comprehensive treatment of power modeling of OLED displays, providing models that estimate power consumption based on pixel, image, and code, respectively. These models feature various tradeoffs between computation efficiency and accuracy so that they can be employed in different layers of a mobile system. We validate the proposed models using a commercial QVGA OLED module and a mobile device with a QVGA OLED display. Then, based on the models, we propose techniques that adapt GUIs based on existing mechanisms as well as arbitrarily under usability constraints. Our measurement and user studies show that more than 75 percent display power reduction can be achieved with user acceptance.", "Mobile consumer-electronics devices, especially phones, are powered from batteries which are limited in size and therefore capacity. This implies that managing energy well is paramount in such devices. Good energy management requires a good understanding of where and how the energy is used. To this end we present a detailed analysis of the power consumption of a recent mobile phone, the Openmoko Neo Freerunner. We measure not only overall system power, but the exact breakdown of power consumption by the device's main hardware components. We present this power breakdown for micro-benchmarks as well as for a number of realistic usage scenarios. These results are validated by overall power measurements of two other devices: the HTC Dream and Google Nexus One. We develop a power model of the Freerunner device and analyse the energy usage and battery lifetime under a number of usage patterns. We discuss the significance of the power drawn by various components, and identify the most promising areas to focus on for further improvements of power management. We also analyse the energy impact of dynamic voltage and frequency scaling of the device's application processor.", "" ] }
1512.07972
2951875877
The power side channel is an important category of side channels that can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to obtain various kinds of confidential information about mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploiting a power side channel on an Android smartphone, demonstrating that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
For a mobile application, the power consumed by a particular execution trace in a stable environment is determined by the behavior of the app. Different types of apps consume different amounts of power according to their usage of CPU cycles, network traffic, screen brightness, etc. An app's power consumption can therefore be modeled from its resource usage @cite_10 , system call traces @cite_32 , source code @cite_6 , etc.
{ "cite_N": [ "@cite_10", "@cite_32", "@cite_6" ], "mid": [ "1577404640", "2091229785", "1983885898" ], "abstract": [ "Understanding the energy consumption of a smartphone application is a key area of interest for end users, as well as application and system software developers. Previous work has only been able to provide limited information concerning the energy consumption of individual applications because of limited access to underlying hardware and system software. The energy consumption of a smartphone application is, therefore, often estimated with low accuracy and granularity. In this paper, we propose AppScope, an Android-based energy metering system. This system monitors application's hardware usage at the kernel level and accurately estimates energy consumption. AppScope is implemented as a kernel module and uses an event-driven monitoring method that generates low overhead and provides high accuracy. The evaluation results indicate that AppScope accurately estimates the energy consumption of Android applications expending approximately 35mW and 2.1% in power consumption and CPU utilization overhead, respectively.", "Computer systems face increasing challenges in simultaneously meeting an application's energy, performance, and reliability goals. While energy and performance tradeoffs have been studied through different dynamic voltage and frequency scaling (DVFS) policies and power management schemes, tradeoffs of energy and performance with reliability have not been studied for general purpose computing. This is particularly relevant for application domains such as multimedia, where some limited application error tolerance can be exploited to reduce energy [7]. In this paper, we present EPROF, an optimization framework based on Mixed-Integer Linear Programming (MILP) that selects possible schedules for running tasks on multiprocessors in order to minimize energy while meeting constraints on application performance and reliability. We consider parallel applications that express (on task graphs) the performance and reliability goals they need to achieve, and that run on chip multiprocessors made up of heterogeneous processor cores that offer different energy performance reli-ability tradeoffs. For the StreamIt benchmarks [16], EPROF can identify schedules that offer up to 34% energy reduction over a baseline method while achieving the targeted performance and reliability. More broadly, EPROF demonstrates how these three degrees of freedom (energy, performance and reliability) can be flexibly exploited as needed for different applications.", "Optimizing the energy efficiency of mobile applications can greatly increase user satisfaction. However, developers lack viable techniques for estimating the energy consumption of their applications. This paper proposes a new approach that is both lightweight in terms of its developer requirements and provides fine-grained estimates of energy consumption at the code level. It achieves this using a novel combination of program analysis and per-instruction energy modeling. In evaluation, our approach is able to estimate energy consumption to within 10% of the ground truth for a set of mobile applications from the Google Play store. Additionally, it provides useful and meaningful feedback to developers that helps them to understand application energy consumption behavior." ] }
1512.07972
2951875877
The power side channel is an important category of side channels that can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to obtain various kinds of confidential information about mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploiting a power side channel on an Android smartphone, demonstrating that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
To protect computing systems from side-channel attacks, much prior research has proposed mitigation or protection methods. Examples include redesigning encryption methods to resist power analysis @cite_16 @cite_51 , predictive mitigation of timing channels for interactive applications @cite_35 , system-level protection against cache-based side channel attacks @cite_2 , and new cache designs that thwart software cache-based side channel attacks @cite_53 .
{ "cite_N": [ "@cite_35", "@cite_53", "@cite_2", "@cite_16", "@cite_51" ], "mid": [ "2171690178", "2166293920", "1584476834", "2155244510", "" ], "abstract": [ "Timing channels remain a difficult and important problem for information security. Recent work introduced predictive mitigation, a new way to mitigating leakage through timing channels; this mechanism works by predicting timing from past behavior, and then enforcing the predictions. This paper generalizes predictive mitigation to a larger and important class of systems: systems that receive input requests from multiple clients and deliver responses. The new insight is that timing predictions may be a function of any public information, rather than being a function simply of output events. Based on this insight, a more general mechanism and theory of predictive mitigation becomes possible. The result is that bounds on timing leakage can be tightened, achieving asymptotically logarithmic leakage under reasonable assumptions. By applying it to web applications, the generalized predictive mitigation mechanism is shown to be effective in practice.", "Software cache-based side channel attacks are a serious new class of threats for computers. Unlike physical side channel attacks that mostly target embedded cryptographic devices, cache-based side channel attacks can also undermine general purpose systems. The attacks are easy to perform, effective on most platforms, and do not require special instruments or excessive computation power. In recently demonstrated attacks on software implementations of ciphers like AES and RSA, the full key can be recovered by an unprivileged user program performing simple timing measurements based on cache misses. We first analyze these attacks, identifying cache interference as the root cause of these attacks. We identify two basic mitigation approaches: the partition-based approach eliminates cache interference whereas the randomization-based approach randomizes cache interference so that zero information can be inferred. We present new security-aware cache designs, the Partition-Locked cache (PLcache) and Random Permutation cache (RPcache), analyze and prove their security, and evaluate their performance. Our results show that our new cache designs with built-in security can defend against cache-based side channel attacks in general-rather than only specific attacks on a given cryptographic algorithm-with very little performance degradation and hardware cost.", "Cloud services are rapidly gaining adoption due to the promises of cost efficiency, availability, and on-demand scaling. To achieve these promises, cloud providers share physical resources to support multi-tenancy of cloud platforms. However, the possibility of sharing the same hardware with potential attackers makes users reluctant to offload sensitive data into the cloud. Worse yet, researchers have demonstrated side channel attacks via shared memory caches to break full encryption keys of AES, DES, and RSA. We present STEALTHMEM, a system-level protection mechanism against cache-based side channel attacks in the cloud. STEALTHMEM manages a set of locked cache lines per core, which are never evicted from the cache, and efficiently multiplexes them so that each VM can load its own sensitive data into the locked cache lines. Thus, any VM can hide memory access patterns on confidential data from other VMs. Unlike existing state-of-the-art mitigation methods, STEALTHMEM works with existing commodity hardware and does not require profound changes to application software. We also present a novel idea and prototype for isolating cache lines while fully utilizing memory by exploiting architectural properties of set-associative caches. STEALTHMEM imposes 5.9% of performance overhead on the SPEC 2006 CPU benchmark, and between 2% and 5% overhead on secured AES, DES and Blowfish, requiring only between 3 and 34 lines of code changes from the original implementations.", "For making elliptic curve point multiplication secure against side-channel attacks, various methods have been proposed using special point representations for specifically chosen elliptic curves. We show that the same goal can be achieved based on conventional elliptic curve arithmetic implementations. Our point multiplication method is much more general than the proposals requiring non-standard point representations; in particular, it can be used with the curves recommended by NIST and SECG. It also provides efficiency advantages over most earlier proposals.", "" ] }
1512.07972
2951875877
Power side channel is a very important category of side channels, which can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to gain various confidential information of mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploit a power side channel on an Android smartphone, which demonstrates that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
Chari et al. @cite_43 propose a sound approach to counteracting power analysis attacks. It includes an abstract model that approximates power consumption in most devices and a generic technique for creating provably resistant implementations on devices whose power model has reasonable properties. They prove a lower bound on the number of experiments required to mount statistical attacks on devices whose physical characteristics satisfy reasonable properties.
{ "cite_N": [ "@cite_43" ], "mid": [ "2612208439" ], "abstract": [ "Side channel cryptanalysis techniques such as the analysis of instantaneous power consumption, have been extremely effective in attacking implementations on simple hardware platforms. There are several proposed solutions to resist these attacks, most of which are ad-hoc and can easily be rendered ineffective. A scientific approach is to create a model for the physical characteristics of the device, and then design implementations provably secure in that model, i.e, they resist generic attacks with an a priori bound on the number of experiments. We propose an abstract model which approximates power consumption in most devices and in particular small single-chip devices. Using this, we propose a generic technique to create provably resistant implementations for devices where the power model has reasonable properties, and a source of randomness exists. We prove a lower bound on the number of experiments required to mount statistical attacks on devices whose physical characteristics satisfy reasonable properties." ] }
1512.07972
2951875877
Power side channel is a very important category of side channels, which can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to gain various confidential information of mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploit a power side channel on an Android smartphone, which demonstrates that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
Power side channels have also been discovered on systems other than smart cards. For example, Hlavacs et al. @cite_54 demonstrate that an energy-consumption side-channel attack can be performed between virtual machines in a cloud.
{ "cite_N": [ "@cite_54" ], "mid": [ "2014011556" ], "abstract": [ "Virtualized data centers where several virtual machines (VMs) are hosted per server are becoming more popular due to Cloud Computing. As a consequence of energy efficiency concerns, the exact combination of VMs running on a specific server will most likely change over time. We present experimental results how to use the energy power consumption logs of a power monitored server as a side-channel that allows us to recognize the exact combination of VMs it currently hosts to a high degree. For classification, we use a maximum log-likelihood approach, which works well for comparably small training and test set sizes. We also show to which degree a specific VM can be recognized, regardless of other VMs currently running on the same server, and show false negative positive rates. To cross-validate our results, we have used a Kolmogorov-Smirnov test, resulting in comparable quality of recognition within shorter time. In order to clarify whether our approach is generalizable and yields reproducible results, we have set up a second experimental infrastructure in Lyon, using a different hardware platform and power measurement device. We have obtained similar results and have experimented with different CPU frequency scaling governors, yielding comparable quality of recognition. As a result, energy consumption data of servers must be protected carefully, as it is potentially valuable information for an attacker trying to track down a VM to mount further attack steps." ] }
1512.07972
2951875877
Power side channel is a very important category of side channels, which can be exploited to steal confidential information from a computing system by analyzing its power consumption. In this paper, we demonstrate the existence of various power side channels on popular mobile devices such as smartphones. Based on unprivileged power consumption traces, we present a list of real-world attacks that can be initiated to identify running apps, infer sensitive UIs, guess password lengths, and estimate geo-locations. These attack examples demonstrate that power consumption traces can be used as a practical side channel to gain various confidential information of mobile apps running on smartphones. Based on these power side channels, we discuss possible exploitations and present a general approach to exploit a power side channel on an Android smartphone, which demonstrates that power side channels pose imminent threats to the security and privacy of mobile users. We also discuss possible countermeasures to mitigate the threats of power side channels.
On mobile platforms, Michalevsky et al. proposed PowerSpy @cite_33 , which investigates the relation between signal strength and a smartphone's power consumption pattern and shows that smartphone users' whereabouts can be inferred from power traces.
{ "cite_N": [ "@cite_33" ], "mid": [ "2950383328" ], "abstract": [ "Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone's aggregate power consumption over a period of a few minutes an application can learn information about the user's location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone's location. We discuss several ways in which this privacy leak can be remedied." ] }
1512.07730
2200878340
Suppose that we have @math sensors and each one intends to send a function @math (e.g. a signal or an image) to a receiver common to all @math sensors. During transmission, each @math gets convolved with a function @math . The receiver records the function @math , given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals @math and the blurring functions @math from just one received signal @math ? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this otherwise highly ill-posed problem and recover the @math transmitted functions @math and the impulse responses @math in a robust, reliable, and efficient manner from just one single received function @math by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually sub-optimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom.
Problems of the type or are ubiquitous in many applied scientific disciplines and applications, see, e.g., @cite_19 @cite_24 @cite_25 @cite_37 @cite_16 @cite_23 @cite_29 @cite_45 @cite_22 @cite_11 @cite_43 . Thus, there is a large body of work on solving different versions of these problems. Most of the existing works, however, require the availability of multiple received signals @math . And indeed, it is not hard to imagine that, for instance, an SVD-based approach will succeed if @math (and must fail if @math ). A sparsity-based approach can be found in @cite_8 . However, in this paper we are interested in the case where we have only one single received signal @math -- a single snapshot, in the jargon of array processing. Hence, there is little overlap between those methods, which rely heavily on multiple snapshots (and many of which do not come with any theory), and the work presented here.
{ "cite_N": [ "@cite_37", "@cite_22", "@cite_8", "@cite_29", "@cite_24", "@cite_19", "@cite_43", "@cite_45", "@cite_23", "@cite_16", "@cite_25", "@cite_11" ], "mid": [ "1986257159", "", "2804343591", "2030633827", "2159187245", "1607854404", "", "", "2163170199", "2116142355", "2011677178", "" ], "abstract": [ "Convolutive mixtures of images are common in photography of semi-reflections. They also occur in microscopy and tomography. Their formation process involves focusing on an object layer, over which defocused layers are superimposed. We seek blind source separation (BSS) of such mixtures. However, achieving this by direct optimization of mutual information is very complex and suffers from local minima. Thus, we devise an efficient approach to solve these problems. While achieving high quality image separation, we take steps that make the problem significantly simpler than a direct formulation of convolutive image mixtures. These steps make the problem practically convex, yielding a unique global solution to which convergence can be fast. The convolutive BSS problem is converted into a set of instantaneous (pointwise) problems, using a short time Fourier transform (STFT). Standard BSS solutions for instantaneous problems suffer, however, from scale and permutation ambiguities. We overcome these ambiguities by exploiting a parametric model of the defocus point spread function. Moreover, we enhance the efficiency of the approach by exploiting the sparsity of the STFT representation as a prior. We apply our algorithm to semi-reflections, and demonstrate it in experiments.", "", "", "Blind source separation is a process in which mixed signals, obtained as a linear combination of various source signals, are decomposed into their original sources. The source signals and their mixture weights are unknown, but a priori information about their statistical behavior and mixing model is available. 
In this paper, a new algorithm based on generalized cross correlation linear-operator set is proposed. This algorithm significantly improves source-separation quality compared to several other well-known algorithms, such as subband decomposition independent component analysis, block Gaussian likelihood, and convex analysis of mixtures of non-negative sources.", "The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multiple-input multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users' signature waveforms (including the desired user's signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user's dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zero-forcing detector and a minimum-mean-square-error (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. 
The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in the dispersive CDMA channels.", "This chapter reviews space-time processing methods for CDMA mobile radio applications with emphasis on blind signal detection. We begin with a motivation for the use of blind space-time processing in CDMA. Next, we develop channel and signal models useful for blind processing. We follow this by considering first space-time single user receivers (ST-RAKE) and then review some basic theory of blind ST-RAKE algorithms. The important problem of multi-user detection (MUD) is considered next which leads to a novel technique allowing the estimation of the minimum-mean-square error linear MUD.", "", "", "We consider the blind multiuser detection problem for asynchronous DS-CDMA systems operating in a multipath environment. Using only the spreading code of the desired user, we first estimate the column vector subspace of the channel matrix by multiple linear prediction. Then, zero-forcing detectors and MMSE detectors with arbitrary delay can be obtained without explicit channel estimation. This avoids any channel estimation error, and the resulting methods are therefore more robust and more accurate. Corresponding batch algorithms and adaptive algorithms are developed. The new algorithms are extremely near-far resistant. Simulations demonstrate the effectiveness of these methods.", "By combining multiple-input multiple-output (MIMO) communication with the orthogonal frequency division multiplexing (OFDM) modulation scheme, MIMO-OFDM systems can achieve high data rates over broadband wireless channels. In this paper, to provide a bandwidth-efficient solution for MIMO-OFDM channel estimation, we establish conditions for channel identifiability and present a blind channel estimation technique based on a subspace approach. 
The proposed method unifies and generalizes the existing subspace-based methods for blind channel estimation in single-input single-output OFDM systems to blind channel estimation for two different MIMO-OFDM systems distinguished according to the number of transmit and receive antennas. In particular, the proposed method obtains accurate channel estimation and fast convergence with insensitivity to overestimates of the true channel order. If virtual carriers (VCs) are available, the proposed method can work with no or insufficient cyclic prefix (CP), thereby potentially increasing channel utilization. Furthermore, it is shown under specific system conditions that the proposed method can be applied to MIMO-OFDM systems without CPs, regardless of the presence of VCs, and obtains an accurate channel estimate with a small number of OFDM symbols. Thus, this method improves the transmission bandwidth efficiency. Simulation results illustrate the mean-square error performance of the proposed method via numerical experiments", "A time domain blind source separation algorithm of convolutive sound mixtures is studied based on a compact partial inversion formula in closed form. An l1-constrained minimization problem is formulated to find demixing filter coefficients for source separation while capturing scaling invariance and sparseness of solutions. The minimization aims to reduce (lagged) cross correlations of the mixture signals which are modeled stochastically. The problem is non-convex, however it is put in a nonlinear least square form where the robust and convergent Levenberg-Marquardt iterative method is applicable to compute local minimizers. Efficiency is achieved in recovering lower dimensional demixing filter solutions than the physical ones. Computations on recorded and synthetic mixtures show satisfactory performance, and are compared with other iterative methods.", "" ] }
1512.07730
2200878340
Suppose that we have @math sensors and each one intends to send a function @math (e.g. a signal or an image) to a receiver common to all @math sensors. During transmission, each @math gets convolved with a function @math . The receiver records the function @math , given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals @math and the blurring functions @math from just one received signal @math ? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this otherwise highly ill-posed problem and recover the @math transmitted functions @math and the impulse responses @math in a robust, reliable, and efficient manner from just one single received function @math by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually sub-optimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom.
The setup in is reminiscent of a single-antenna multi-user spread spectrum communication scenario @cite_6 . There, the matrix @math represents the spreading matrix assigned to the @math -th user and @math models the associated multipath channel. There are numerous papers on blind channel estimation in connection with CDMA, including the previously cited articles @cite_19 @cite_24 @cite_23 . Our work differs from the existing literature on this topic in several ways: as mentioned before, we do not require multiple received signals, we allow all multipath channels @math to differ from each other, and we do not impose a particular channel model. Moreover, we provide a rigorous mathematical theory instead of just empirical observations.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_23", "@cite_6" ], "mid": [ "2159187245", "1607854404", "2163170199", "" ], "abstract": [ "The problem of blind demodulation of multiuser information symbols in a high-rate code-division multiple-access (CDMA) network in the presence of both multiple-access interference (MAI) and intersymbol interference (ISI) is considered. The dispersive CDMA channel is first cast into a multiple-input multiple-output (MIMO) signal model framework. By applying the theory of blind MIMO channel identification and equalization, it is then shown that under certain conditions the multiuser information symbols can be recovered without any prior knowledge of the channel or the users' signature waveforms (including the desired user's signature waveform), although the algorithmic complexity of such an approach is prohibitively high. However, in practice, the signature waveform of the user of interest is always available at the receiver. It is shown that by incorporating this knowledge, the impulse response of each user's dispersive channel can be identified using a subspace method. It is further shown that based on the identified signal subspace parameters and the channel response, two linear detectors that are capable of suppressing both MAI and ISI, i.e., a zero-forcing detector and a minimum-mean-square-error (MMSE) detector, can be constructed in closed form, at almost no extra computational cost. Data detection can then be furnished by applying these linear detectors (obtained blindly) to the received signal. The major contribution of this paper is the development of these subspace-based blind techniques for joint suppression of MAI and ISI in the dispersive CDMA channels.", "This chapter reviews space-time processing methods for CDMA mobile radio applications with emphasis on blind signal detection. We begin with a motivation for the use of blind space-time processing in CDMA. 
Next, we develop channel and signal models useful for blind processing. We follow this by considering first space-time single user receivers (ST-RAKE) and then review some basic theory of blind ST-RAKE algorithms. The important problem of multi-user detection (MUD) is considered next which leads to a novel technique allowing the estimation of the minimum-mean-square error linear MUD.", "We consider the blind multiuser detection problem for asynchronous DS-CDMA systems operating in a multipath environment. Using only the spreading code of the desired user, we first estimate the column vector subspace of the channel matrix by multiple linear prediction. Then, zero-forcing detectors and MMSE detectors with arbitrary delay can be obtained without explicit channel estimation. This avoids any channel estimation error, and the resulting methods are therefore more robust and more accurate. Corresponding batch algorithms and adaptive algorithms are developed. The new algorithms are extremely near-far resistant. Simulations demonstrate the effectiveness of these methods.", "" ] }
1512.07730
2200878340
Suppose that we have @math sensors and each one intends to send a function @math (e.g. a signal or an image) to a receiver common to all @math sensors. During transmission, each @math gets convolved with a function @math . The receiver records the function @math , given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals @math and the blurring functions @math from just one received signal @math ? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this otherwise highly ill-posed problem and recover the @math transmitted functions @math and the impulse responses @math in a robust, reliable, and efficient manner from just one single received function @math by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually sub-optimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom.
The paper @cite_2 considers the following generalization of @cite_39 (since the main result in @cite_2 relies on Lemma 4 of @cite_39 , the issues raised in Remark apply to @cite_2 as well). Assume that we are given signals @math ; the goal is to recover the @math and @math from @math . This setting is somewhat in the spirit of , but it is significantly less challenging, since (i) it assumes the same convolution function @math for each signal @math and (ii) there are as many output signals @math as input signals @math .
{ "cite_N": [ "@cite_39", "@cite_2" ], "mid": [ "2140867429", "2248294424" ], "abstract": [ "We consider the problem of recovering two unknown vectors, w and x, of length L from their circular convolution. We make the structural assumption that the two vectors are members of known subspaces, one with dimension N and the other with dimension K. Although the observed convolution is nonlinear in both w and x, it is linear in the rank-1 matrix formed by their outer product wx*. This observation allows us to recast the deconvolution problem as low-rank matrix recovery problem from linear measurements, whose natural convex relaxation is a nuclear norm minimization program. We prove the effectiveness of this relaxation by showing that, for “generic” signals, the program can deconvolve w and x exactly when the maximum of N and K is almost on the order of L. That is, we show that if x is drawn from a random subspace of dimension N, and w is a vector in a subspace of dimension K whose basis vectors are spread out in the frequency domain, then nuclear norm minimization recovers wx* without error. We discuss this result in the context of blind channel estimation in communications. If we have a message of length N, which we code using a random L x N coding matrix, and the encoded message travels through an unknown linear time-invariant channel of maximum length K, then the receiver can recover both the channel response and the message when L ≳ N + K, to within constant and log factors.", "This note considers the problem of blind identification of a linear, time-invariant (LTI) system when the input signals are unknown, but belong to sufficiently diverse, known subspaces. This problem can be recast as the recovery of a rank-1 matrix, and is effectively relaxed using a semidefinite program (SDP). 
We show that exact recovery of both the unknown impulse response, and the unknown inputs, occurs when the following conditions are met: (1) the impulse response function is spread in the Fourier domain, and (2) the N input vectors belong to generic, known subspaces of dimension K in ℝL. Recent results in the well-understood area of low-rank recovery from underdetermined linear measurements can be adapted to show that exact recovery occurs with high probablility (on the genericity of the subspaces) provided that K,L, and N obey the information-theoretic scalings, namely L ≳ K and N ≳ 1 up to log factors." ] }
1512.07730
2200878340
Suppose that we have @math sensors and each one intends to send a function @math (e.g. a signal or an image) to a receiver common to all @math sensors. During transmission, each @math gets convolved with a function @math . The receiver records the function @math , given by the sum of all these convolved signals. When and under which conditions is it possible to recover the individual signals @math and the blurring functions @math from just one received signal @math ? This challenging problem, which intertwines blind deconvolution with blind demixing, appears in a variety of applications, such as audio processing, image processing, neuroscience, spectroscopy, and astronomy. It is also expected to play a central role in connection with the future Internet-of-Things. We will prove that under reasonable and practical assumptions, it is possible to solve this otherwise highly ill-posed problem and recover the @math transmitted functions @math and the impulse responses @math in a robust, reliable, and efficient manner from just one single received function @math by solving a semidefinite program. We derive explicit bounds on the number of measurements needed for successful recovery and prove that our method is robust in the presence of noise. Our theory is actually sub-optimal, since numerical experiments demonstrate that, quite remarkably, recovery is still possible if the number of measurements is close to the number of degrees of freedom.
The current manuscript can also be seen as an extension of our work on self-calibration @cite_33 to the multi-sensor case. In this context, we also refer to the related (single-input single-output) analyses in @cite_40 @cite_17 .
{ "cite_N": [ "@cite_40", "@cite_33", "@cite_17" ], "mid": [ "2951249062", "67860792", "589200591" ], "abstract": [ "Blind deconvolution (BD), the resolution of a signal and a filter given their convolution, arises in many applications. Without further constraints, BD is ill-posed. In practice, subspace or sparsity constraints have been imposed to reduce the search space, and have shown some empirical success. However, existing theoretical analysis on uniqueness in BD is rather limited. As an effort to address the still mysterious question, we derive sufficient conditions under which two vectors can be uniquely identified from their circular convolution, subject to subspace or sparsity constraints. These sufficient conditions provide the first algebraic sample complexities for BD. We first derive a sufficient condition that applies to almost all bases or frames. For blind deconvolution of vectors in @math , with two subspace constraints of dimensions @math and @math , the required sample complexity is @math . Then we impose a sub-band structure on one basis, and derive a sufficient condition that involves a relaxed sample complexity @math , which we show to be optimal. We present the extensions of these results to BD with sparsity constraints or mixed constraints, with the sparsity level replacing the subspace dimension. The cost for the unknown support in this case is an extra factor of 2 in the sample complexity.", "The design of high-precision sensing devises becomes ever more difficult and expensive. At the same time, the need for precise calibration of these devices (ranging from tiny sensors to space telescopes) manifests itself as a major roadblock in many scientific and technological endeavors. To achieve optimal performance of advanced high-performance sensors one must carefully calibrate them, which is often difficult or even impossible to do in practice. 
In this work we bring together three seemingly unrelated concepts, namely self-calibration, compressive sensing, and biconvex optimization. The idea behind self-calibration is to equip a hardware device with a smart algorithm that can compensate automatically for the lack of calibration. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations where both and the diagonal matrix (which models the calibration error) are unknown. By 'lifting' this biconvex inverse problem we arrive at a convex optimization problem. By exploiting sparsity in the signal model, we derive explicit theoretical guarantees under which both and can be recovered exactly, robustly, and numerically efficiently via linear programming. Applications in array calibration and wireless communications are discussed and numerical simulations are presented, confirming and complementing our theoretical analysis.", "Neural recordings, returns from radars and sonars, images in astronomy and single-molecule microscopy can be modeled as a linear superposition of a small number of scaled and delayed copies of a band-limited or diffraction-limited point spread function, which is either determined by the nature or designed by the users; in other words, we observe the convolution between a point spread function and a sparse spike signal with unknown amplitudes and delays. While it is of great interest to accurately resolve the spike signal from as few samples as possible, however, when the point spread function is not known a priori, this problem is terribly ill-posed. This paper proposes a convex optimization framework to simultaneously estimate the point spread function as well as the spike signal, by mildly constraining the point spread function to lie in a known low-dimensional subspace. 
By applying the lifting trick, we obtain an underdetermined linear system of an ensemble of signals with joint spectral sparsity, to which atomic norm minimization is applied. Under mild randomness assumptions of the low-dimensional subspace as well as a separation condition of the spike signal, we prove the proposed algorithm, dubbed as AtomicLift, is guaranteed to recover the spike signal up to a scaling factor as soon as the number of samples is large enough. The extension of AtomicLift to handle noisy measurements is also discussed. Numerical examples are provided to validate the effectiveness of the proposed approaches." ] }
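The "lifting" trick described in the self-calibration abstract above can be illustrated with a minimal numerical sketch. This is a generic illustration under assumed data (random matrices, made-up dimensions), not the cited paper's actual construction: a biconvex model y = diag(d) A x becomes linear in the rank-one matrix X = d x^T.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 5
A = rng.standard_normal((m, n))   # known sensing matrix
d = rng.standard_normal(m)        # unknown calibration gains (diagonal of D)
x = rng.standard_normal(n)        # unknown signal

# Bilinear (biconvex) forward model: y = diag(d) @ A @ x
y = np.diag(d) @ A @ x

# Lifting: with X = d x^T, each measurement becomes LINEAR in X, since
# y_i = d_i * (A x)_i = sum_j A[i, j] * X[i, j]
X = np.outer(d, x)
y_lifted = np.einsum('ij,ij->i', A, X)

assert np.allclose(y, y_lifted)
```

Recovering the rank-one X from such linear measurements (and then factoring it back into d and x) is what makes the lifted problem amenable to convex relaxation.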
1512.07734
2210714229
Recently, several large-scale RDF knowledge bases have been built and applied in many knowledge-based applications. To further increase the number of facts in RDF knowledge bases, logic rules can be used to predict new facts based on the existing ones. Therefore, how to automatically learn reliable rules from large-scale knowledge bases becomes increasingly important. In this paper, we propose a novel rule learning approach named RDF2Rules for RDF knowledge bases. RDF2Rules first mines frequent predicate cycles (FPCs), a kind of interesting frequent patterns in knowledge bases, and then generates rules from the mined FPCs. Because each FPC can produce multiple rules, and an effective pruning strategy is used in the process of mining FPCs, RDF2Rules works very efficiently. Another advantage of RDF2Rules is that it uses the entity type information when generating and evaluating rules, which makes the learned rules more accurate. Experiments show that our approach outperforms the compared approach in terms of both efficiency and accuracy.
There are also some works that use structures similar to predicate paths to predict new facts in knowledge bases, such as @cite_15 @cite_5 . However, these works focus on how to accurately predict relations based on multiple predicate paths; how to effectively discover useful predicate paths is not discussed. Our work focuses on how to learn frequent predicate paths and use them to generate rules; the paths discovered by our approach can be used as input for the above two approaches. Most recently, some works have tried to combine logic rules and knowledge embedding to predict new facts, such as @cite_10 @cite_0 . These works also do not focus on how to learn rules, but on how to use rules to make accurate predictions, so rules learned by our approach can also be used in these approaches.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2952854166", "2295128594", "1756422141", "2274308990" ], "abstract": [ "Representation learning of knowledge bases (KBs) aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text.", "The Heterogeneous Information Network (HIN) is a graph data model in which nodes and edges are annotated with class and relationship labels. Large and complex datasets, such as Yago or DBLP, can be modeled as HINs. Recent work has studied how to make use of these rich information sources. In particular, meta-paths, which represent sequences of node classes and edge types between two nodes in a HIN, have been proposed for such tasks as information retrieval, decision making, and product recommendation. Current methods assume meta-paths are found by domain experts. However, in a large and complex HIN, retrieving meta-paths manually can be tedious and difficult. We thus study how to discover meta-paths automatically. Specifically, users are asked to provide example pairs of nodes that exhibit high proximity. 
We then investigate how to generate meta-paths that can best explain the relationship between these node pairs. Since this problem is computationally intractable, we propose a greedy algorithm to select the most relevant meta-paths. We also present a data structure to enable efficient execution of this algorithm. We further incorporate hierarchical relationships among node classes in our solutions. Extensive experiments on real-world HIN show that our approach captures important meta-paths in an efficient and scalable manner.", "We consider the problem of performing learning and inference in a large scale knowledge base containing imperfect knowledge with incomplete coverage. We show that a soft inference procedure based on a combination of constrained, weighted, random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base. More specifically, we show that the system can learn to infer different target relations by tuning the weights associated with random walks that follow different paths through the graph, using a version of the Path Ranking Algorithm (Lao and Cohen, 2010b). We apply this approach to a knowledge base of approximately 500,000 beliefs extracted imperfectly from the web by NELL, a never-ending language learner (, 2010). This new system improves significantly over NELL's earlier Horn-clause learning and inference method: it obtains nearly double the precision at rank 100, and the new learning method is also applicable to many more inference tasks.", "Knowledge bases (KBs) are often greatly incomplete, necessitating a demand for KB completion. A promising approach is to embed KBs into latent spaces and make inferences by learning and operating on latent representations. Such embedding models, however, do not make use of any rules during inference and hence have limited accuracy. This paper proposes a novel approach which incorporates rules seamlessly into embedding models for KB completion. 
It formulates inference as an integer linear programming (ILP) problem, with the objective function generated from embedding models and the constraints translated from rules. Solving the ILP problem results in a number of facts which 1) are the most preferred by the embedding models, and 2) comply with all the rules. By incorporating rules, our approach can greatly reduce the solution space and significantly improve the inference accuracy of embedding models. We further provide a slacking technique to handle noise in KBs, by explicitly modeling the noise with slack variables. Experimental results on two publicly available data sets show that our approach significantly and consistently outperforms state-of-the-art embedding models in KB completion. Moreover, the slacking technique is effective in identifying erroneous facts and ambiguous entities, with a precision higher than 90 ." ] }
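As a rough illustration of the path-based idea discussed in this record: one can enumerate bounded-length predicate paths between entity pairs and count how often each path co-occurs with a target relation, turning frequent paths into candidate rule bodies. This is a generic sketch, not RDF2Rules' actual FPC-mining algorithm; the toy triples and helper names are invented for the example.

```python
from collections import defaultdict

# Toy knowledge base as (subject, predicate, object) triples.
triples = [
    ("alice", "bornIn", "paris"),
    ("paris", "capitalOf", "france"),
    ("alice", "citizenOf", "france"),
    ("bob", "bornIn", "berlin"),
    ("berlin", "capitalOf", "germany"),
    ("bob", "citizenOf", "germany"),
]

out_edges = defaultdict(list)
for s, p, o in triples:
    out_edges[s].append((p, o))

def predicate_paths(start, end, max_len=2):
    """Enumerate predicate sequences connecting start to end (DFS, bounded length)."""
    paths, stack = [], [(start, ())]
    while stack:
        node, path = stack.pop()
        if node == end and path:
            paths.append(path)
        if len(path) < max_len:
            for p, o in out_edges[node]:
                stack.append((o, path + (p,)))
    return paths

# Count how often each predicate path co-occurs with the target relation;
# frequent paths become candidate rule bodies, e.g.
#   bornIn(x, y) AND capitalOf(y, z)  =>  citizenOf(x, z)
support = defaultdict(int)
for s, p, o in triples:
    if p == "citizenOf":
        for path in predicate_paths(s, o):
            if path != ("citizenOf",):
                support[path] += 1

assert support[("bornIn", "capitalOf")] == 2
```

Real systems additionally prune the search, exploit entity types, and score rules by confidence rather than raw support.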
1512.07612
2217915259
Since its introduction by Hastings (Phys Rev B 69:104431, 2004), the technique of quasi-adiabatic continuation has become a central tool in the discussion and classification of ground-state phases. It connects the ground states of self-adjoint Hamiltonians in the same phase by a unitary quasi-local transformation. This paper takes a step towards extending this result to non-self-adjoint perturbations, though, for technical reasons, we restrict ourselves here to weak perturbations of non-interacting spins. The extension to non-self-adjoint perturbations is important for potential applications to Glauber dynamics (and its quantum analogues). In contrast to the standard quasi-adiabatic transformation, the transformation constructed here is exponentially local. Our scheme is inspired by KAM theory, with frustration-free operators playing the role of integrable Hamiltonians.
In @cite_20 @cite_32 the authors prove that the ground state gap for a class of frustration-free Hamiltonians is stable under arbitrary perturbations in the interaction. They use Hastings' spectral flow technique (or quasi-adiabatic continuation) to map the perturbed Hamiltonian by a similarity transformation to a Hamiltonian that is frustration-free (called 'locally block diagonal' in @cite_20 ) with respect to the unperturbed ground state, for which a gap can therefore be proved more easily. In our work we show that perturbations of classical Hamiltonians, which however need not be self-adjoint, are similar to frustration-free ones. Moreover, if the perturbation is exponentially quasi-local, so is the new Hamiltonian. This is a new result that cannot be obtained through the sub-exponentially quasi-local spectral flow.
{ "cite_N": [ "@cite_32", "@cite_20" ], "mid": [ "2135902788", "2003341284" ], "abstract": [ "We prove stability of the spectral gap for gapped, frustration-free Hamiltonians under general, quasi-local perturbations. We present a necessary and sufficient condition for stability, which we call Local Topological Quantum Order and show that this condition implies an area law for the entanglement entropy of the groundstate subspace. This result extends previous work by on the stability of topological quantum order for Hamiltonians composed of commuting projections with a common zero-energy subspace. We conclude with a list of open problems relevant to spectral gaps and topological quantum order.", "Recently, the stability of certain topological phases of matter under weak perturbations was proven. Here, we present a short, alternate proof of the same result. We consider models of topological quantum order for which the unperturbed Hamiltonian H 0 can be written as a sum of local pairwise commuting projectors on a D-dimensional lattice. We consider a perturbed Hamiltonian H = H 0 + V involving a generic perturbation V that can be written as a sum of short-range bounded-norm interactions. We prove that if the strength of V is below a constant threshold value then H has well-defined spectral bands originating from the low-lying eigenvalues of H 0. These bands are separated from the rest of the spectrum and from each other by a constant gap. The width of the band originating from the smallest eigenvalue of H 0 decays faster than any power of the lattice size." ] }
1512.07612
2217915259
Since its introduction by Hastings (Phys Rev B 69:104431, 2004), the technique of quasi-adiabatic continuation has become a central tool in the discussion and classification of ground-state phases. It connects the ground states of self-adjoint Hamiltonians in the same phase by a unitary quasi-local transformation. This paper takes a step towards extending this result to non-self-adjoint perturbations, though, for technical reasons, we restrict ourselves here to weak perturbations of non-interacting spins. The extension to non-self-adjoint perturbations is important for potential applications to Glauber dynamics (and its quantum analogues). In contrast to the standard quasi-adiabatic transformation, the transformation constructed here is exponentially local. Our scheme is inspired by KAM theory, with frustration-free operators playing the role of integrable Hamiltonians.
Frustration-freeness has been a helpful property in many studies of gapped systems; see e.g. @cite_22 @cite_17 on lower bounds for ground state gaps. It is thus good news that frustration-free systems appear to be rather generic. In @cite_27 , Hastings showed that every gapped local Hamiltonian can be rewritten as an approximately frustration-free Hamiltonian upon increasing the range of the interaction (the error vanishes in the limit of infinite range). As another example, matrix product states in one-dimensional spin chains always possess a frustration-free parent Hamiltonian @cite_11 . A close connection between such parent Hamiltonians and perturbations of classical systems was worked out in @cite_2 .
{ "cite_N": [ "@cite_22", "@cite_17", "@cite_27", "@cite_2", "@cite_11" ], "mid": [ "2092385243", "", "1969631049", "1983705091", "2074426935" ], "abstract": [ "We prove that for any finite set of generalized valence bond solid (GVBS) states of a quantum spin chain there exists a translation invariant finite-range Hamiltonian for which this set is the set of ground states. This result implies that there are GVBS models with arbitrary broken discrete symmetries that are described as combinations of lattice translations, lattice reflections, and local unitary or anti-unitary transformations. We also show that all GVBS models that satisfy some natural conditions have a spectral gap. The existence of a spectral gap is obtained by applying a simple and quite general strategy for proving lower bounds on the spectral gap of the generator of a classical or quantum spin dynamics. This general scheme is interesting in its own right and threfore, although the basic idea is not new, we present it in a system-independent setting. The results are illustrated with a number of examples.", "", "We show that any short-range Hamiltonian with a gap between the ground and excited states can be written as a sum of local operators, such that the ground state is an approximate eigenvector of each operator separately. We then show that the ground state of any such Hamiltonian is close to a generalized matrix product state. The range of the given operators needed to obtain a good approximation to the ground state is proportional to the square of the logarithm of the system size times a characteristic factorization length.'' Applications to many-body quantum simulation are discussed. We also consider density matrices of systems at non zero temperature.", "This article investigates the stability of the ground state subspace of a canonical parent Hamiltonian of a Matrix product state against local perturbations. 
We prove that the spectral gap of such a Hamiltonian remains stable under weak local perturbations even in the thermodynamic limit, where the entire perturbation might not be bounded. Our discussion is based on preceding work by Yarotsky that develops a perturbation theory for relatively bounded quantum perturbations of classical Hamiltonians. We exploit a renormalization procedure, which on large scale transforms the parent Hamiltonian of a Matrix product state into a classical Hamiltonian plus some perturbation. We can thus extend Yarotsky’s results to provide a perturbation theory for parent Hamiltonians of Matrix product states and recover some of the findings of the independent contributions ( in Phys Rev B 8(11):115108, 2013) and (Michalakis and Pytel in Comm Math Phys 322(2):277–302, 2013).", "We study a construction that yields a class of translation invariant states on quantum spin chains, characterized by the property that the correlations across any bond can be modeled on a finite-dimensional vector space. These states can be considered as generalized valence bond states, and they are dense in the set of all translation invariant states. We develop a complete theory of the ergodic decomposition of such states, including the decomposition into periodic “Neel ordered” states. The ergodic components have exponential decay of correlations. All states considered can be obtained as “local functions” of states of a special kind, so-called “purely generated states,” which are shown to be ground states for suitably chosen finite range VBS interactions. We show that all these generalized VBS models have a spectral gap. Our theory does not require symmetry of the state with respect to a local gauge group. In particular we illustrate our results with a one-parameter family of examples which are not isotropic except for one special case. This isotropic model coincides with the one-dimensional antiferromagnet, recently studied by Affleck, Kennedy, Lieb, and Tasaki." 
] }
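The notion of frustration-freeness used throughout this record can be checked numerically in a minimal example: a two-spin "classical" Hamiltonian built from commuting local projectors, whose global ground state minimises every local term simultaneously. The construction below (single-spin excitation projectors, numpy eigendecomposition) is purely illustrative.

```python
import numpy as np

# Single-spin projector onto the "excited" state |1><1|. A non-interacting
# Hamiltonian H = sum_i P_i is frustration-free: the product state |00>
# minimises every local term at once, not just their sum.
P = np.array([[0.0, 0.0], [0.0, 1.0]])
I2 = np.eye(2)

h1 = np.kron(P, I2)    # term acting on spin 1
h2 = np.kron(I2, P)    # term acting on spin 2
H = h1 + h2

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]   # eigh returns eigenvalues in ascending order

assert np.isclose(evals[0], 0.0)              # global ground energy is zero...
assert np.isclose(ground @ h1 @ ground, 0.0)  # ...and each local term is
assert np.isclose(ground @ h2 @ ground, 0.0)  # minimised individually
```

For a frustrated Hamiltonian, by contrast, the ground energy exceeds the sum of the minima of the individual terms.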
1512.07612
2217915259
Since its introduction by Hastings (Phys Rev B 69:104431, 2004), the technique of quasi-adiabatic continuation has become a central tool in the discussion and classification of ground-state phases. It connects the ground states of self-adjoint Hamiltonians in the same phase by a unitary quasi-local transformation. This paper takes a step towards extending this result to non-self-adjoint perturbations, though, for technical reasons, we restrict ourselves here to weak perturbations of non-interacting spins. The extension to non-self-adjoint perturbations is important for potential applications to Glauber dynamics (and its quantum analogues). In contrast to the standard quasi-adiabatic transformation, the transformation constructed here is exponentially local. Our scheme is inspired by KAM theory, with frustration-free operators playing the role of integrable Hamiltonians.
Besides the restriction to frustration-free systems the result @cite_20 @cite_32 rests on assumptions concerning the presence of a local gap and topological order in the unperturbed ground state subspace, which are trivially satisfied in our setting of independent spins and unique ground state.
{ "cite_N": [ "@cite_32", "@cite_20" ], "mid": [ "2135902788", "2003341284" ], "abstract": [ "We prove stability of the spectral gap for gapped, frustration-free Hamiltonians under general, quasi-local perturbations. We present a necessary and sufficient condition for stability, which we call Local Topological Quantum Order and show that this condition implies an area law for the entanglement entropy of the groundstate subspace. This result extends previous work by on the stability of topological quantum order for Hamiltonians composed of commuting projections with a common zero-energy subspace. We conclude with a list of open problems relevant to spectral gaps and topological quantum order.", "Recently, the stability of certain topological phases of matter under weak perturbations was proven. Here, we present a short, alternate proof of the same result. We consider models of topological quantum order for which the unperturbed Hamiltonian H 0 can be written as a sum of local pairwise commuting projectors on a D-dimensional lattice. We consider a perturbed Hamiltonian H = H 0 + V involving a generic perturbation V that can be written as a sum of short-range bounded-norm interactions. We prove that if the strength of V is below a constant threshold value then H has well-defined spectral bands originating from the low-lying eigenvalues of H 0. These bands are separated from the rest of the spectrum and from each other by a constant gap. The width of the band originating from the smallest eigenvalue of H 0 decays faster than any power of the lattice size." ] }
1512.07155
2950074681
Recently, attempts have been made to collect millions of videos to train CNN models for action recognition in videos. However, curating such large-scale video datasets requires immense human labor, and training CNNs on millions of videos demands huge computational resources. In contrast, collecting action images from the Web is much easier and training on images requires much less computation. In addition, labeled web images tend to contain discriminative action poses, which highlight discriminative portions of a video's temporal progression. We explore the question of whether we can utilize web action images to train better CNN models for action recognition in videos. We collect 23.8K manually filtered images from the Web that depict the 101 actions in the UCF101 action video dataset. We show that by utilizing web action images along with videos in training, significant performance boosts of CNN models can be achieved. We then investigate the scalability of the process by leveraging crawled web images (unfiltered) for UCF101 and ActivityNet. We replace 16.2M video frames by 393K unfiltered images and get comparable performance.
Action recognition is an important research topic for which a large number of methods have been proposed @cite_25 . Among these, bag-of-words approaches that employ expertly-designed local space-time features have been widely used due to their promising performance on realistic videos, including web videos and movies. Some representative works include space-time interest points @cite_17 and dense trajectories @cite_28 . Advanced feature encoding methods, such as Fisher vector encoding @cite_15 , can be used to further improve the performance of such methods @cite_36 . Besides bag-of-words approaches, other works make an effort to explicitly model the space-time structures of human actions @cite_32 @cite_16 @cite_19 by using, for example, HCRFs and MRFs.
{ "cite_N": [ "@cite_28", "@cite_36", "@cite_32", "@cite_19", "@cite_15", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "2105101328", "", "2025508903", "", "1606858007", "", "2098339052", "2142194269" ], "abstract": [ "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "", "In this paper, we develop a new model for recognizing human actions. An action is modeled as a very sparse sequence of temporally local discriminative key frames - collections of partial key-poses of the actor(s), depicting key states in the action sequence. We cast the learning of key frames in a max-margin discriminative framework, where we treat key frames as latent variables. This allows us to (jointly) learn a set of most discriminative key frames while also learning the local temporal context between them. 
Key frames are encoded using a spatially-localizable poselet-like representation with HoG and BoW components learned from weak annotations; we rely on a structured SVM formulation to align our components and mine for hard negatives to boost localization performance. This results in a model that supports spatio-temporal localization and is insensitive to dropped frames or partial observations. We show classification performance that is competitive with the state of the art on the benchmark UT-Interaction dataset and illustrate that our model outperforms prior methods in an on-line streaming setting.", "", "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. 
In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.", "", "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human-computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. 
Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results." ] }
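The bag-of-words video representation discussed in this record can be sketched in a few lines: local descriptors are assigned to their nearest "visual word" in a codebook, and the normalised histogram of assignments represents the video. Random data stands in for real HOG/HOF descriptors, and the codebook would normally be learned by k-means on training descriptors; all names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "local space-time descriptors" for one video (placeholders for HOG/HOF).
descriptors = rng.standard_normal((200, 16))

# Codebook of k visual words (in practice: k-means centroids from training data).
k = 8
codebook = rng.standard_normal((k, 16))

# Bag-of-words encoding: nearest-word assignment, then a normalised histogram.
dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
assignments = dists.argmin(axis=1)
bow = np.bincount(assignments, minlength=k).astype(float)
bow /= bow.sum()   # L1-normalise so videos of different lengths are comparable

assert bow.shape == (k,)
```

Fisher vector encoding replaces the hard histogram with gradients of a Gaussian mixture's log-likelihood, retaining first- and second-order statistics per word.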
1512.07449
2220848139
We propose exact solution approaches for a lateral transhipment problem which, given a pre-specified sequence of customers, seeks an optimal inventory redistribution plan considering travel costs and profits dependent on inventory levels. Trip-duration and vehicle-capacity constraints are also imposed. The same problem arises in some lot sizing applications, in the presence of setup costs and equipment re-qualifications. We introduce a pure dynamic programming approach and a branch-and-bound framework that combines dynamic programming with Lagrangian relaxation. Computational experiments are conducted to determine the most suitable solution approach for different instances, depending on their size, vehicle capacities and duration constraints. The branch-and-bound approach, in particular, solves problems with up to 50 delivery locations in less than ten seconds on a modern computer.
Optimization problems with combined inventory and routing decisions arise in a wide variety of contexts. In inventory routing problems @cite_19 , for example, inventory and routing costs are minimized on a planning horizon. Each route occurs on a specific time period, originates from a central depot and visits some customers to replenish their inventories. The adequate selection of a subset of customers for each period, as in the team orienteering and prize-collecting problems @cite_9 @cite_21 , is thus an essential problem feature.
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_21" ], "mid": [ "", "1963938442", "1971759877" ], "abstract": [ "", "In the team orienteering problem, start and end points are specified along with other locations which have associated scores. Given a fixed amount of time for each of the M members of the team, the goal is to determine M paths from the start point to the end point through a subset of locations in order to maximize the total score. In this paper, a fast and effective heuristic is presented and tested on 353 problems ranging in size from 21 to 102 points. The computational results are presented in detail.", "We consider several vehicle routing problems (VRP) with profits, which seek to select a subset of customers, each one being associated with a profit, and to design service itineraries. When the sum of profits is maximized under distance constraints, the problem is usually called the team orienteering problem. The capacitated profitable tour problem seeks to maximize profits minus travel costs under capacity constraints. Finally, in the VRP with a private fleet and common carrier, some customers can be delegated to an external carrier subject to a cost. Three families of combined decisions must be taken: customer’s selection, assignment to vehicles, and sequencing of deliveries for each route.We propose a new neighborhood search for these problems, which explores an exponential number of solutions in pseudo-polynomial time. The search is conducted with standard VRP neighborhoods on an exhaustive solution representation, visiting all customers. Since visiting all customers is usually infeasible or suboptimal, an efficient select algorithm, based on resource constrained shortest paths, is repeatedly used on any new route to find the optimal subsequence of visits to customers. 
The good performance of these neighborhood structures is demonstrated by extensive computational experiments with a local search, an iterated local search, and a hybrid genetic algorithm. Intriguingly, even a local-improvement method to the first local optimum of this neighborhood achieves an average gap of 0.09 on classic team orienteering benchmark instances, rivaling with the current state-of-the-art metaheuristics. Promising research avenues on hybridizations with more standard routing neighborhoods are also open." ] }
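The selection-over-a-fixed-sequence structure described in this record admits a compact dynamic program. The sketch below is a deliberately simplified knapsack-style version: customers are visited in a fixed order, each served customer yields a profit and consumes vehicle capacity, and we choose which to serve. It ignores inventory levels, travel costs, and trip duration, which the cited approaches also handle; data and names are illustrative.

```python
def best_plan(profits, demands, capacity):
    """Max total profit over a fixed visit sequence under a capacity budget."""
    # dp[q] = best profit achievable using at most q units of capacity
    dp = [0.0] * (capacity + 1)
    for profit, demand in zip(profits, demands):
        # Iterate capacity in reverse so each customer is served at most once.
        for q in range(capacity, demand - 1, -1):
            dp[q] = max(dp[q], dp[q - demand] + profit)
    return dp[capacity]

# Three customers in sequence; with capacity 8 the best choice is the
# second and third (profit 7 + 12 = 19, demand 3 + 5 = 8).
assert best_plan([10, 7, 12], [4, 3, 5], 8) == 19
```

Adding per-arc travel costs or a duration budget enlarges the DP state but keeps the same recursion pattern, which is the basis for combining it with Lagrangian relaxation in a branch-and-bound.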
1512.07449
2220848139
We propose exact solution approaches for a lateral transhipment problem which, given a pre-specified sequence of customers, seeks an optimal inventory redistribution plan considering travel costs and profits dependent on inventory levels. Trip-duration and vehicle-capacity constraints are also imposed. The same problem arises in some lot sizing applications, in the presence of setup costs and equipment re-qualifications. We introduce a pure dynamic programming approach and a branch-and-bound framework that combines dynamic programming with Lagrangian relaxation. Computational experiments are conducted to determine the most suitable solution approach for different instances, depending on their size, vehicle capacities and duration constraints. The branch-and-bound approach, in particular, solves problems with up to 50 delivery locations in less than ten seconds on a modern computer.
Other related problems have been defined on a single planning period, such as the TSP with pickups and deliveries @cite_4 , the balancing problems for static bike sharing systems @cite_10 @cite_3 , and the lateral transhipment problem for a single route (SRLTP) @cite_1 @cite_11 . This latter problem aims to redistribute inventory on a network @math via pickups and deliveries using one vehicle, so as to minimize a non-linear objective. In bike sharing systems @cite_2 , a target level is defined for each station and the objective is to minimize the corresponding total deviation (a piecewise-linear convex function). Some MIP formulations of this problem are introduced in @cite_23 . The objective includes expected shortage costs and travel costs, similar to the TSP with profits @cite_29 . Dynamic route interactions, such as hand-overs (intermediate storage) and multiple visits, are also considered.
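The deviation objective just mentioned is simple enough to state in code. Below is a minimal Python sketch, assuming per-station integer inventories and target levels; the station data, function names, and the single pickup-and-delivery move are illustrative assumptions, not taken from the cited papers.

```python
def total_deviation(inventory, target):
    """Piecewise-linear convex objective used in bike sharing rebalancing:
    total absolute deviation of per-station inventory from target levels."""
    return sum(abs(inv - tgt) for inv, tgt in zip(inventory, target))

def apply_move(inventory, pickup_station, delivery_station, qty):
    """One rebalancing step: move `qty` bikes between two stations."""
    new = list(inventory)
    new[pickup_station] -= qty
    new[delivery_station] += qty
    return new
```

For example, with inventories [10, 2, 6] and targets [6, 6, 6], moving 4 bikes from station 0 to station 1 drives the total deviation from 8 down to 0.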
{ "cite_N": [ "@cite_4", "@cite_29", "@cite_1", "@cite_3", "@cite_23", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2031736545", "1991149726", "", "1921393379", "2032251915", "", "2145656012", "2176306573" ], "abstract": [ "We study a generalization of the well-known traveling salesman problem (TSP) where each customer provides or requires a given non-zero amount of product, and the vehicle in a depot has a given capacity. Each customer and the depot must be visited exactly once by the vehicle supplying the demand while minimizing the total travel distance. We assume that the product collected from pickup customers can be delivered to delivery customers. We introduce a 0-1 integer linear model for this problem and describe a branch-and-cut procedure for finding an optimal solution. The model and the algorithm are adapted to solve instances of TSP with pickup and delivery. Some computational results are presented to analyze the performance of our proposal.", "Traveling salesman problems with profits (TSPs with profits) are a generalization of the traveling salesman problem (TSP), where it is not necessary to visit all vertices. A profit is associated with each vertex. The overall goal is the simultaneous optimization of the collected profit and the travel costs. These two optimization criteria appear either in the objective function or as a constraint. In this paper, a classification of TSPs with profits is proposed, and the existing literature is surveyed. Different classes of applications, modeling approaches, and exact or heuristic solution techniques are identified and compared. Conclusions emphasize the interest of this class of problems, with respect to applications as well as theoretical results.", "", "We consider the necessary redistribution of bicycles in public bicycle sharing systems in order to avoid rental stations to run empty or entirely full. 
For this purpose we propose a general Variable Neighborhood Search (VNS) with an embedded Variable Neighborhood Descent (VND) that exploits a series of neighborhood structures. While this metaheuristic generates candidate routes for vehicles to visit unbalanced rental stations, the numbers of bikes to be loaded or unloaded at each stop are efficiently derived by one of three alternative methods based on a greedy heuristic, a maximum flow calculation, and linear programming, respectively. Tests are performed on instances derived from real-world data and indicate that the VNS based on a greedy heuristic represents the best compromise for practice. In general the VNS yields good solutions and scales much better to larger instances than two mixed integer programming approaches.", "Bike-sharing systems allow people to rent a bicycle at one of many automatic rental stations scattered around the city, use them for a short journey and return them at any station in the city. A crucial factor for the success of a bike-sharing system is its ability to meet the fluctuating demand for bicycles and for vacant lockers at each station. This is achieved by means of a repositioning operation, which consists of removing bicycles from some stations and transferring them to other stations, using a dedicated fleet of trucks. Operating such a fleet in a large bike-sharing system is an intricate problem consisting of decisions regarding the routes that the vehicles should follow and the number of bicycles that should be removed or placed at each station on each visit of the vehicles. In this paper, we present our modeling approach to the problem that generalizes existing routing models in the literature. This is done by introducing a unique convex objective function as well as time-related considerations. 
We present two mixed integer linear program formulations, discuss the assumptions associated with each, strengthen them by several valid inequalities and dominance rules, and compare their performances through an extensive numerical study. The results indicate that one of the formulations is very effective in obtaining high quality solutions to real life instances of the problem consisting of up to 104 stations and two vehicles. Finally, we draw insights on the characteristics of good solutions.", "", "Bike-sharing is a new form of sustainable urban public mobility. A common issue observed in bike-sharing systems is imbalances in the distribution of bikes. There are two logistical measures alleviating imbalances: strategic network design and operational repositioning of bikes. IT-systems record data from Bike Sharing Systems (BSS) operation that are suitable for supporting these logistical tasks. A case study shows how Data Mining applied to operational data offers insight into typical usage patterns of bike-sharing systems and is used to forecast bike demand with the aim of supporting and improving strategic and operational planning.", "Previous research has analyzed deterministic and stochastic models of lateral transhipments between different retailers in a supply chain. In these models the analysis assumes given fixed transhipment costs and determines under which situations (magnitudes of excess supply and demand at various retailers) the transhipment is profitable. However, in reality, these depend on aspects like the distance between retailers or the transportation mode chosen. In many situations, combining the transhipments may save transportation costs. For instance, one or more vehicle routes may be used to redistribute the inventory of the potential pickup and delivery stations. This can be done in any sequence as long as the vehicle capacity is not violated and there is enough load on the vehicle to satisfy demand. 
The corresponding problem is an extension of the one-commodity pickup and delivery traveling salesman and the pickup and delivery vehicle routing problem. When ignoring the routing aspect and assuming given fixed costs, transhipment is only profitable if the quantities are higher than a certain threshold. In contrast to that, the selection of visited retailers is dependent on the transportation costs of the tour and therefore the selected retailers are interrelated. Hence the problem also has aspects of a (team) orienteering problem. The main contribution is the discussion of the tour planning aspects for lateral transhipments which may be valuable for in-house planning but also for price negotiations with external contractors. A mixed integer linear program for the single route and single commodity version is presented and an improved LNS framework to heuristically solve the problem is introduced. Furthermore, the effect of very small load capacity on the structure of optimal solutions is discussed." ] }
1512.07449
2220848139
We propose exact solution approaches for a lateral transhipment problem which, given a pre-specified sequence of customers, seeks an optimal inventory redistribution plan considering travel costs and profits dependent on inventory levels. Trip-duration and vehicle-capacity constraints are also imposed. The same problem arises in some lot sizing applications, in the presence of setup costs and equipment re-qualifications. We introduce a pure dynamic programming approach and a branch-and-bound framework that combines dynamic programming with Lagrangian relaxation. Computational experiments are conducted to determine the most suitable solution approach for different instances, depending on their size, vehicle capacities and duration constraints. The branch-and-bound approach, in particular, solves problems with up to 50 delivery locations in less than ten seconds on a modern computer.
The problem is also very relevant on its own, as a case of routing optimization with a-priori routes @cite_8 . In practical routing applications, retaining some fixed route fragments can lead to better operational and computational tractability for companies, as well as efficiency gains through driver learning. The corresponding subproblem is called the evaluation problem for a-priori routes, and efficient solution methods are needed to react quickly to changing environments. We also show that the same model encompasses several lot sizing applications with re-qualification costs.
{ "cite_N": [ "@cite_8" ], "mid": [ "1998707332" ], "abstract": [ "Abstract In 1985, Jaillet introduced the probabilistic traveling salesman problem (PTSP), a variant of the classical TSP in which only a subset of the nodes may be present in any given instance of the problem. The goal is to find an a priori tour of minimal expected length, with the strategy of visiting the present nodes in a particular instance in the same order as they appear in the a priori tour. In this paper we reexamine the PTSP using a variety of theoretical and computational approaches. We sharpen the best known bounds for the PTSP, derive several asymptotic relations, and compare from various viewpoints the PTSP with the re-optimization strategy, i.e., finding an optimal tour in every problem instance. When a Euclidean metric is used and the nodes are uniformly distributed in the unit square, a heuristic for the PTSP is shown to be very close to the re-optimization strategy. We examine some PTSP heuristics with provable worst-case performance, and address the question of finding constant-guarantee heuristics. Implementations of various heuristics, some based on sorting and some on local optimality, permit us to discuss the qualitative and quantitative properties of computational problems with up to 5000 nodes." ] }
1512.06925
2248211510
The success of product quantization (PQ) for fast nearest neighbor search depends on the exponentially reduced complexities of both storage and computation with respect to the codebook size. Recent efforts have been focused on employing sophisticated optimization strategies, or seeking more effective models. Residual quantization (RQ) is such an alternative that holds the same property as PQ in terms of the aforementioned complexities. In addition to being a direct replacement of PQ, hybrids of PQ and RQ can yield more gains for approximate nearest neighbor search. This motivated us to propose a novel approach to optimizing RQ and the related hybrid models. With an observation of the general randomness increase in a residual space, we propose a new strategy that jointly learns a local transformation per residual cluster with an ultimate goal to reduce overall quantization errors. We have shown that our approach can achieve significantly better accuracy on nearest neighbor search than both the original and the optimized PQ on several very large scale benchmarks.
Product quantization works by splitting the feature dimensions into groups and performing quantization on each group separately. In particular, it performs a @math -means clustering on each group to obtain sub-codebooks, and the global quantization codebook is generated as the Cartesian product of all the small sub-codebooks. In this way, it can generate a huge number of landmark points in the space, which guarantees low quantization error; it has achieved state-of-the-art performance on approximate nearest neighbor search @cite_2 , and also provides a compact representation of the vectors. Inspired by the success of PQ, some recent works have extended PQ to a more general model by finding an optimized space decomposition that minimizes its overall distortion @cite_27 @cite_18 . A very recent work @cite_13 deploys this optimized PQ within residual clusters. While this maximizes the strength of locality, it also uses extra space for the multiple transformations as well as the PQ codebooks.
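The PQ pipeline described above (per-group k-means sub-codebooks whose Cartesian product forms the global codebook) can be sketched in a few dozen lines of pure Python. This is a toy illustration, not any of the cited implementations: the tiny Lloyd's-iteration k-means, the parameter defaults, and the function names are all our own assumptions.

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def kmeans(points, k, iters=20, seed=0):
    """Tiny Lloyd's k-means over a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        # recompute each center as its cluster mean; keep old center if empty
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def pq_train(data, m, k):
    """One k-word sub-codebook per group of dimensions; the implicit
    global codebook is their Cartesian product (k**m landmark points)."""
    d = len(data[0]) // m
    return [kmeans([v[g * d:(g + 1) * d] for v in data], k, seed=g) for g in range(m)]

def pq_encode(codebooks, v):
    """Code = tuple of nearest sub-codeword indices, one per group."""
    d = len(v) // len(codebooks)
    return tuple(min(range(len(cb)), key=lambda i: dist(v[g * d:(g + 1) * d], cb[i]))
                 for g, cb in enumerate(codebooks))

def pq_decode(codebooks, code):
    """Concatenate the chosen sub-codewords to reconstruct the vector."""
    rec = []
    for cb, i in zip(codebooks, code):
        rec.extend(cb[i])
    return tuple(rec)
```

With m groups of k sub-codewords each, the implicit global codebook has k**m landmarks while only m*k sub-codewords are stored, which is the exponential reduction in storage and computation the abstract refers to.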
{ "cite_N": [ "@cite_13", "@cite_27", "@cite_18", "@cite_2" ], "mid": [ "2133995768", "2111006384", "", "2124509324" ], "abstract": [ "We present a simple vector quantizer that combines low distortion with fast search and apply it to approximate nearest neighbor (ANN) search in high dimensional spaces. Leveraging the very same data structure that is used to provide non-exhaustive search, i.e., inverted lists or a multi-index, the idea is to locally optimize an individual product quantizer (PQ) per cell and use it to encode residuals. Local optimization is over rotation and space decomposition, interestingly, we apply a parametric solution that assumes a normal distribution and is extremely fast to train. With a reasonable space and time overhead that is constant in the data size, we set a new state-of-the-art on several public datasets, including a billion-scale one.", "Product quantization is an effective vector quantization approach to compactly encode high-dimensional vectors for fast approximate nearest neighbor (ANN) search. The essence of product quantization is to decompose the original high-dimensional space into the Cartesian product of a finite number of low-dimensional subspaces that are then quantized separately. Optimal space decomposition is important for the performance of ANN search, but still remains unaddressed. In this paper, we optimize product quantization by minimizing quantization distortions w.r.t. the space decomposition and the quantization codebooks. We present two novel methods for optimization: a non-parametric method that alternatively solves two smaller sub-problems, and a parametric method that is guaranteed to achieve the optimal solution if the input data follows some Gaussian distribution. We show by experiments that our optimized approach substantially improves the accuracy of product quantization for ANN search.", "", "This paper introduces a product quantization-based approach for approximate nearest neighbor search. 
The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors." ] }
1512.06925
2248211510
The success of product quantization (PQ) for fast nearest neighbor search depends on the exponentially reduced complexities of both storage and computation with respect to the codebook size. Recent efforts have been focused on employing sophisticated optimization strategies, or seeking more effective models. Residual quantization (RQ) is such an alternative that holds the same property as PQ in terms of the aforementioned complexities. In addition to being a direct replacement of PQ, hybrids of PQ and RQ can yield more gains for approximate nearest neighbor search. This motivated us to propose a novel approach to optimizing RQ and the related hybrid models. With an observation of the general randomness increase in a residual space, we propose a new strategy that jointly learns a local transformation per residual cluster with an ultimate goal to reduce overall quantization errors. We have shown that our approach can achieve significantly better accuracy on nearest neighbor search than both the original and the optimized PQ on several very large scale benchmarks.
Different from PQ, RQ works by quantizing the whole feature space and then recursively applying VQ models to the residuals of the previous quantization level; it is thus a stacked quantization model. In particular, it performs a @math -means clustering on the original feature vectors to construct @math clusters. For the points in each cluster, it computes the residuals between the points and the cluster centers. At the next level, it aggregates the residual vectors of all points and performs another clustering on these residual vectors. This process is applied recursively (stacked) for several levels. In this way, RQ produces sequential-product codebooks. A comprehensive survey of earlier RQ models can be found in @cite_14 . Recent works have shown the effectiveness of RQ for both indexing @cite_39 and data compression @cite_29 tasks in ANN search problems.
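The stacked (residual) scheme can likewise be sketched in pure Python; again, the tiny k-means helper, the parameter choices, and the function names are illustrative assumptions rather than the cited implementations.

```python
import random
from math import dist  # Euclidean distance, Python 3.8+

def kmeans(points, k, iters=20, seed=0):
    """Tiny Lloyd's k-means over a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def nearest(codebook, p):
    return min(range(len(codebook)), key=lambda i: dist(p, codebook[i]))

def rq_train(data, levels, k):
    """Stacked quantization: cluster, subtract the nearest center, recurse
    on the residuals; returns one codebook per level."""
    codebooks, residuals = [], list(data)
    for _ in range(levels):
        cb = kmeans(residuals, k)
        codebooks.append(cb)
        residuals = [tuple(x - c for x, c in zip(p, cb[nearest(cb, p)]))
                     for p in residuals]
    return codebooks

def rq_encode(codebooks, v):
    """Code = one nearest-center index per level, computed on the residual."""
    code, r = [], v
    for cb in codebooks:
        i = nearest(cb, r)
        code.append(i)
        r = tuple(x - c for x, c in zip(r, cb[i]))
    return tuple(code)

def rq_decode(codebooks, code):
    """Reconstruction = sum of the selected codewords across levels."""
    dim = len(codebooks[0][0])
    return tuple(sum(cb[i][j] for cb, i in zip(codebooks, code)) for j in range(dim))
```

Each level quantizes the residuals left by the previous one, so the code for a vector is a sequence of indices (one per level) and reconstruction sums the selected codewords, matching the sequential-product codebooks described above.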
{ "cite_N": [ "@cite_29", "@cite_14", "@cite_39" ], "mid": [ "2119913432", "1970491336", "2019338814" ], "abstract": [ "A recently proposed product quantization method is efficient for large scale approximate nearest neighbor search, however, its performance on unstructured vectors is limited. This paper introduces residual vector quantization based approaches that are appropriate for unstructured vectors. Database vectors are quantized by residual vector quantizer. The reproductions are represented by short codes composed of their quantization indices. Euclidean distance between query vector and database vector is approximated by asymmetric distance, i.e., the distance between the query vector and the reproduction of the database vector. An efficient exhaustive search approach is proposed by fast computing the asymmetric distance. A straight forward non-exhaustive search approach is proposed for large scale search. Our approaches are compared to two state-of-the-art methods, spectral hashing and product quantization, on both structured and unstructured datasets. Results show that our approaches obtain the best results in terms of the trade-off between search quality and memory usage.", "Advances in residual vector quantization (RVQ) are surveyed. Definitions of joint encoder optimality and joint decoder optimality are discussed. Design techniques for RVQs with large numbers of stages and generally different encoder and decoder codebooks are elaborated and extended. Fixed-rate RVQs, and variable-rate RVQs that employ entropy coding are examined. Predictive and finite state RVQs designed and integrated into neural-network based source coding structures are revisited. Successive approximation RVQs that achieve embedded and refinable coding are reviewed. 
A new type of successive approximation RVQ that varies the instantaneous block rate by using different numbers of stages on different blocks is introduced and applied to image waveforms, and a scalar version of the new residual quantizer is applied to image subbands in an embedded wavelet transform coding system.", "This paper presents a k-means based algorithm for approximate nearest neighbor search. The proposed Embedded k-Means algorithm is a two-level clustered index structure which consists of two groups of centroids; additionally, an inverted file is used for recording of the assignments. The coarse-to-fine structure achieves high search efficiency using multi-assignment operations on the coarse level. At the query stage, pruning strategies are utilized to balance the trade-off between search qualities and speeds. Our algorithm is able to explore the neighborhood space of a given query efficiently. Due to its good recall selectivity and memory efficiency, the proposed algorithm is scalable and is able to process very large databases. Experimental results on SIFT and GIST image descriptor datasets show search performance better and comparable to the state-of-the-art approaches with lower memory usage and complexity." ] }
1512.07030
2963452022
Rectified linear activation units are important components for state-of-the-art deep convolutional networks. In this paper, we propose a novel S-shaped rectified linear activation unit (SReLU) to learn both convex and non-convex functions, imitating the multiple function forms given by the two fundamental laws, namely the Weber-Fechner law and the Stevens law, in psychophysics and neural sciences. Specifically, SReLU consists of three piecewise linear functions, which are formulated by four learnable parameters. The SReLU is learned jointly with the training of the whole deep network through back propagation. During the training phase, to initialize SReLU in different layers, we propose a "freezing" method to degenerate SReLU into a predefined leaky rectified linear unit in the initial several training epochs and then adaptively learn the good initial values. SReLU can be universally used in the existing deep networks with negligible additional parameters and computation cost. Experiments with two popular CNN architectures, Network in Network and GoogLeNet on scale-various benchmarks including CIFAR10, CIFAR100, MNIST and ImageNet demonstrate that SReLU achieves remarkable improvement compared to other activation functions.
One disadvantage of APL is that it explicitly forces the rightmost line to have slope 1 and bias 0. Although it is stated that, if the output of APL serves as the input to a linear function @math , the linear function will restore the freedom of the rightmost line lost due to this constraint, we argue that this does not always hold: in many cases in deep networks, the function taking the output of APL as its input is non-linear or unrestorable, such as local response normalization @cite_1 and dropout @cite_1 .
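For concreteness, here is a single-input sketch of SReLU following the three-segment, four-parameter description in the abstract above; the parameter names (thresholds tl, tr and slopes al, ar) and their default values are illustrative assumptions.

```python
def srelu(x, tl=-1.0, al=0.1, tr=1.0, ar=2.0):
    """S-shaped rectified linear unit: identity on the middle segment
    (tl, tr), slope `ar` above `tr`, slope `al` below `tl`; continuous
    at both thresholds. In the paper all four parameters are learned
    jointly with the network via backpropagation."""
    if x >= tr:
        return tr + ar * (x - tr)
    if x > tl:
        return x  # middle segment: slope 1, bias 0
    return tl + al * (x - tl)
```

Unlike APL's fixed rightmost line, the slope `ar` of the rightmost segment is a free parameter here, which is the point of the comparison above.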
{ "cite_N": [ "@cite_1" ], "mid": [ "2618530766" ], "abstract": [ "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5 and 17.0 , respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3 , compared to 26.2 achieved by the second-best entry." ] }
1512.07019
2205525215
A computerized workflow management system may enforce a security policy, specified in terms of authorized actions and constraints, thereby restricting which users can perform particular steps in a workflow. The existence of a security policy may mean it is impossible to find a valid plan (an assignment of steps to authorized users such that all constraints are satisfied). Work in the literature focuses on the workflow satisfiability problem, a problem that outputs a valid plan if the instance is satisfiable (and a negative result otherwise). In this paper, we introduce the ( ), which enables us to solve problems related to workflows and security policies. In particular, we are able to compute a "least bad" plan when some components of the security policy may be violated. In general, is intractable from both the classical and parameterized complexity point of view. We prove there exists a fixed-parameter tractable (FPT) algorithm to compute a Pareto front for if we restrict our attention to user-independent constraints. We also present a second algorithm to compute a Pareto front which uses mixed integer programming (MIP). We compare the performance of both our algorithms on synthetic instances, and show that the FPT algorithm outperforms the MIP-based one by several orders of magnitude on most of the instances. Finally, we study the important question of workflow resiliency and prove new results establishing that known decision problems are fixed-parameter tractable when restricted to user-independent constraints. We then propose a new way of modeling the availability of users and demonstrate that many questions related to resiliency in the context of this new model may be reduced to instances of .
A constraint @math is said to be user-independent (UI, for short) if, for every @math and every permutation @math , we have @math , where @math . Informally, a constraint is UI if it does not depend on the identities of the users. It appears that most constraints that are useful in practice are UI @cite_19 . In particular, cardinality constraints, separation-of-duty and binding-of-duty constraints are UI. In our experiments (), we will consider two particular types of UI counting constraints: an at-least constraint has the form @math , where @math , and is satisfied provided at least @math users are assigned to the steps in @math ; an at-most constraint has the form @math , where @math , and is satisfied provided at most @math users are assigned to the steps in @math . Note that a separation-of-duty constraint @math is the counting constraint @math , and a binding-of-duty constraint @math is the constraint @math .
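The two counting-constraint types, and the user-independence property itself, are straightforward to express in code. A minimal sketch follows, with illustrative step/user names and a plan represented as a dict from steps to users (our own modeling assumption):

```python
def users_assigned(plan, steps):
    """Distinct users that a plan (dict: step -> user) assigns to `steps`."""
    return {plan[s] for s in steps}

def satisfies_at_least(plan, steps, r):
    """At-least counting constraint: >= r distinct users cover the steps."""
    return len(users_assigned(plan, steps)) >= r

def satisfies_at_most(plan, steps, r):
    """At-most counting constraint: <= r distinct users cover the steps."""
    return len(users_assigned(plan, steps)) <= r

def relabel(plan, perm):
    """Apply a user permutation (dict: user -> user) to a plan."""
    return {s: perm[u] for s, u in plan.items()}
```

Separation-of-duty on two steps is the at-least constraint with r = 2, and binding-of-duty is the at-most constraint with r = 1; both are UI because relabeling the users by any permutation never changes the number of distinct users assigned.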
{ "cite_N": [ "@cite_19" ], "mid": [ "2107311933" ], "abstract": [ "The Workflow Satisfiability Problem (WSP) is a problem of practical interest that arises whenever tasks need to be performed by authorized users, subject to constraints defined by business rules. We are required to decide whether there exists a plan - an assignment of tasks to authorized users - such that all constraints are satisfied. It is natural to see the WSP as a subclass of the Constraint Satisfaction Problem (CSP) in which the variables are tasks and the domain is the set of users. What makes the WSP distinctive is that the number of tasks is usually very small compared to the number of users, so it is appropriate to ask for which constraint languages the WSP is fixed-parameter tractable (FPT), parameterized by the number of tasks. This novel approach to the WSP, using techniques from CSP, has enabled us to design a generic algorithm which is FPT for several families of workflow constraints considered in the literature. Furthermore, we prove that the union of FPT languages remains FPT if they satisfy a simple compatibility condition. Lastly, we identify a new FPT constraint language, user-independent constraints, that includes many of the constraints of interest in business processing systems. We demonstrate that our generic algorithm has provably optimal running time O*(2k log k), for this language, where k is the number of tasks." ] }
1512.07019
2205525215
A computerized workflow management system may enforce a security policy, specified in terms of authorized actions and constraints, thereby restricting which users can perform particular steps in a workflow. The existence of a security policy may mean it is impossible to find a valid plan (an assignment of steps to authorized users such that all constraints are satisfied). Work in the literature focuses on the workflow satisfiability problem, a problem that outputs a valid plan if the instance is satisfiable (and a negative result otherwise). In this paper, we introduce the ( ), which enables us to solve problems related to workflows and security policies. In particular, we are able to compute a "least bad" plan when some components of the security policy may be violated. In general, is intractable from both the classical and parameterized complexity point of view. We prove there exists a fixed-parameter tractable (FPT) algorithm to compute a Pareto front for if we restrict our attention to user-independent constraints. We also present a second algorithm to compute a Pareto front which uses mixed integer programming (MIP). We compare the performance of both our algorithms on synthetic instances, and show that the FPT algorithm outperforms the MIP-based one by several orders of magnitude on most of the instances. Finally, we study the important question of workflow resiliency and prove new results establishing that known decision problems are fixed-parameter tractable when restricted to user-independent constraints. We then propose a new way of modeling the availability of users and demonstrate that many questions related to resiliency in the context of this new model may be reduced to instances of .
It is important to stress that our approach works for any UI constraints. We chose to use counting constraints because such constraints have been widely considered in the literature (often known as cardinality constraints @cite_19 ). Moreover, counting constraints can be efficiently encoded using mixed integer programming, so we can use off-the-shelf solvers to solve WSP and thus compare their performance with that of our bespoke algorithms.
{ "cite_N": [ "@cite_19" ], "mid": [ "2107311933" ], "abstract": [ "The Workflow Satisfiability Problem (WSP) is a problem of practical interest that arises whenever tasks need to be performed by authorized users, subject to constraints defined by business rules. We are required to decide whether there exists a plan - an assignment of tasks to authorized users - such that all constraints are satisfied. It is natural to see the WSP as a subclass of the Constraint Satisfaction Problem (CSP) in which the variables are tasks and the domain is the set of users. What makes the WSP distinctive is that the number of tasks is usually very small compared to the number of users, so it is appropriate to ask for which constraint languages the WSP is fixed-parameter tractable (FPT), parameterized by the number of tasks. This novel approach to the WSP, using techniques from CSP, has enabled us to design a generic algorithm which is FPT for several families of workflow constraints considered in the literature. Furthermore, we prove that the union of FPT languages remains FPT if they satisfy a simple compatibility condition. Lastly, we identify a new FPT constraint language, user-independent constraints, that includes many of the constraints of interest in business processing systems. We demonstrate that our generic algorithm has provably optimal running time O*(2k log k), for this language, where k is the number of tasks." ] }
1512.06989
2123081773
The issue of identifiers is crucial in distributed computing. Informally, identities are used for tackling two of the fundamental difficulties that are inherent to deterministic distributed computing, namely: (1) symmetry breaking, and (2) topological information gathering. In the context of local computation, i.e., when nodes can gather information only from nodes at bounded distances, some insight regarding the role of identities has been established. For instance, it was shown that, for large classes of construction problems, the role of the identities can be rather small. However, for the identities to play no role, some other kinds of mechanisms for breaking symmetry must be employed, such as edge-labeling or sense of direction. When it comes to local distributed decision problems, the specification of the decision task does not seem to involve symmetry breaking. Therefore, it is expected that, assuming nodes can gather sufficient information about their neighborhood, one could get rid of the identities, without employing extra mechanisms for breaking symmetry. We tackle this question in the framework of the LOCAL model.
The classes LD, NLD and BPLD defined in @cite_33 are the distributed analogues of the classes P, NP and BPP, respectively. The paper provides structural results, developing a notion of local reduction and establishing completeness results. One of the main results is the existence of a sharp threshold for randomization, above which randomization does not help (at least for hereditary languages). More precisely, the BPLD classes were classified into two: below and above the randomization threshold. In @cite_10 , the authors show that the hereditary assumption can be lifted if we restrict our attention to languages on path topologies. These two results from @cite_33 @cite_10 are used in the current paper in a rather surprising manner. The authors in @cite_10 then "zoom in" to the spectrum of classes below the randomization threshold, and define a hierarchy of an infinite set of BPLD classes, each of which is separated from the class above it in the hierarchy.
{ "cite_N": [ "@cite_10", "@cite_33" ], "mid": [ "2951290586", "2067202579" ], "abstract": [ "The paper tackles the power of randomization in the context of locality by analyzing the ability to boost' the success probability of deciding a distributed language. The main outcome of this analysis is that the distributed computing setting contrasts significantly with the sequential one as far as randomization is concerned. Indeed, we prove that in some cases, the ability to increase the success probability for deciding distributed languages is rather limited. Informally, a (p,q)-decider for a language L is a distributed randomized algorithm which accepts instances in L with probability at least p and rejects instances outside of L with probability at least q. It is known that every hereditary language that can be decided in t rounds by a (p,q)-decider, where p^2+q>1, can actually be decided deterministically in O(t) rounds. In one of our results we give evidence supporting the conjecture that the above statement holds for all distributed languages. This is achieved by considering the restricted case of path topologies. We then turn our attention to the range below the aforementioned threshold, namely, the case where p^2+q . We define B_k(t) to be the set of all languages decidable in at most t rounds by a (p,q)-decider, where p^ 1+1 k +q>1. It is easy to see that every language is decidable (in zero rounds) by a (p,q)-decider satisfying p+q=1. Hence, the hierarchy B_k provides a spectrum of complexity classes between determinism and complete randomization. We prove that all these classes are separated: for every integer k 1, there exists a language L satisfying L B_ k+1 (0) but L B_k(t) for any t=o(n). In addition, we show that B_ (t) does not contain all languages, for any t=o(n). 
Finally, we show that if the inputs can be restricted in certain ways, then the ability to boost the success probability becomes almost null.", "A central theme in distributed network algorithms concerns understanding and coping with the issue of locality . Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for . In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. We consider the standard @math model of computation and define @math (for local decision ) as the class of decision problems that can be solved in @math communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class @math , containing all languages for which there exists a randomized algorithm that runs in @math rounds, accepts correct instances with probability at least @math and rejects incorrect ones with probability at least @math . We show that @math is a threshold for the containment of @math in @math . More precisely, we show that there exists a language that does not belong to @math for any @math but does belong to @math for any @math such that @math . On the other hand, we show that, restricted to hereditary languages, @math , for any function @math and any @math such that @math . In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. 
Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide languages . Finally, we introduce the notion of local reduction, and establish some completeness results." ] }
1512.06989
2123081773
The issue of identifiers is crucial in distributed computing. Informally, identities are used for tackling two of the fundamental difficulties that are inherent to deterministic distributed computing, namely: (1) symmetry breaking, and (2) topological information gathering. In the context of local computation, i.e., when nodes can gather information only from nodes at bounded distances, some insight regarding the role of identities has been established. For instance, it was shown that, for large classes of construction problems, the role of the identities can be rather small. However, for the identities to play no role, some other kinds of mechanisms for breaking symmetry must be employed, such as edge-labeling or sense of direction. When it comes to local distributed decision problems, the specification of the decision task does not seem to involve symmetry breaking. Therefore, it is expected that, assuming nodes can gather sufficient information about their neighborhood, one could get rid of the identities, without employing extra mechanisms for breaking symmetry. We tackle this question in the framework of the LOCAL model.
The precise knowledge of the number of nodes @math was shown in @cite_33 to have a large impact on non-deterministic decision. Indeed, with such knowledge every language can be decided non-deterministically in the model of NLD. We note, however, that the knowledge of an arbitrary upper bound on @math (as assumed here in one of our results) seems to be a much weaker assumption, and, in particular, will not suffice for non-deterministically deciding all languages. In the context of construction problems, it was shown in @cite_28 that in many cases, the knowledge of @math (or an upper bound on @math ) is not essential.
{ "cite_N": [ "@cite_28", "@cite_33" ], "mid": [ "2022781277", "2067202579" ], "abstract": [ "Numerous sophisticated local algorithm were suggested in the literature for various fundamental problems. Notable examples are the MIS and (Δ+1)-coloring algorithms by Barenboim and Elkin [6], by Kuhn [22], and by Panconesi and Srinivasan [33], as well as the OΔ2-coloring algorithm by Linial [27]. Unfortunately, most known local algorithms (including, in particular, the aforementioned algorithms) are non-uniform, that is, they assume that all nodes know good estimations of one or more global parameters of the network, e.g., the maximum degree Δ or the number of nodes n. This paper provides a rather general method for transforming a non-uniform local algorithm into a uniform one. Furthermore, the resulting algorithm enjoys the same asymptotic running time as the original non-uniform algorithm. Our method applies to a wide family of both deterministic and randomized algorithms. Specifically, it applies to almost all of the state of the art non-uniform algorithms regarding MIS and Maximal Matching, as well as to many results concerning the coloring problem. (In particular, it applies to all aforementioned algorithms.) To obtain our transformations we introduce a new distributed tool called pruning algorithms, which we believe may be of independent interest.", "A central theme in distributed network algorithms concerns understanding and coping with the issue of locality . Despite considerable progress, research efforts in this direction have not yet resulted in a solid basis in the form of a fundamental computational complexity theory for locality. Inspired by sequential complexity theory, we focus on a complexity theory for . In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. 
We consider the standard @math model of computation and define @math (for local decision ) as the class of decision problems that can be solved in @math communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class @math , containing all languages for which there exists a randomized algorithm that runs in @math rounds, accepts correct instances with probability at least @math and rejects incorrect ones with probability at least @math . We show that @math is a threshold for the containment of @math in @math . More precisely, we show that there exists a language that does not belong to @math for any @math but does belong to @math for any @math such that @math . On the other hand, we show that, restricted to hereditary languages, @math , for any function @math and any @math such that @math . In addition, we investigate the impact of non-determinism on local decision, and establish some structural results inspired by classical computational complexity theory. Specifically, we show that non-determinism does help, but that this help is limited, as there exist languages that cannot be decided non-deterministically. Perhaps surprisingly, it turns out that it is the combination of randomization with non-determinism that enables to decide languages . Finally, we introduce the notion of local reduction, and establish some completeness results." ] }
1512.07046
2213948213
In today's world, we follow news which is distributed globally. Significant events are reported by different sources and in different languages. In this work, we address the problem of tracking events in a large multilingual stream. Within a recently developed system, Event Registry, we examine two aspects of this problem: how to compare articles in different languages and how to link collections of articles in different languages which refer to the same event. Taking a multilingual stream and clusters of articles from each language, we compare different cross-lingual document similarity measures based on Wikipedia. This allows us to compute the similarity of any two articles regardless of language. Building on previous work, we show there are methods which scale well and can compute a meaningful similarity between articles from languages with little or no direct overlap in the training data. Using this capability, we then propose an approach to link clusters of articles across languages which represent the same event. We provide an extensive evaluation of the system as a whole, as well as an evaluation of the quality and robustness of the similarity measure and the linking algorithm.
. There exist many variants of modelling documents in a language-independent way using probabilistic graphical models. The models include: Joint Probabilistic Latent Semantic Analysis (JPLSA) @cite_5 , Coupled Probabilistic LSA (CPLSA) @cite_5 , Probabilistic Cross-Lingual LSA (PCLLSA) @cite_6 and Polylingual Topic Models (PLTM) @cite_11 which is a Bayesian version of PCLLSA. The methods (except for CPLSA) describe the multilingual document collections as samples from generative probabilistic models, with variations in the assumptions on the model structure. The topics represent latent variables that are used to generate observed variables (words), a process specific to each language. The parameter estimation is posed as an inference problem which is typically intractable and is usually solved using approximate techniques. Most solution variants are based on Gibbs sampling or Variational Inference, which are nontrivial to implement and may require an experienced practitioner to apply. Furthermore, representing a new document as a mixture of topics is another potentially hard inference problem which must be solved.
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_11" ], "mid": [ "", "2111068739", "2033593667" ], "abstract": [ "", "Probabilistic latent topic models have recently enjoyed much success in extracting and analyzing latent topics in text in an unsupervised way. One common deficiency of existing topic models, though, is that they would not work well for extracting cross-lingual latent topics simply because words in different languages generally do not co-occur with each other. In this paper, we propose a way to incorporate a bilingual dictionary into a probabilistic topic model so that we can apply topic models to extract shared latent topics in text data of different languages. Specifically, we propose a new topic model called Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) which extends the Probabilistic Latent Semantic Analysis (PLSA) model by regularizing its likelihood function with soft constraints defined based on a bilingual dictionary. Both qualitative and quantitative experimental results show that the PCLSA model can effectively extract cross-lingual latent topics from multilingual text data.", "Topic models are a useful tool for analyzing large text collections, but have previously been applied in only monolingual, or at most bilingual, contexts. Meanwhile, massive collections of interlinked documents in dozens of languages, such as Wikipedia, are now widely available, calling for tools that can characterize content in many languages. We introduce a polylingual topic model that discovers topics aligned across multiple languages. We explore the model's characteristics using two large corpora, each with over ten different languages, and demonstrate its usefulness in supporting machine translation and tracking topic trends across languages." ] }
1512.07046
2213948213
In today's world, we follow news which is distributed globally. Significant events are reported by different sources and in different languages. In this work, we address the problem of tracking events in a large multilingual stream. Within a recently developed system, Event Registry, we examine two aspects of this problem: how to compare articles in different languages and how to link collections of articles in different languages which refer to the same event. Taking a multilingual stream and clusters of articles from each language, we compare different cross-lingual document similarity measures based on Wikipedia. This allows us to compute the similarity of any two articles regardless of language. Building on previous work, we show there are methods which scale well and can compute a meaningful similarity between articles from languages with little or no direct overlap in the training data. Using this capability, we then propose an approach to link clusters of articles across languages which represent the same event. We provide an extensive evaluation of the system as a whole, as well as an evaluation of the quality and robustness of the similarity measure and the linking algorithm.
. Finally, related work includes monolingual approaches that treat documents written in different languages in a monolingual fashion. The intuition is that named entities (for example, "Obama") and cognate words (for example, "tsunami") are written in the same or similar fashion in many languages. For example, the Cross-Language Character n-Gram Model (CL-CNG) @cite_20 represents documents as bags of character @math -grams. Another approach is to use language-dependent keyword lists based on cognate words @cite_10 . These approaches may be suitable for comparing documents written in languages that share a writing system, which does not apply to the case of global news tracking.
{ "cite_N": [ "@cite_10", "@cite_20" ], "mid": [ "2135908127", "2028776121" ], "abstract": [ "The Europe Media Monitor system (EMM) gathers and aggregates an average of 50,000 newspaper articles per day in over 40 languages. To manage the information overflow, it was decided to group similar articles per day and per language into clusters and to link daily clusters over time into stories. A story automatically comes into existence when related groups of articles occur within a 7-day window. While cross-lingual links across 19 languages for individual news clusters have been displayed since 2004 as part of a freely accessible online application (http: press.jrc.it NewsExplorer), the newest development is work on linking entire stories across languages. The evaluation of the monolingual aggregation of historical clusters into stories and of the linking of stories across languages yielded mostly satisfying results.", "Cross-language plagiarism detection deals with the automatic identification and extraction of plagiarism in a multilingual setting. In this setting, a suspicious document is given, and the task is to retrieve all sections from the document that originate from a large, multilingual document collection. Our contributions in this field are as follows: (1) a comprehensive retrieval process for cross-language plagiarism detection is introduced, highlighting the differences to monolingual plagiarism detection, (2) state-of-the-art solutions for two important subtasks are reviewed, (3) retrieval models for the assessment of cross-language similarity are surveyed, and, (4) the three models CL-CNG, CL-ESA and CL-ASA are compared. Our evaluation is of realistic scale: it relies on 120,000 test documents which are selected from the corpora JRC-Acquis and Wikipedia, so that for each test document highly similar documents are available in all of the six languages English, German, Spanish, French, Dutch, and Polish. 
The models are employed in a series of ranking tasks, and more than 100 million similarities are computed with each model. The results of our evaluation indicate that CL-CNG, despite its simple approach, is the best choice to rank and compare texts across languages if they are syntactically related. CL-ESA almost matches the performance of CL-CNG, but on arbitrary pairs of languages. CL-ASA works best on \"exact\" translations but does not generalize well." ] }
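The CL-CNG model discussed above reduces to representing each document as a bag of character n-grams and comparing the bags with cosine similarity, which keeps cognates and named entities aligned across languages that share a writing system. A minimal pure-Python sketch (the trigram size and the example strings are illustrative assumptions, not taken from the cited evaluation):

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Bag of character n-grams, lowercased; spaces kept as context."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Cognate words ("tsunami") keep the score high across related languages,
# here English vs. German; an unrelated sentence scores much lower.
sim_related = cosine(char_ngrams("tsunami warning issued"),
                     char_ngrams("tsunami warnung ausgegeben"))
sim_unrelated = cosine(char_ngrams("tsunami warning issued"),
                       char_ngrams("stock market closed higher"))
```

As the surrounding text notes, this only works when the languages share a writing system; it needs no training data at all, which is why it is a common baseline.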
1512.06790
2258908750
Image segmentation and 3D pose estimation are two key cogs in any algorithm for scene understanding. However, state-of-the-art CRF-based models for image segmentation rely mostly on 2D object models to construct top-down high-order potentials. In this paper, we propose new top-down potentials for image segmentation and pose estimation based on the shape and volume of a 3D object model. We show that these complex top-down potentials can be easily decomposed into standard forms for efficient inference in both the segmentation and pose estimation tasks. Experiments on a car dataset show that knowledge of segmentation helps perform pose estimation better and vice versa.
The CRF model for image segmentation proposed in @cite_25 @cite_17 is defined on a pixel graph and uses costs based on shape, texture, color, location and edge cues. @cite_26 @cite_19 extended the CRF approach to include long-range interactions between non-adjacent nodes in the graph by using costs defined over larger cliques and in a hierarchical manner. @cite_7 extended the CRF approach to a superpixel graph, where an energy consisting of unary and pairwise potentials captures appearance over superpixel neighbourhoods and local interactions between adjacent superpixels. @cite_0 @cite_23 also introduced new top-down costs based on the bag-of-features classifier @cite_14 , which captures interactions between all regions of an image that have the same label.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_7", "@cite_0", "@cite_19", "@cite_23", "@cite_25", "@cite_17" ], "mid": [ "2535516436", "1625255723", "2545985378", "", "", "70785481", "2100588357", "" ], "abstract": [ "Most methods for object class segmentation are formulated as a labelling problem over a single choice of quantisation of an image space - pixels, segments or group of segments. It is well known that each quantisation has its fair share of pros and cons; and the existence of a common optimal quantisation level suitable for all object categories is highly unlikely. Motivated by this observation, we propose a hierarchical random field model, that allows integration of features computed at different levels of the quantisation hierarchy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalises much of the previous work based on pixels or segments. We evaluate its efficiency on some of the most challenging data-sets for object class segmentation, and show it obtains state-of-the-art results.", "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.", "We propose a method to identify and localize object classes in images. 
Instead of operating at the pixel level, we advocate the use of superpixels as the basic unit of a class segmentation or pixel localization scheme. To this end, we construct a classifier on the histogram of local features found in each superpixel. We regularize this classifier by aggregating histograms in the neighborhood of each superpixel and then refine our results further by using the classifier in a conditional random field operating on the superpixel graph. Our proposed method exceeds the previously published state-of-the-art on two challenging datasets: Graz-02 and the PASCAL VOC 2007 Segmentation Challenge.", "", "", "Representing objects using elements from a visual dictionary is widely used in object detection and categorization. Prior work on dictionary learning has shown improvements in the accuracy of object detection and categorization by learning discriminative dictionaries. However none of these dictionaries are learnt for joint object categorization and segmentation. Moreover, dictionary learning is often done separately from classifier training, which reduces the discriminative power of the model. In this paper, we formulate the semantic segmentation problem as a joint categorization, segmentation and dictionary learning problem. To that end, we propose a latent conditional random field (CRF) model in which the observed variables are pixel category labels and the latent variables are visual word assignments. The CRF energy consists of a bottom-up segmentation cost, a top-down bag of (latent) words categorization cost, and a dictionary learning cost. Together, these costs capture relationships between image features and visual words, relationships between visual words and object categories, and spatial relationships among visual words. 
The segmentation, categorization, and dictionary learning parameters are learnt jointly using latent structural SVMs, and the segmentation and visual words assignments are inferred jointly using energy minimization techniques. Experiments on the Graz02 and CamVid datasets demonstrate the performance of our approach.", "We propose semantic texton forests, efficient and powerful new low-level features. These are ensembles of decision trees that act directly on image pixels, and therefore do not need the expensive computation of filter-bank responses or local descriptors. They are extremely fast to both train and test, especially compared with k-means clustering and nearest-neighbor assignment of feature descriptors. The nodes in the trees provide (i) an implicit hierarchical clustering into semantic textons, and (ii) an explicit local classification estimate. Our second contribution, the bag of semantic textons, combines a histogram of semantic textons over an image region with a region prior category distribution. The bag of semantic textons is computed over the whole image for categorization, and over local rectangular regions for segmentation. Including both histogram and region prior allows our segmentation algorithm to exploit both textural and semantic context. Our third contribution is an image-level prior for segmentation that emphasizes those categories that the automatic categorization believes to be present. We evaluate on two datasets including the very challenging VOC 2007 segmentation dataset. Our results significantly advance the state-of-the-art in segmentation accuracy, and furthermore, our use of efficient decision forests gives at least a five-fold increase in execution speed.", "" ] }
1512.06600
2215738437
Flexibility in power demand, diverse usage patterns and the storage capability of plug-in electric vehicles (PEVs) remarkably increase the elasticity of residential electricity demand. This elasticity can be utilized to shape the daily aggregated demand profile and/or alter the instantaneous demand of a system wherein a large number of residential PEVs share one electricity retailer or an aggregator. In this paper, we propose a demand response (DR) technique to manage vehicle-to-grid (V2G) enabled PEVs' electricity assignments (charging and discharging) in order to reduce the overall electricity procurement costs for a retailer bidding into a two-settlement electricity market, i.e., a day-ahead (DA) and a spot or real-time (RT) market. We show that our approach is decentralized, scalable, fast converging and does not violate users' privacy. Extensive simulations show that significant overall cost savings can be achieved for a retailer bidding into an operational electricity market by using the proposed algorithm. This technique becomes more necessary when the power grid accommodates a large number of intermittent energy resources, wherein RT demand altering is crucial due to more likely contingencies and hence more RT price fluctuations and even occurring the so-called . Finally, such a retailer could offer better deals to customers as well.
The authors in @cite_10 present a two-stage stochastic optimization approach for an electric vehicle (EV) aggregator engaging in DA and regulation markets to reduce the energy cost by optimal bidding. Nevertheless, their proposed method imposes some inconvenience on the customers, and the aggregator should have access to private information of the EVs, e.g., arrival time, departure time and battery capacity. The same issue exists in the proposed method in @cite_13 . In @cite_9 , the author discusses how a time-shiftable load, which may comprise several time-shiftable subloads, can send demand bids to DA and RT markets to minimize its electricity procurement cost. Although this paper provides optimal closed-form solutions for bidding, they do not seem to be applicable to the residential sector, wherein the retailer does not have detailed information about customers' preferences due to privacy concerns.
{ "cite_N": [ "@cite_13", "@cite_9", "@cite_10" ], "mid": [ "2225115145", "", "2032489938" ], "abstract": [ "We address the optimal bidding problem of an aggregator that manages the charging of a group of plug-in electric vehicles. The aggregator purchases energy in the day-ahead market, and offers capacity in the frequency regulation market. The charging flexibility of the fleet is expressed as a set of time-varying power and energy constraints. The impact of the aggregator's demand on day-ahead market prices is endogenously modelled by formulating the problem as a bilevel problem: The upper level represents the aggregator's cost minimisation, and the lower level represents the market clearing. We explore several settings of the regulation market, analysing the impact of restrictions that may apply on the capacity offers (e.g., symmetric bids). Moreover, we compare the options of providing regulation with unidirectional charging, and with vehicle-to-grid (V2G). We test the proposed method in a case study for Switzerland, with historical day-ahead and regulation market data, and realistic driving patterns.", "", "This paper determines the optimal bidding strategy of an electric vehicle (EV) aggregator participating in day-ahead energy and regulation markets using stochastic optimization. Key sources of uncertainty affecting the bidding strategy are identified and incorporated in the stochastic optimization model. The aggregator portfolio optimization model should include inevitable deviations between day-ahead cleared bids and actual real-time energy purchases as well as uncertainty for the energy content of regulation signals in order to ensure profit maximization and reliable reserve provision. Energy deviations are characterized as “uninstructed” or “instructed” depending on whether or not the responsibility resides with the aggregator. Price deviations and statistical characteristics of regulation signals are also investigated. 
Finally, a new battery model is proposed for better approximation of the battery charging characteristic. Test results with an EV aggregator representing one thousand EVs are presented and discussed." ] }
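The two-stage structure discussed in these records (commit a day-ahead quantity, then settle the residual demand at an uncertain real-time price) can be illustrated with a toy expected-cost minimization. This is only a sketch of the general two-settlement idea, not the cited aggregator models; the prices, scenario probabilities and demand figure are made-up assumptions:

```python
DA_PRICE = 40.0                                           # $/MWh, known at bid time
RT_SCENARIOS = [(0.5, 30.0), (0.3, 60.0), (0.2, 120.0)]   # (probability, RT $/MWh)
DEMAND = 10.0                                             # MWh that must be served

def expected_cost(q):
    """Day-ahead cost of buying q plus expected RT cost of the shortfall.

    Surplus day-ahead energy is assumed unsellable (hence the max)."""
    rt = sum(p * price * max(DEMAND - q, 0.0) for p, price in RT_SCENARIOS)
    return DA_PRICE * q + rt

# Grid search over candidate DA quantities in 0.1 MWh steps. Because the
# expected RT price (57 $/MWh) exceeds the DA price, covering the full
# demand day-ahead minimizes expected cost in this toy instance.
best_q = min((q / 10 for q in range(0, 151)), key=expected_cost)
```

Real formulations replace the grid search with a stochastic program and add regulation revenue, battery constraints and bid-price feedback, but the commit-then-settle cost structure is the same.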
1512.06600
2215738437
Flexibility in power demand, diverse usage patterns and the storage capability of plug-in electric vehicles (PEVs) remarkably increase the elasticity of residential electricity demand. This elasticity can be utilized to shape the daily aggregated demand profile and/or alter the instantaneous demand of a system wherein a large number of residential PEVs share one electricity retailer or an aggregator. In this paper, we propose a demand response (DR) technique to manage vehicle-to-grid (V2G) enabled PEVs' electricity assignments (charging and discharging) in order to reduce the overall electricity procurement costs for a retailer bidding into a two-settlement electricity market, i.e., a day-ahead (DA) and a spot or real-time (RT) market. We show that our approach is decentralized, scalable, fast converging and does not violate users' privacy. Extensive simulations show that significant overall cost savings can be achieved for a retailer bidding into an operational electricity market by using the proposed algorithm. This technique becomes more necessary when the power grid accommodates a large number of intermittent energy resources, wherein RT demand altering is crucial due to more likely contingencies and hence more RT price fluctuations and even occurring the so-called . Finally, such a retailer could offer better deals to customers as well.
In @cite_4 , charging and discharging of PEVs are managed in order to maximize the social and individual welfare functions. However, in the residential sector, defining appropriate utility and welfare functions for individual users is very ambiguous.
{ "cite_N": [ "@cite_4" ], "mid": [ "1997915897" ], "abstract": [ "Electric vehicles (EVs) will play an important role in the future smart grid because of their capabilities of storing electrical energy in their batteries during off-peak hours and supplying the stored energy to the power grid during peak hours. In this paper, we consider a power system with an aggregator and multiple customers with EVs and propose novel electricity load scheduling algorithms which, unlike previous works, jointly consider the load scheduling for appliances and the energy trading using EVs. Specifically, we allow customers to determine how much energy to purchase from or to sell to the aggregator while taking into consideration the load demands of their residential appliances and the associated electricity bill. We propose two different approaches: a collaborative and a non-collaborative approach. In the collaborative approach, we develop an optimal distributed load scheduling algorithm that maximizes the social welfare of the power system. In the non-collaborative approach, we model the energy scheduling problem as a non-cooperative game among self-interested customers, where each customer determines its own load scheduling and energy trading to maximize its own profit. In order to resolve the unfairness between heavy and light customers in the non-collaborative approach, we propose a tiered billing scheme that can control the electricity rates for customers according to their different energy consumption levels. In both approaches, we also consider the uncertainty in the load demands, with which customers' actual energy consumption may vary from the scheduled energy consumption. To study the impact of the uncertainty, we use the worst-case-uncertainty approach and develop distributed load scheduling algorithms that provide the guaranteed minimum performances in uncertain environments. Subsequently, we show when energy trading leads to an increase in the social welfare and we determine what are the customers' incentives to participate in the energy trading in various usage scenarios including practical environments with uncertain load demands." ] }
1512.06228
2219116735
We use machine learning for designing a medium frequency trading strategy for a portfolio of 5 year and 10 year US Treasury note futures. We formulate this as a classification problem where we predict the weekly direction of movement of the portfolio using features extracted from a deep belief network trained on technical indicators of the portfolio constituents. The experimentation shows that the resulting pipeline is effective in making a profitable trade.
Applying machine learning in finance is a challenging problem owing to the low signal-to-noise ratio. Moreover, domain expertise is essential for engineering features that assist in solving an appropriate classification or regression problem. Most prior work in this area concentrates on popular ML techniques such as SVMs @cite_4 , @cite_7 , @cite_0 and neural networks @cite_1 , @cite_9 , @cite_6 coupled with rigorously designed features, and the general focus is financial time series forecasting. With deep learning techniques, we can learn a latent representation of the raw features and use this representation for further analysis @cite_10 . In this paper, we construct a minimal-risk portfolio of 5-year and 10-year T-note futures and use a machine learning pipeline to predict the weekly direction of movement of the portfolio using features derived from a deep belief network. The prediction from the pipeline is then used in a day trading strategy. Using derivatives instead of the underlying entities themselves leads to a more tractable problem, since derivatives are less volatile and hence exhibit clearer patterns.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_10" ], "mid": [ "2032170121", "1988518729", "1587239851", "2029803196", "1515719066", "2012079387", "2138857742" ], "abstract": [ "Support vector machine (SVM) is a very specific type of learning algorithms characterized by the capacity control of the decision function, the use of the kernel functions and the sparsity of the solution. In this paper, we investigate the predictability of financial movement direction with SVM by forecasting the weekly movement direction of NIKKEI 225 index. To evaluate the forecasting ability of SVM, we compare its performance with those of Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks. The experiment results show that SVM outperforms the other classification methods. Further, we propose a combining model by integrating SVM with the other classification methods. The combining model performs best among all the forecasting methods.", "This paper deals with the application of a novel neural network technique, support vector machine (SVM), in financial time series forecasting. The objective of this paper is to examine the feasibility of SVM in financial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast financial time series.", "From the Publisher: When applied to the world of finance, neural networks are automated trading systems, based on mapping inputs and outputs for forecasting probable future values. In Neural Networks for Financial Forecasting - the first book to focus on the role of neural networks specifically in price forecasting - traders are provided with a solid foundation that explains how neural nets work, what they can accomplish, and how to construct, use, and apply them for maximum profit. It is written by an acknowledged authority who is, himself, the developer of several successful networks. Neural Networks for Financial Forecasting enables you to develop a usable, state-of-the-art network from scratch all the way through completion of training. There are spreadsheets and graphs throughout to illustrate key points, and an appendix of valuable information, including neural network software suppliers and related publications.", "Abstract Artificial neural networks are universal and highly flexible function approximators first used in the fields of cognitive science and engineering. In recent years, neural network applications in finance for such tasks as pattern recognition, classification, and time series forecasting have dramatically increased. However, the large number of parameters that must be selected to develop a neural network forecasting model have meant that the design process still involves much trial and error. The objective of this paper is to provide a practical introductory guide in the design of a neural network for forecasting economic time series data. An eight-step procedure to design a neural network forecasting model is explained including a discussion of tradeoffs in parameter selection, some common pitfalls, and points of disagreement among practitioners.", "From the Publisher: Neural networks are revolutionizing virtually every aspect of financial and investment decision making. Financial firms worldwide are employing neural networks to tackle difficult tasks involving intuitive judgement or requiring the detection of data patterns which elude conventional analytic techniques. Many observers believe neural networks will eventually outperform even the best traders and investors. Neural networks are already being used to trade the securities markets, to forecast the economy and to analyze credit risk. Indeed, apart from the U.S. Department of Defense, the financial services industry has invested more money in neural network research than any other industry or government body. Unlike other types of artificial intelligence, neural networks mimic to some extent the processing characteristics of the human brain. As a result, neural networks can draw conclusions from incomplete data, recognize patterns as they unfold in real time and forecast the future. They can even learn from past mistakes! In Neural Networks in Finance and Investing, Robert Trippi and Efraim Turban have assembled a stellar collection of articles by experts in industry and academia on the applications of neural networks in this important arena. They discuss neural network successes and failures, as well as identify the vast unrealized potential of neural networks in numerous specialized areas of financial decision making. Topics include neural network fundamentals and overview, analysis of financial condition, business failure prediction, debt risk assessment, security market applications, and neural network approaches to financial forecasting. Nowhere else will the finance professional find such an exciting and relevant in-depth examination of neural networks. Individual chapters discuss how to use neural networks to forecast the stock market, to trade commodities, to assess bond and mortgage risk, to predict bankruptcy and to implement investment strategies. Taken toge", "Abstract Support vector machines (SVMs) are promising methods for the prediction of financial time-series because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in financial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction.", "Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this questions is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training." ] }
1512.06238
2264490347
In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
Another line of work that combines decision-making and learning is online learning (see the survey @cite_46 ). In online learning, a player iteratively makes decisions. For each decision, the player incurs a cost, and the cost function for the current iteration is revealed immediately afterward. The objective is to minimize regret, which is the difference between the sum of the costs of the player's decisions and the sum of the costs of the best fixed decision. The fundamental difference from our framework is that decisions are made online after each observation, instead of offline given a collection of observations. In addition, the benchmarks, regret in one case and the optimal solution in the other, are not comparable.
{ "cite_N": [ "@cite_46" ], "mid": [ "2513180554" ], "abstract": [ "This monograph portrays optimization as a process. In many practical applications the environment is so complex that it is infeasible to lay out a comprehensive theoretical model and use classical algorithmic theory and mathematical optimization. It is necessary as well as beneficial to take a robust approach, by applying an optimization method that learns as one goes along, learning from experience as more aspects of the problem are observed. This view of optimization as a process has become prominent in varied fields and has led to some spectacular success in modeling and systems that are now part of our daily lives." ] }
1512.06238
2264490347
In this paper we consider the following question: can we optimize objective functions from the training data we use to learn them? We formalize this question through a novel framework we call optimization from samples (OPS). In OPS, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. While there are interesting classes of functions that can be optimized from samples, our main result is an impossibility. We show that there are classes of functions which are statistically learnable and optimizable, but for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that there is no constant factor approximation for maximizing coverage functions under a cardinality constraint using polynomially-many samples drawn from any distribution. We also show tight approximation guarantees for maximization under a cardinality constraint of several interesting classes of functions including unit-demand, additive, and general monotone submodular functions, as well as a constant factor approximation for monotone submodular functions with bounded curvature.
In addition to the PMAC learning results mentioned in the introduction for coverage functions, there are multiple learning results for submodular functions. Monotone submodular functions are @math - PMAC learnable over product distributions for some constant @math under some assumptions @cite_38 . Impossibility results arise for general distributions, in which case submodular functions are not @math - PMAC learnable @cite_38 . Finally, submodular functions can be @math - PMAC learned over the uniform distribution on all sets with running time and sample complexity exponential in @math and polynomial in @math @cite_31 . This exponential dependence is necessary, since @math samples are needed to learn submodular functions with @math -error of @math over this distribution @cite_17 .
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_17" ], "mid": [ "", "2950933581", "2963852843" ], "abstract": [ "", "We investigate the approximability of several classes of real-valued functions by functions of a small number of variables ( juntas ). Our main results are tight bounds on the number of variables required to approximate a function @math within @math -error @math over the uniform distribution: 1. If @math is submodular, then it is @math -close to a function of @math variables. This is an exponential improvement over previously known results. We note that @math variables are necessary even for linear functions. 2. If @math is fractionally subadditive (XOS) it is @math -close to a function of @math variables. This result holds for all functions with low total @math -influence and is a real-valued analogue of Friedgut's theorem for boolean functions. We show that @math variables are necessary even for XOS functions. As applications of these results, we provide learning algorithms over the uniform distribution. For XOS functions, we give a PAC learning algorithm that runs in time @math . For submodular functions we give an algorithm in the more demanding PMAC learning model (Balcan and Harvey, 2011) which requires a multiplicative @math factor approximation with probability at least @math over the target distribution. Our uniform distribution algorithm runs in time @math . This is the first algorithm in the PMAC model that over the uniform distribution can achieve a constant approximation factor arbitrarily close to 1 for all submodular functions. As follows from the lower bounds in (, 2013) both of these algorithms are close to optimal. We also give applications for proper learning, testing and agnostic learning with value queries of these classes.", "We study the complexity of approximate representation and learning of submodular functions over the uniform distribution on the Boolean hypercube {0,1}^n. Our main result is the following structural theorem: any submodular function is ε-close in ℓ2 to a real-valued decision tree (DT) of depth O(1/ε^2). This immediately implies that any submodular function is ε-close to a function of at most" ] }
1512.06578
2062903085
In e-healthcare record systems (EHRS), attribute-based encryption (ABE) appears as a natural way to achieve fine-grained access control on health records. Some proposals exploit key-policy ABE (KP-ABE) to protect privacy in such a way that all users are associated with specific access policies and only the ciphertexts matching the users' access policies can be decrypted. An issue with KP-ABE is that it requires an a priori formulation of access policies during key generation, which is not always practicable in EHRS because the policies to access health records are sometimes determined after key generation. In this paper, we revisit KP-ABE and propose a dynamic ABE paradigm, referred to as access policy redefinable ABE (APR-ABE). To address the above issue, APR-ABE allows users to redefine their access policies and delegate keys for the redefined ones; hence, a priori precise policies are no longer mandatory. We construct an APR-ABE scheme with short ciphertexts and prove its full security in the standard model under several static assumptions.
ABE is a versatile cryptographic primitive that allows fine-grained access control over encrypted files. ABE was introduced by Sahai and Waters @cite_9 . Goyal et al. @cite_18 formulated two complementary forms of ABE, namely key-policy ABE (KP-ABE) and ciphertext-policy ABE (CP-ABE), and presented the first KP-ABE scheme. The first CP-ABE scheme was proposed by Bethencourt et al. in @cite_16 , although its security proof relies on the generic bilinear group model. Ostrovsky et al. @cite_12 developed a KP-ABE scheme that handles any non-monotone access structure; hence, negated clauses can be included in the policies. Waters @cite_14 presented a CP-ABE construction that allows any attribute access structure to be expressed by a linear secret sharing scheme (LSSS). Attrapadung et al. @cite_7 gave a KP-ABE scheme permitting non-monotone access structures and constant-size ciphertexts. To reduce decryption time, Hohenberger and Waters @cite_15 presented a KP-ABE scheme with fast decryption.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_7", "@cite_9", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "2138001464", "1510795740", "2117616411", "1498316612", "66767074", "2108072891", "2076046175" ], "abstract": [ "As more sensitive data is shared and stored by third-party sites on the Internet, there will be a need to encrypt data stored at these sites. One drawback of encrypting data, is that it can be selectively shared only at a coarse-grained level (i.e., giving another party your private key). We develop a new cryptosystem for fine-grained sharing of encrypted data that we call Key-Policy Attribute-Based Encryption (KP-ABE). In our cryptosystem, ciphertexts are labeled with sets of attributes and private keys are associated with access structures that control which ciphertexts a user is able to decrypt. We demonstrate the applicability of our construction to sharing of audit-log information and broadcast encryption. Our construction supports delegation of private keys which subsumes Hierarchical Identity-Based Encryption (HIBE).", "We present a new methodology for realizing Ciphertext-Policy Attribute Encryption (CP-ABE) under concrete and noninteractive cryptographic assumptions in the standard model. Our solutions allow any encryptor to specify access control in terms of any access formula over the attributes in the system. In our most efficient system, ciphertext size, encryption, and decryption time scales linearly with the complexity of the access formula. The only previous work to achieve these parameters was limited to a proof in the generic group model. We present three constructions within our framework. Our first system is proven selectively secure under a assumption that we call the decisional Parallel Bilinear Diffie-Hellman Exponent (PBDHE) assumption which can be viewed as a generalization of the BDHE assumption. Our next two constructions provide performance tradeoffs to achieve provable security respectively under the (weaker) decisional Bilinear-Diffie-Hellman Exponent and decisional Bilinear Diffie-Hellman assumptions.", "Attribute-based encryption (ABE), as introduced by Sahai and Waters, allows for fine-grained access control on encrypted data. In its key-policy flavor, the primitive enables senders to encrypt messages under a set of attributes and private keys are associated with access structures that specify which ciphertexts the key holder will be allowed to decrypt. In most ABE systems, the ciphertext size grows linearly with the number of ciphertext attributes and the only known exceptions only support restricted forms of threshold access policies. This paper proposes the first key-policy attribute-based encryption (KP-ABE) schemes allowing for non-monotonic access structures (i.e., that may contain negated attributes) and with constant ciphertext size. Towards achieving this goal, we first show that a certain class of identity-based broadcast encryption schemes generically yields monotonic KPABE systems in the selective set model. We then describe a new efficient identity-based revocation mechanism that, when combined with a particular instantiation of our general monotonic construction, gives rise to the first truly expressive KP-ABE realization with constant-size ciphertexts. The downside of these new constructions is that private keys have quadratic size in the number of attributes. On the other hand, they reduce the number of pairing evaluations to a constant, which appears to be a unique feature among expressive KP-ABE schemes.", "We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as set of descriptive attributes. A Fuzzy IBE scheme allows for a private key for an identity, ω, to decrypt a ciphertext encrypted with an identity, ω ′, if and only if the identities ω and ω ′ are close to each other as measured by the “set overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy-IBE can be used for a type of application that we term “attribute-based encryption”. In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model.", "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption.", "In several distributed systems a user should only be able to access data if a user posses a certain set of credentials or attributes. Currently, the only method for enforcing such policies is to employ a trusted server to store the data and mediate access control. However, if any server storing the data is compromised, then the confidentiality of the data will be compromised. In this paper we present a system for realizing complex access control on encrypted data that we call ciphertext-policy attribute-based encryption. By using our techniques encrypted data can be kept confidential even if the storage server is untrusted; moreover, our methods are secure against collusion attacks. Previous attribute-based encryption systems used attributes to describe the encrypted data and built policies into user's keys; while in our system attributes are used to describe a user's credentials, and a party encrypting data determines a policy for who can decrypt. Thus, our methods are conceptually closer to traditional access control methods such as role-based access control (RBAC). In addition, we provide an implementation of our system and give performance measurements.", "We construct an Attribute-Based Encryption (ABE) scheme that allows a user's private key to be expressed in terms of any access formula over attributes. Previous ABE schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the Decisional Bilinear Diffie-Hellman (BDH) assumption. Furthermore, the performance of our new scheme compares favorably with existing, less-expressive schemes." ] }
1512.06578
2062903085
In e-healthcare record systems (EHRS), attribute-based encryption (ABE) appears as a natural way to achieve fine-grained access control on health records. Some proposals exploit key-policy ABE (KP-ABE) to protect privacy in such a way that all users are associated with specific access policies and only the ciphertexts matching the users' access policies can be decrypted. An issue with KP-ABE is that it requires an a priori formulation of access policies during key generation, which is not always practicable in EHRS because the policies to access health records are sometimes determined after key generation. In this paper, we revisit KP-ABE and propose a dynamic ABE paradigm, referred to as access policy redefinable ABE (APR-ABE). To address the above issue, APR-ABE allows users to redefine their access policies and delegate keys for the redefined ones; hence, a priori precise policies are no longer mandatory. We construct an APR-ABE scheme with short ciphertexts and prove its full security in the standard model under several static assumptions.
The flexible encryption property of ABE has made it widely adopted in e-healthcare record systems. Li et al. @cite_10 leveraged ABE to encrypt personal health records in cloud computing and exploited multi-authority ABE to achieve a high degree of privacy for the records. Yu et al. @cite_22 adopted and tailored ABE for wireless sensors in e-healthcare systems. Liang et al. @cite_2 also applied ABE to secure private health records in health social networks. In their solution, users can verify each other's identifiers without seeing sensitive attributes, which yields a high level of privacy. Noting that applying KP-ABE to distributed sensors in e-healthcare systems introduces several challenges regarding attribute and user revocation, Hur @cite_23 proposed an access control scheme based on KP-ABE with efficient attribute and user revocation capabilities.
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_23", "@cite_2" ], "mid": [ "2118875948", "2141274790", "2080636584", "2144710287" ], "abstract": [ "Personal health record (PHR) is an emerging patient-centric model of health information exchange, which is often outsourced to be stored at a third party, such as cloud providers. However, there have been wide privacy concerns as personal health information could be exposed to those third party servers and to unauthorized parties. To assure the patients' control over access to their own PHRs, it is a promising method to encrypt the PHRs before outsourcing. Yet, issues such as risks of privacy exposure, scalability in key management, flexible access, and efficient user revocation, have remained the most important challenges toward achieving fine-grained, cryptographically enforced data access control. In this paper, we propose a novel patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers. To achieve fine-grained and scalable data access control for PHRs, we leverage attribute-based encryption (ABE) techniques to encrypt each patient's PHR file. Different from previous works in secure data outsourcing, we focus on the multiple data owner scenario, and divide the users in the PHR system into multiple security domains that greatly reduces the key management complexity for owners and users. A high degree of patient privacy is guaranteed simultaneously by exploiting multiauthority ABE. Our scheme also enables dynamic modification of access policies or file attributes, supports efficient on-demand user attribute revocation and break-glass access under emergency scenarios. Extensive analytical and experimental results are presented which show the security, scalability, and efficiency of our proposed scheme.", "Distributed sensor data storage and retrieval have gained increasing popularity in recent years for supporting various applications. While distributed architecture enjoys a more robust and fault-tolerant wireless sensor network (WSN), such architecture also poses a number of security challenges especially when applied in mission-critical applications such as battlefield and e-healthcare. First, as sensor data are stored and maintained by individual sensors and unattended sensors are easily subject to strong attacks such as physical compromise, it is significantly harder to ensure data security. Second, in many mission-critical applications, fine-grained data access control is a must as illegal access to the sensitive data may cause disastrous results and or be prohibited by the law. Last but not least, sensor nodes usually are resource-constrained, which limits the direct adoption of expensive cryptographic primitives. To address the above challenges, we propose, in this paper, a distributed data access control scheme that is able to enforce fine-grained access control over sensor data and is resilient against strong attacks such as sensor compromise and user colluding. The proposed scheme exploits a novel cryptographic primitive called attribute-based encryption (ABE), tailors, and adapts it for WSNs with respect to both performance and security requirements. The feasibility of the scheme is demonstrated by experiments on real sensor platforms. To our best knowledge, this paper is the first to realize distributed fine-grained data access control for WSNs.", "Distributed sensor networks are becoming a robust solution that allows users to directly access data generated by individual sensors. In many practical scenarios, fine-grained access control is a pivotal security requirement to enhance usability and protect sensitive sensor information from unauthorized access. Recently, there have been proposed many schemes to adapt public key cryptosystems into sensor systems consisting of high-end sensor nodes in order to enforce security policy efficiently. However, the drawback of these approaches is that the complexity of computation increases linear to the expressiveness of the access policy. Key-policy attribute-based encryption is a promising cryptographic solution to enforce fine-grained access policies on the sensor data. However, the problem of applying it to distributed sensor networks introduces several challenges with regard to the attribute and user revocation. In this paper, we propose an access control scheme using KP-ABE with efficient attribute and user revocation capability for distributed sensor networks that are composed of high-end sensor devices. They can be achieved by the proxy encryption mechanism which takes advantage of attribute-based encryption and selective group key distribution. The analysis results indicate that the proposed scheme achieves efficient user access control while requiring the same computation overhead at each sensor as the previous schemes.", "In this paper, we propose two attribute-oriented authentication and transmission schemes for secure and privacy-preserving health information sharing in health social networks (HSNs). HSN users are tagged with formalized attributes. The attribute-oriented authentication scheme enables each HSN user to generate an attribute proof for itself, where its sensitive attributes are anonymized. By verifying provided attribute proof, other users are able to know what attributes an HSN user has. The attribute-oriented transmission scheme enables an HSN user to encrypt his her health information into a ciphertext bonded with a customized access policy. The access policy is defined by a target set of attributes. Only users who satisfy the access policy are able to decrypt the ciphertext. Through security analysis, we show that the proposed schemes can effectively resist various attacks including forgery attack, attribute-trace attack, eavesdropping attack, and collusion attack. Through extensive simulation studies, we demonstrate that both schemes can offer satisfactory performance in helping HSN users to share health information." ] }
1512.06578
2062903085
In e-healthcare record systems (EHRS), attribute-based encryption (ABE) appears as a natural way to achieve fine-grained access control on health records. Some proposals exploit key-policy ABE (KP-ABE) to protect privacy in such a way that all users are associated with specific access policies and only the ciphertexts matching the users' access policies can be decrypted. An issue with KP-ABE is that it requires an a priori formulation of access policies during key generation, which is not always practicable in EHRS because the policies to access health records are sometimes determined after key generation. In this paper, we revisit KP-ABE and propose a dynamic ABE paradigm, referred to as access policy redefinable ABE (APR-ABE). To address the above issue, APR-ABE allows users to redefine their access policies and delegate keys for the redefined ones; hence, a priori precise policies are no longer mandatory. We construct an APR-ABE scheme with short ciphertexts and prove its full security in the standard model under several static assumptions.
There are some works resolving delegation in different applications. To achieve both fine-grained access control and high performance for enterprise users, Wang et al. @cite_6 proposed a solution that combines hierarchical identity-based encryption with CP-ABE to allow a performance-expressivity tradeoff. In that scheme, various authorities rather than attributes are hierarchically organized in order to generate keys for users in their domains. Wan et al. @cite_4 extended ciphertext-policy attribute-set-based encryption with a hierarchical structure of users to achieve scalability and flexibility for access control in cloud computing systems. Li et al. @cite_1 enhanced ABE by organizing attributes in a tree-like structure to achieve delegation, which is similar to our arrangement of attributes; however, their delegation is still limited to increasingly restrictive access policies. Moreover, the security of the proposed scheme is only selective. Indeed, all these schemes are proposed to adapt ABE for specific applications, while our APR-ABE aims at permitting users to redefine their access policies and delegate secret keys in a way that does not need to be increasingly restrictive.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_6" ], "mid": [ "2020753934", "1993341076", "1998586673" ], "abstract": [ "Attribute-based encryption (ABE) has been envisioned as a promising cryptographic primitive for realizing secure and flexible access control. However, ABE is being criticized for its high scheme overhead as extensive pairing operations are usually required. In this paper, we focus on improving the efficiency of ABE by leveraging a previously overlooked fact, i.e., the often-found hierarchical relationships among the attributes that are inherent to many access control scenarios. As the first research effort along this direction, we coin the notion of hierarchical ABE (HABE), which can be viewed as the generalization of traditional ABE in the sense that both definitions are equal when all attributes are independent. We further give a concrete HABE construction considering a tree hierarchy among the attributes, which is provably secure. More importantly, our construction exhibits significant improvements over the traditional ABE when attribute hierarchies exist.", "Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Since this new computing technology requires users to entrust their valuable data to cloud providers, there have been increasing security and privacy concerns on outsourced data. Several schemes employing attribute-based encryption (ABE) have been proposed for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, in this paper, we propose hierarchical attribute-set-based encryption (HASBE) by extending ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. 
The proposed scheme not only achieves scalability due to its hierarchical structure, but also inherits flexibility and fine-grained access control in supporting compound attributes of ASBE. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more efficiently than existing schemes. We formally prove the security of HASBE based on the security of the ciphertext-policy attribute-based encryption (CP-ABE) scheme by Bethencourt et al. and analyze its performance and computational complexity. We implement our scheme and show that it is both efficient and flexible in dealing with access control for outsourced data in cloud computing with comprehensive experiments.", "Cloud computing, as an emerging computing paradigm, enables users to remotely store their data into a cloud so as to enjoy scalable services on-demand. Especially for small and medium-sized enterprises with limited budgets, they can achieve cost savings and productivity enhancements by using cloud-based services to manage projects, to make collaborations, and the like. However, allowing cloud service providers (CSPs), which are not in the same trusted domains as enterprise users, to take care of confidential data, may raise potential security and privacy issues. To keep the sensitive user data confidential against untrusted CSPs, a natural way is to apply cryptographic approaches, by disclosing decryption keys only to authorized users. However, when enterprise users outsource confidential data for sharing on cloud servers, the adopted encryption system should not only support fine-grained access control, but also provide high performance, full delegation, and scalability, so as to best serve the needs of accessing data anytime and anywhere, delegating within enterprises, and achieving a dynamic set of users. In this paper, we propose a scheme to help enterprises to efficiently share confidential data on cloud servers. 
We achieve this goal by first combining the hierarchical identity-based encryption (HIBE) system and the ciphertext-policy attribute-based encryption (CP-ABE) system, and then making a performance-expressivity tradeoff, finally applying proxy re-encryption and lazy re-encryption to our scheme." ] }
1512.06581
1591213347
Existing semantically secure public-key searchable encryption schemes take search time linear in the total number of ciphertexts. This makes retrieval from large-scale databases prohibitive. To alleviate this problem, this paper proposes searchable public-key ciphertexts with hidden structures (SPCHS) for keyword search as fast as possible without sacrificing semantic security of the encrypted keywords. In SPCHS, all keyword-searchable ciphertexts are structured by hidden relations, and with the search trapdoor corresponding to a keyword, the minimum information of the relations is disclosed to a search algorithm as the guidance to find all matching ciphertexts efficiently. We construct an SPCHS scheme from scratch in which the ciphertexts have a hidden star-like structure. We prove our scheme to be semantically secure in the random oracle (RO) model. The search complexity of our scheme depends on the actual number of ciphertexts containing the queried keyword, rather than the number of all ciphertexts. Finally, we present a generic SPCHS construction from anonymous identity-based encryption and collision-free full-identity malleable identity-based key encapsulation mechanism (IBKEM) with anonymity. We illustrate two collision-free full-identity malleable IBKEM instances, which are semantically secure and anonymous, respectively, in the RO and standard models. The latter instance enables us to construct an SPCHS scheme with semantic security in the standard model.
Search on encrypted data has been extensively investigated in recent years. From a cryptographic perspective, the existing works fall into two categories, i.e., symmetric searchable encryption @cite_44 and public-key searchable encryption.
{ "cite_N": [ "@cite_44" ], "mid": [ "2146828512" ], "abstract": [ "Searchable symmetric encryption (SSE) allows a party to outsource the storage of its data to another party (a server) in a private manner, while maintaining the ability to selectively search over it. This problem has been the focus of active research in recent years. In this paper we show two solutions to SSE that simultaneously enjoy the following properties: Both solutions are more efficient than all previous constant-round schemes. In particular, the work performed by the server per returned document is constant as opposed to linear in the size of the data. Both solutions enjoy stronger security guarantees than previous constant-round schemes. In fact, we point out subtle but serious problems with previous notions of security for SSE, and show how to design constructions which avoid these pitfalls. Further, our second solution also achieves what we call adaptive SSE security, where queries to the server can be chosen adaptively (by the adversary) during the execution of the search; this notion is both important in practice and has not been previously considered. Surprisingly, despite being more secure and more efficient, our SSE schemes are remarkably simple. We consider the simplicity of both solutions as an important step towards the deployment of SSE technologies. As an additional contribution, we also consider multi-user SSE. All prior work on SSE studied the setting where only the owner of the data is capable of submitting search queries. We consider the natural extension where an arbitrary group of parties other than the owner can submit search queries. We formally define SSE in the multi-user setting, and present an efficient construction that achieves better performance than simply using access control mechanisms." ] }
1512.06581
1591213347
Existing semantically secure public-key searchable encryption schemes take search time linear in the total number of ciphertexts. This makes retrieval from large-scale databases prohibitive. To alleviate this problem, this paper proposes searchable public-key ciphertexts with hidden structures (SPCHS) for keyword search as fast as possible without sacrificing semantic security of the encrypted keywords. In SPCHS, all keyword-searchable ciphertexts are structured by hidden relations, and with the search trapdoor corresponding to a keyword, the minimum information of the relations is disclosed to a search algorithm as the guidance to find all matching ciphertexts efficiently. We construct an SPCHS scheme from scratch in which the ciphertexts have a hidden star-like structure. We prove our scheme to be semantically secure in the random oracle (RO) model. The search complexity of our scheme depends on the actual number of ciphertexts containing the queried keyword, rather than the number of all ciphertexts. Finally, we present a generic SPCHS construction from anonymous identity-based encryption and collision-free full-identity malleable identity-based key encapsulation mechanism (IBKEM) with anonymity. We illustrate two collision-free full-identity malleable IBKEM instances, which are semantically secure and anonymous, respectively, in the RO and standard models. The latter instance enables us to construct an SPCHS scheme with semantic security in the standard model.
Following the seminal work on PEKS, Abdalla et al. @cite_24 fill some gaps w.r.t. consistency for PEKS and deal with the transformations among primitives related to PEKS. Some efforts have also been devoted to making PEKS versatile. The work of this kind includes conjunctive search @cite_17 @cite_18 @cite_27 @cite_26 @cite_20 @cite_39 , range search @cite_41 @cite_28 @cite_8 , subset search @cite_8 , time-scope search @cite_24 @cite_40 , similarity search @cite_31 , authorized search @cite_47 @cite_30 , equality test between heterogeneous ciphertexts @cite_4 , and fuzzy keyword search @cite_9 . In addition, @cite_37 proposed a PEKS scheme to protect the privacy of keyword search trapdoors.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_47", "@cite_4", "@cite_37", "@cite_8", "@cite_41", "@cite_28", "@cite_9", "@cite_39", "@cite_24", "@cite_27", "@cite_40", "@cite_31", "@cite_20", "@cite_17" ], "mid": [ "2122058555", "1512194687", "1516033050", "2122170871", "1581001589", "198181672", "1589843374", "1513535838", "2154448764", "2080574324", "2161214567", "2120976781", "1836365731", "1583184426", "1487324238", "2056046693", "1580750219" ], "abstract": [ "In a public key setting, Alice encrypts an email with the public key of Bob, so that only Bob will be able to learn the contents of the email. Consider a scenario where the computer of Alice is infected and unbeknown to Alice it also embeds a malware into the message. Bob's company, Carol, cannot scan his email for malicious content as it is encrypted so the burden is on Bob to do the scan. This is not efficient. We construct a mechanism that enables Bob to provide trapdoors to Carol such that Carol, given an encrypted data and a malware signature, is able to check whether the encrypted data contains the malware signature, without decrypting it. We refer to this mechanism as public-key encryption with delegated search (PKEDS). We formalize PKEDS and give a construction based on ElGamal public-key encryption (PKE). The proposed scheme has ciphertexts which are both searchable and decryptable. This property of the scheme is crucial since an entity can search the entire content of the message, in contrast to existing searchable public-key encryption schemes where the search is done only in the metadata part. We prove in the standard model that the scheme is ciphertext indistinguishable and trapdoor indistinguishable under the Symmetric External Diffie-Hellman (SXDH) assumption. We prove also the ciphertext one-wayness of the scheme under the modified Computational Diffie-Hellman (mCDH) assumption. 
We show that our PKEDS scheme can be used in different applications such as detecting encrypted malware and forwarding encrypted email.", "We study the setting in which a user stores encrypted documents (e.g. e-mails) on an untrusted server. In order to retrieve documents satisfying a certain search criterion, the user gives the server a capability that allows the server to identify exactly those documents. Work in this area has largely focused on search criteria consisting of a single keyword. If the user is actually interested in documents containing each of several keywords (conjunctive keyword search) the user must either give the server capabilities for each of the keywords individually and rely on an intersection calculation (by either the server or the user) to determine the correct set of documents, or alternatively, the user may store additional information on the server to facilitate such searches. Neither solution is desirable; the former enables the server to learn which documents match each individual keyword of the conjunctive search and the latter results in exponential storage if the user allows for searches on every set of keywords.", "We study the problem of a public key encryption with conjunctive keyword search (PECK). The keyword searchable encryption enables a user to outsource his data to the storage of an untrusted server and to have the ability to selectively search his data without leaking information. The PECK scheme provides the document search containing each of several keywords over a public key setting. First, we construct an efficient PECK scheme whose security is proven over a decisional linear Diffie-Hellman assumption in the random oracle model. In comparison with previous schemes, our scheme has the shortest ciphertext size and private key size, and requires a comparable computation overhead. Second, we discuss problems related to the security proof of previous schemes and show they cannot guarantee complete security. 
Finally, we introduce a new concept called a multi-user PECK scheme, which can achieve an efficient computation and communication overhead and effectively manage the storage in a server for a number of users.", "When outsourcing data to third-party servers, searchable encryption is an important enabling technique which simultaneously allows the data owner to keep his data in encrypted form and the third-party servers to search in the ciphertexts. Motivated by an encrypted email retrieval and archive scenario, we investigate asymmetric searchable encryption (ASE) schemes which support two special features, namely message recovery and flexible search authorization. With this new primitive, a data owner can keep his data encrypted under his public key and assign different search privileges to third-party servers. In the security model, we define the standard IND-CCA security against any outside attacker and define adapted ciphertext indistinguishability properties against inside attackers according to their functionalities. Moreover, we take into account the potential information leakage from trapdoors, and define two trapdoor security properties. Employing the bilinear property of pairings and a deliberately-designed double encryption technique, we present a provably secure instantiation of the primitive based on the DLIN and BDH assumptions in the random oracle model.", "We present a (probabilistic) public key encryption (PKE) scheme such that when being implemented in a bilinear group, anyone is able to check whether two ciphertexts are encryptions of the same message. Interestingly, bilinear map operations are not required in key generation, encryption or decryption procedures of the PKE scheme, but are only required when people want to do an equality test (on the encrypted messages) between two ciphertexts that may be generated using different public keys. 
We show that our PKE scheme can be used in different applications such as searchable encryption and partitioning encrypted data. Moreover, we show that when being implemented in a non-bilinear group, the security of our PKE scheme can be strengthened from One-Way CCA to a weak form of IND-CCA.", "Asymmetric searchable encryption allows searches to be carried over ciphertexts, through delegation, and by means of trapdoors issued by the owner of the data. Public Key Encryption with Keyword Search (PEKS) is a primitive with such functionality that provides delegation of exact-match searches. As it is important that ciphertexts preserve data privacy, it is also important that trapdoors do not expose the user’s search criteria. The difficulty of formalizing a security model for trapdoor privacy lies in the verification functionality, which gives the adversary the power of verifying if a trapdoor encodes a particular keyword. In this paper, we provide a broader view on what can be achieved regarding trapdoor privacy in asymmetric searchable encryption schemes, and bridge the gap between previous definitions, which give limited privacy guarantees in practice against search patterns. Since it is well-known that PEKS schemes can be trivially constructed from any Anonymous IBE scheme, we propose the security notion of Key Unlinkability for IBE, which leads to strong guarantees of trapdoor privacy in PEKS, and we construct a scheme that achieves this security notion.", "We construct public-key systems that support comparison queries (x ≥ a) on encrypted data as well as more general queries such as subset queries (x∈ S). Furthermore, these systems support arbitrary conjunctive queries (P1 ∧ ... ∧ Pl) without leaking information on individual conjuncts. 
We present a general framework for constructing and analyzing public-key systems supporting queries on encrypted data.", "We introduce the concept of Anonymous Multi-Attribute Encryption with Range Query and Conditional Decryption (AMERQCD). In AMERQCD, a plaintext is encrypted under a point in multidimensional space. To a computationally bounded adversary, the ciphertext hides both the plaintext and the point under which it is encrypted. In a range query, a master key owner releases the decryption key for an arbitrary hyper-rectangle in space, thus allowing decryption of ciphertexts previously encrypted under any point within the hyper-rectangle. However, a computationally bounded adversary cannot learn any information on ciphertexts outside the range covered by the decryption key (except the fact that they do not lie within this range). We give an efficient construction based on the Decision Bilinear Diffie-Hellman (D-BDH) and Decision Linear (D-Linear) assumption.", "We design an encryption scheme called multi-dimensional range query over encrypted data (MRQED), to address the privacy concerns related to the sharing of network audit logs and various other applications. Our scheme allows a network gateway to encrypt summaries of network flows before submitting them to an untrusted repository. When network intrusions are suspected, an authority can release a key to an auditor, allowing the auditor to decrypt flows whose attributes (e.g., source and destination addresses, port numbers, etc.) fall within specific ranges. However, the privacy of all irrelevant flows is still preserved. We formally define the security for MRQED and prove the security of our construction under the decision bilinear Diffie-Hellman and decision linear assumptions in certain bilinear groups. We study the practical performance of our construction in the context of network audit logs. 
Apart from network audit logs, our scheme also has interesting applications for financial audit logs, medical privacy, untrusted remote storage, etc. In particular, we show that MRQED implies a solution to its dual problem, which enables investors to trade stocks through a broker in a privacy-preserving manner.", "Public-key encryption with keyword search (PEKS) is a versatile tool. It allows a third party knowing the search trapdoor of a keyword to search encrypted documents containing that keyword without decrypting the documents or knowing the keyword. However, it is shown that the keyword will be compromised by a malicious third party under a keyword guess attack (KGA) if the keyword space is in a polynomial size. We address this problem with a keyword privacy enhanced variant of PEKS referred to as public-key encryption with fuzzy keyword search (PEFKS). In PEFKS, each keyword corresponds to an exact keyword search trapdoor and a fuzzy keyword search trapdoor. Two or more keywords share the same fuzzy keyword trapdoor. To search encrypted documents containing a specific keyword, only the fuzzy keyword search trapdoor is provided to the third party, i.e., the searcher. Thus, in PEFKS, a malicious searcher can no longer learn the exact keyword to be searched even if the keyword space is small. We propose a universal transformation which converts any anonymous identity-based encryption (IBE) scheme into a secure PEFKS scheme. Following the generic construction, we instantiate the first PEFKS scheme proven to be secure under KGA in the case that the keyword space is in a polynomial size.", "We study the problem of searching on data that is encrypted using a public key system. Consider user Bob who sends email to user Alice encrypted under Alice’s public key. An email gateway wants to test whether the email contains the keyword “urgent” so that it could route the email accordingly. 
Alice, on the other hand, does not wish to give the gateway the ability to decrypt all her messages. We define and construct a mechanism that enables Alice to provide a key to the gateway that enables the gateway to test whether the word “urgent” is a keyword in the email without learning anything else about the email. We refer to this mechanism as Public Key Encryption with keyword Search. As another example, consider a mail server that stores various messages publicly encrypted for Alice by others. Using our mechanism Alice can send the mail server a key that will enable the server to identify all messages containing some specific keyword, but learn nothing else. We define the concept of public key encryption with keyword search and give several constructions.", "We identify and fill some gaps with regard to consistency (the extent to which false positives are produced) for public-key encryption with keyword search (PEKS). We define computational and statistical relaxations of the existing notion of perfect consistency, show that the scheme of [7] is computationally consistent, and provide a new scheme that is statistically consistent. We also provide a transform of an anonymous IBE scheme to a secure PEKS scheme that, unlike the previous one, guarantees consistency. Finally we suggest three extensions of the basic notions considered here, namely anonymous HIBE, public-key encryption with temporary keyword search, and identity-based encryption with keyword search.", "We present two provably secure and efficient schemes for performing conjunctive keyword searches over symmetrically encrypted data. Our first scheme is based on Shamir Secret Sharing and provides the most efficient search technique in this context to date. Although the size of its trapdoors is linear in the number of documents being searched, we empirically show that this overhead remains reasonable in practice. 
Nonetheless, to address this limitation we provide an alternative based on bilinear pairings that yields constant size trapdoors. This latter construction is not only asymptotically more efficient than previous secure conjunctive keyword search schemes in the symmetric setting, but incurs significantly less storage overhead. Additionally, unlike most previous work, our constructions are proven secure in the standard model.", "In this paper we explore restricted delegation of searches on encrypted audit logs. We show how to limit the exposure of private information stored in the log during such a search and provide a technique to delegate searches on the log to an investigator. These delegated searches are limited to authorized keywords that pertain to specific time periods, and provide guarantees of completeness to the investigator. Moreover, we show that investigators can efficiently find all relevant records, and can authenticate retrieved records without interacting with the owner of the log. In addition, we provide an empirical evaluation of our techniques using encrypted logs consisting of approximately 27,000 records of IDS alerts collected over a span of a few months.", "In this paper, we consider the problem of predicate encryption and focus on the predicate for testing whether the Hamming distance between the attribute X of a data item and a target V is equal to (or less than) a threshold t, where X and V are of length m. Existing solutions either do not provide attribute protection or produce a big ciphertext of size O(2^m). For the equality version of the problem, we provide a scheme which is match-concealing (MC) secure and the sizes of the ciphertext and token are both O(m). For the inequality version of the problem, we give a practical scheme, also achieving MC security, which produces a ciphertext with size O(m^{t_max}) if the maximum value of t, t_max, is known in advance and is a constant. 
We also show how to update the ciphertext if the user wants to increase t_max without constructing the ciphertext from scratch.", "A keyword-searchable encryption scheme allows a user with a \"trapdoor\" for a keyword to efficiently retrieve some of the encrypted data containing the specific keyword from a remote server. The scheme for keyword-searchable encryption is considered one of the crucial building blocks that solve the security problems of privacy and data confidentiality in many settings, such as outsourced database systems and mail (or file) servers. However, most existing schemes support only a single keyword for searching, but do not allow for boolean combinations of keywords. This makes the use of such schemes impractical in real applications. To address this problem, we propose an efficient construction for conjunctive keyword-searchable encryption, in which the size of trapdoors is almost the same as that for searching a single keyword. Our construction is proven secure against adaptive chosen-keyword attacks in the random oracle model under the external co-Diffie-Hellman assumption. Compared to previous works, our construction has much better performance in terms of both computational and communication cost.", "In a public key encryption, we may want to enable someone to test whether something is a keyword in a given document without leaking anything else about the document. An email gateway, for example, may be expected to test whether the email contains a keyword “urgent” so that it could route the email accordingly, without leaking any content to the gateway. This mechanism was referred to as public key encryption with keyword search [4]. Similarly, a user may want to enable an email gateway to search keywords conjunctively, such as “urgent” email from “Bob” about “finance”, without leaking anything else about the email. We refer to this mechanism as public key encryption with conjunctive field keyword search. 
In this paper, we define the security model of this mechanism and propose two efficient schemes whose security is proved in the random oracle model." ] }
1512.06080
2398119258
The web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF data cube vocabulary (QB), the W3C standard to publish statistical data in RDF, presents several limitations to fully support the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations, allowing the implementation of the typical OLAP operations, such as rollup, slice, dice, and drill-across, using standard SPARQL queries. In this paper we introduce a formal data model where the main object is the data cube, and define OLAP operations using this model, independent of the underlying representation of the cube. We then show that a cube expressed using our model can be represented using the QB4OLAP vocabulary, and finally we provide a SPARQL implementation of OLAP operations over data cubes in QB4OLAP.
Kämpgen and Harth @cite_11 study the extraction of statistical data published using the QB vocabulary into an MD database. The authors propose a mapping between the concepts in QB and an MD data model, and implement these mappings via SPARQL queries. There are four main phases in the proposed methodology: (1) Extraction, where the user defines relevant data sets which are retrieved from the web and stored in a local triple store. Then, SPARQL queries are performed over this triple store to retrieve metadata on the schema, as well as data instances; (2) Creation of a relational representation of the MD data model, using the metadata retrieved in the previous step, and the population of this model with the retrieved data; (3) Creation of an MD model to allow OLAP operations over the underlying relational representation. Such a model is expressed using XML for Analysis (XMLA) http://xmlforanalysis.com , which allows the serialization of MD models and is implemented by several OLAP clients and servers; (4) Specification of queries over the DW, using OLAP client applications.
{ "cite_N": [ "@cite_11" ], "mid": [ "2062345051" ], "abstract": [ "The amount of available Linked Data on the Web is increasing, and data providers start to publish statistical datasets that comprise numerical data. Such statistical datasets differ significantly from the currently predominant network-style data published on the Web. We explore the possibility of integrating statistical data from multiple Linked Data sources. We provide a mapping from statistical Linked Data into the Multidimensional Model used in data warehouses. We use an extract-transform-load (ETL) pipeline to convert statistical Linked Data into a format suitable for loading into an open-source OLAP system, and thus demonstrate how standard OLAP infrastructure can be used for elaborate querying and visualisation of integrated statistical Linked Data. We discuss lessons learned from three experiments and identify areas which require future work to ultimately arrive at a well-interlinked set of statistical data from multiple sources which is processable with standard OLAP systems." ] }
1512.06080
2398119258
The web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF data cube vocabulary (QB), the W3C standard to publish statistical data in RDF, presents several limitations to fully support the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations, allowing to implement the typical OLAP operations, such as rollup, slice, dice, and drill-across using standard SPARQL queries. In this paper we introduce a formal data model where the main object is the data cube, and define OLAP operations using this model, independent of the underlying representation of the cube. We show then that a cube expressed using our model can be represented using the QB4OLAP vocabulary, and finally we provide a SPARQL implementation of OLAP operations over data cubes in QB4OLAP.
The second line of research tries to overcome the drawbacks of the first one, exploring data models and tools that allow publishing and performing OLAP-like analysis directly over SW MD data. Terms like self-service BI @cite_1 and situational BI @cite_0 refer to the capability of incorporating situational data into the decision process with little or no intervention by programmers or designers. The web, and in particular the SW, is considered as a large source of data that could enrich decision processes. Abelló et al. @cite_1 present a framework to support self-service BI, based on the notion of fusion cubes, i.e., MD cubes that can be dynamically extended both in their schema and their instances, and in which data and metadata can be associated with quality and provenance annotations.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "1445821577", "2140690054" ], "abstract": [ "Traditional business intelligence has focused on creating dimensional models and data warehouses, where after a high modeling and creation cost structurally similar queries are processed on a regular basis. So called \"ad-hoc\" queries aggregate data from one or several dimensional models, but fail to incorporate other external information that is not considered in the pre-defined data model. We focus on a different kind of business intelligence, which spontaneously correlates data from a company’s data warehouse with \"external\" information sources that may come from the corporate intranet, are acquired from some external vendor, or are derived from the internet. Such situational applications are usually short-lived programs created for a small group of users with a specific business need. We will showcase the state-of-the-art for situational applications as well as the impact of Web 2.0 for these applications. We will also present examples and research challenges that the information management research community needs to address in order to arrive at a platform for Situational Business Intelligence.", "Self-service business intelligence is about enabling non-expert users to make well-informed decisions by enriching the decision process with situational data, i.e., data that have a narrow focus on a specific business problem and, typically, a short lifespan for a small group of users. Often, these data are not owned and controlled by the decision maker; their search, extraction, integration, and storage for reuse or sharing should be accomplished by decision makers without any intervention by designers or programmers. 
The goal of this paper is to present the framework we envision to support self-service business intelligence and the related research challenges; the underlying core idea is the notion of fusion cubes, i.e., multidimensional cubes that can be dynamically extended both in their schema and their instances, and in which situational data and metadata are associated with quality and provenance annotations." ] }
1512.06080
2398119258
The web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF data cube vocabulary (QB), the W3C standard to publish statistical data in RDF, presents several limitations to fully support the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations, allowing to implement the typical OLAP operations, such as rollup, slice, dice, and drill-across using standard SPARQL queries. In this paper we introduce a formal data model where the main object is the data cube, and define OLAP operations using this model, independent of the underlying representation of the cube. We show then that a cube expressed using our model can be represented using the QB4OLAP vocabulary, and finally we provide a SPARQL implementation of OLAP operations over data cubes in QB4OLAP.
In @cite_4 the authors present a framework for Exploratory OLAP over Linked Open Data sources, where the MD schema of the data cube is expressed in QB4OLAP and VoID. Based on this MD schema the system is able to query data sources, extract and aggregate data, and build an OLAP cube. The MD information retrieved from external sources is also stored using QB4OLAP.
{ "cite_N": [ "@cite_4" ], "mid": [ "2134412814" ], "abstract": [ "Business Intelligence (BI) tools provide fundamental support for analyzing large volumes of information. Data Warehouses (DW) and Online Analytical Processing (OLAP) tools are used to store and analyze data. Nowadays more and more information is available on the Web in the form of Resource Description Framework (RDF), and BI tools have a huge potential of achieving better results by integrating real-time data from web sources into the analysis process. In this paper, we describe a framework for so-called exploratory OLAP over RDF sources. We propose a system that uses a multidimensional schema of the OLAP cube expressed in RDF vocabularies. Based on this information the system is able to query data sources, extract and aggregate data, and build a cube. We also propose a computer-aided process for discovering previously unknown data sources and building a multidimensional schema of the cube. We present a use case to demonstrate the applicability of the approach." ] }
1512.06080
2398119258
The web is changing the way in which data warehouses are designed, used, and queried. With the advent of initiatives such as Open Data and Open Government, organizations want to share their multidimensional data cubes and make them available to be queried online. The RDF data cube vocabulary (QB), the W3C standard to publish statistical data in RDF, presents several limitations to fully support the multidimensional model. The QB4OLAP vocabulary extends QB to overcome these limitations, allowing to implement the typical OLAP operations, such as rollup, slice, dice, and drill-across using standard SPARQL queries. In this paper we introduce a formal data model where the main object is the data cube, and define OLAP operations using this model, independent of the underlying representation of the cube. We show then that a cube expressed using our model can be represented using the QB4OLAP vocabulary, and finally we provide a SPARQL implementation of OLAP operations over data cubes in QB4OLAP.
For an exhaustive study of the possibilities of using SW technologies in OLAP, we refer the reader to the survey by Abelló et al. @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "2139452609" ], "abstract": [ "This paper describes the convergence of some of the most influential technologies in the last few years, namely data warehousing (DW), on-line analytical processing (OLAP), and the Semantic Web (SW). OLAP is used by enterprises to derive important business-critical knowledge from data inside the company. However, the most interesting OLAP queries can no longer be answered on internal data alone, external data must also be discovered (most often on the web), acquired, integrated, and (analytically) queried, resulting in a new type of OLAP, exploratory OLAP . When using external data, an important issue is knowing the precise semantics of the data. Here, SW technologies come to the rescue, as they allow semantics (ranging from very simple to very complex) to be specified for web-available resources. SW technologies do not only support capturing the “passive” semantics, but also support active inference and reasoning on the data. The paper first presents a characterization of DW OLAP environments, followed by an introduction to the relevant SW foundation concepts. Then, it describes the relationship of multidimensional (MD) models and SW technologies, including the relationship between MD models and SW formalisms. Next, the paper goes on to survey the use of SW technologies for data modeling and data provisioning, including semantic data annotation and semantic-aware extract, transform, and load (ETL) processes. Finally, all the findings are discussed and a number of directions for future research are outlined, including SW support for intelligent MD querying, using SW technologies for providing context to data warehouses, and scalability issues." ] }
1512.06110
2212622123
Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation. We model the problem of inflection generation as a character sequence to sequence learning problem and present a variant of the neural encoder-decoder model for solving it. Our model is language independent and can be trained in both supervised and semi-supervised settings. We evaluate our system on seven datasets of morphologically rich languages and achieve either better or comparable results to existing state-of-the-art models of inflection generation.
Similar to the encoder in our framework, prior work extracts sub-word features from a word using a forward-backward LSTM, and uses them in a traditional weighted FST to generate inflected forms. Neural encoder-decoder models of string transduction have also been used for sub-word level transformations like grapheme-to-phoneme conversion @cite_32 @cite_43 .
{ "cite_N": [ "@cite_43", "@cite_32" ], "mid": [ "1593247906", "1916501714" ], "abstract": [ "Grapheme-to-phoneme (G2P) models are key components in speech recognition and text-to-speech systems as they describe how words are pronounced. We propose a G2P model based on a Long Short-Term Memory (LSTM) recurrent neural network (RNN). In contrast to traditional joint-sequence based G2P approaches, LSTMs have the flexibility of taking into consideration the full context of graphemes and transform the problem from a series of grapheme-to-phoneme conversions to a word-to-pronunciation conversion. Training joint-sequence based G2P require explicit grapheme-to-phoneme alignments which are not straightforward since graphemes and phonemes don't correspond one-to-one. The LSTM based approach forgoes the need for such explicit alignments. We experiment with unidirectional LSTM (ULSTM) with different kinds of output delays and deep bidirectional LSTM (DBLSTM) with a connectionist temporal classification (CTC) layer. The DBLSTM-CTC model achieves a word error rate (WER) of 25.8 on the public CMU dataset for US English. Combining the DBLSTM-CTC model with a joint n-gram model results in a WER of 21.3 , which is a 9 relative improvement compared to the previous best WER of 23.4 from a hybrid system.", "Sequence-to-sequence translation methods based on generation with a side-conditioned language model have recently shown promising results in several tasks. In machine translation, models conditioned on source side words have been used to produce target-language text, and in image captioning, models conditioned images have been used to generate caption text. Past work with this approach has focused on large vocabulary tasks, and measured quality in terms of BLEU. In this paper, we explore the applicability of such models to the qualitatively different grapheme-to-phoneme task. 
Here, the input and output side vocabularies are small, plain n-gram models do well, and credit is only given when the output is exactly correct. We find that the simple side-conditioned generation approach is able to rival the state-of-the-art, and we are able to significantly advance the stat-of-the-art with bi-directional long short-term memory (LSTM) neural networks that use the same alignment information that is used in conventional approaches." ] }
1512.06110
2212622123
Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation. We model the problem of inflection generation as a character sequence to sequence learning problem and present a variant of the neural encoder-decoder model for solving it. Our model is language independent and can be trained in both supervised and semi-supervised settings. We evaluate our system on seven datasets of morphologically rich languages and achieve either better or comparable results to existing state-of-the-art models of inflection generation.
Generation of inflectional morphology has been particularly useful in statistical machine translation, both in translation from morphologically rich languages @cite_20 , and into morphologically rich languages @cite_19 @cite_29 @cite_34 @cite_31 . Modeling the morphological structure of a word has also been shown to improve the quality of word clusters @cite_0 and word vector representations @cite_50 .
{ "cite_N": [ "@cite_29", "@cite_0", "@cite_19", "@cite_50", "@cite_31", "@cite_34", "@cite_20" ], "mid": [ "2117642127", "2018789714", "2136094405", "", "", "2152249239", "2170464899" ], "abstract": [ "We improve the quality of statistical machine translation (SMT) by applying models that predict word forms from their stems using extensive morphological and syntactic information from both the source and target languages. Our inflection generation models are trained independently of the SMT system. We investigate different ways of combining the inflection prediction component with the SMT system by training the base MT system on fully inflected forms or on word stems. We applied our inflection generation models in translating English into two morphologically complex languages, Russian and Arabic, and show that our model improves the quality of SMT over both phrasal and syntax-based SMT systems according to BLEU and human judgements.", "In this paper we discuss algorithms for clustering words into classes from unlabelled text using unsupervised algorithms, based on distributional and morphological information. We show how the use of morphological information can improve the performance on rare words, and that this is robust across a wide range of languages.", "We present a novel method for predicting inflected word forms for generating morphologically rich languages in machine translation. We utilize a rich set of syntactic and morphological knowledge sources from both source and target sentences in a probabilistic model, and evaluate their contribution in generating Russian and Arabic sentences. Our results show that the proposed model substantially outperforms the commonly used baseline of a trigram target language model; in particular, the use of morphological and syntactic features leads to large gains in prediction accuracy. 
We also show that the proposed method is effective with a relatively small amount of data.", "", "", "This paper extends the training and tuning regime for phrase-based statistical machine translation to obtain fluent translations into morphologically complex languages (we build an English to Finnish translation system). Our methods use unsupervised morphology induction. Unlike previous work we focus on morphologically productive phrase pairs -- our decoder can combine morphemes across phrase boundaries. Morphemes in the target language may not have a corresponding morpheme or word in the source language. Therefore, we propose a novel combination of post-processing morphology prediction with morpheme-based translation. We show, using both automatic evaluation scores and linguistically motivated analyses of the output, that our methods outperform previously proposed ones and provide the best known results on the English-Finnish Europarl translation task. Our methods are mostly language independent, so they should improve translation into other target languages with complex morphology.", "In statistical machine translation, estimating word-to-word alignment probabilities for the translation model can be difficult due to the problem of sparse data: most words in a given corpus occur at most a handful of times. With a highly inflected language such as Czech, this problem can be particularly severe. In addition, much of the morphological variation seen in Czech words is not reflected in either the morphology or syntax of a language like English. In this work, we show that using morphological analysis to modify the Czech input can improve a Czech-English machine translation system. We investigate several different methods of incorporating morphological information, and show that a system that combines these methods yields the best results. Our final system achieves a BLEU score of .333, as compared to .270 for the baseline word-to-word system." ] }
1512.06110
2212622123
Morphological inflection generation is the task of generating the inflected form of a given lemma corresponding to a particular linguistic transformation. We model the problem of inflection generation as a character sequence to sequence learning problem and present a variant of the neural encoder-decoder model for solving it. Our model is language independent and can be trained in both supervised and semi-supervised settings. We evaluate our system on seven datasets of morphologically rich languages and achieve either better or comparable results to existing state-of-the-art models of inflection generation.
Inflection generation is complementary to the task of morphological and phonological segmentation, where the existing word form needs to be segmented to obtain meaningful sub-word units @cite_42 @cite_3 @cite_54 @cite_35 @cite_2 @cite_44 . An additional line of work that benefits from implicit modeling of morphology is neural character-based natural language processing, e.g., part-of-speech tagging @cite_33 @cite_5 and dependency parsing @cite_21 . These models have been successful when applied to morphologically rich languages, as they are able to capture word formation patterns.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_54", "@cite_42", "@cite_21", "@cite_3", "@cite_44", "@cite_2", "@cite_5" ], "mid": [ "1839584883", "2101609803", "1975638594", "157090039", "2951336364", "", "2461808544", "1908676432", "2949563612" ], "abstract": [ "Most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. In contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words. We model word formation in terms of morphological chains, from base words to the observed words, breaking the chains into parent-child relations. We use log-linear models with morpheme and word-level features to predict possible parents, including their modifications, for each word. The limited set of candidate parents for each word render contrastive estimation feasible. Our model consistently matches or outperforms five state-of-the-art systems on Arabic, English and Turkish.", "Distributed word representations have recently been proven to be an invaluable resource for NLP. These representations are normally learned using neural networks and capture syntactic and semantic information about words. Information about word morphology and shape is normally ignored when learning word representations. However, for tasks like part-of-speech tagging, intra-word information is extremely useful, specially when dealing with morphologically rich languages. In this paper, we propose a deep neural network that learns character-level representation of words and associate them with usual word representations to perform POS tagging. 
Using the proposed approach, while avoiding the use of any handcrafted feature, we produce state-of-the-art POS taggers for two languages: English, with 97.32 accuracy on the Penn Treebank WSJ corpus; and Portuguese, with 97.47 accuracy on the Mac-Morpho corpus, where the latter represents an error reduction of 12.2 on the best previous known result.", "Morphological segmentation breaks words into morphemes (the basic semantic units). It is a key component for natural language processing systems. Unsupervised morphological segmentation is attractive, because in every language there are virtually unlimited supplies of text, but very few labeled resources. However, most existing model-based systems for unsupervised morphological segmentation use directed generative models, making it difficult to leverage arbitrary overlapping features that are potentially helpful to learning. In this paper, we present the first log-linear model for unsupervised morphological segmentation. Our model uses overlapping features such as morphemes and their contexts, and incorporates exponential priors inspired by the minimum description length (MDL) principle. We present efficient algorithms for learning and inference by combining contrastive estimation with sampling. Our system, based on monolingual features only, outperforms a state-of-the-art system by a large margin, even when the latter uses bilingual information such as phrasal alignment and phonetic correspondence. On the Arabic Penn Treebank, our system reduces F1 error by 11 compared to Morfessor.", "", "We present extensions to a continuous-state dependency parsing method that makes it applicable to morphologically rich languages. 
Starting with a high-performance transition-based parser that uses long short-term memory (LSTM) recurrent neural networks to learn representations of the parser state, we replace lookup-based word representations with representations constructed from the orthographic representations of the words, also using LSTMs. This allows statistical sharing across word forms that are similar on the surface. Experiments for morphologically rich languages show that the parsing model benefits from incorporating the character-based encodings of words.", "", "We present a model of morphological segmentation that jointly learns to segment and restore orthographic changes, e.g., funniest7! fun-y-est. We term this form of analysis canonical segmentation and contrast it with the traditional surface segmentation, which segments a surface form into a sequence of substrings, e.g., funniest7! funn-i-est. We derive an importance sampling algorithm for approximate inference in the model and report experimental results on English, German and Indonesian.", "The observed pronunciations or spellings of words are often explained as arising from the \"underlying forms\" of their morphemes. These forms are latent strings that linguists try to reconstruct by hand. We propose to reconstruct them automatically at scale, enabling generalization to new words. Given some surface word types of a concatenative language along with the abstract morpheme sequences that they express, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underlying forms to a surface form. Our technique involves loopy belief propagation in a natural directed graphical model whose variables are unknown strings and whose conditional distributions are encoded as finite-state machines with trainable weights. 
We define training and evaluation paradigms for the task of surface word prediction, and report results on subsets of 7 languages.", "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our \"composed\" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish)." ] }
1512.06098
2473658253
We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems.
In @cite_5 we propose an expectation propagation method for diffusion process models where the prior process is Gaussian--Markov. In this paper, we extend this method to models with non-linear prior processes. Here we use expectation propagation only to approximate the likelihood terms. We avoid approximating the prior process by using moment-closure approximations on the process resulting from the prior and the likelihood approximations. When we choose a Gaussian--Markov prior process, the method proposed in this paper is identical to the one proposed in @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2117804315" ], "abstract": [ "We propose an approximate inference algorithm for continuous time Gaussian Markov process models with both discrete and continuous time likelihoods. We show that the continuous time limit of the expectation propagation algorithm exists and results in a hybrid fixed point iteration consisting of (1) expectation propagation updates for discrete time terms and (2) variational updates for the continuous time term. We introduce post-inference corrections methods that improve on the marginals of the approximation. This approach extends the classical Kalman-Bucy smoothing procedure to non-Gaussian observations, enabling continuous-time inference in a variety of models, including spiking neuronal models (state-space models with point process observations) and box likelihood models. Experimental results on real and simulated data demonstrate high distributional accuracy and significant computational savings compared to discrete-time approaches in a neural application." ] }
1512.06098
2473658253
We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems.
In @cite_10 the authors present a variational approach to approximate non-linear processes with time-only dependent diffusion terms by Ornstein-Uhlenbeck Gaussian-Markov processes. To our knowledge the extension of the approach in @cite_10 to prior processes with state-dependent diffusion terms is not straightforward, since a Gaussian-Markov approximation to the posterior process would lead to an ill-defined variational objective. The approach presented in this paper provides a convenient way to avoid this problem. We only obtain approximations of the posterior marginals instead of a process approximation; however, we can address inference problems where the diffusion terms are state dependent. In recent work, @cite_9 proposed an alternative variational approach based on an approximating process with fixed marginal laws. This extends the Gaussian approximation of @cite_10 to cater for cases where Gaussian marginals are not appropriate, e.g. in stochastic reaction networks where concentrations are constrained to be positive. The constraint on the marginals, however, considerably limits the flexibility of their algorithm and requires a considerable amount of user input; furthermore, it is unclear how accurate the approximation is in general.
{ "cite_N": [ "@cite_9", "@cite_10" ], "mid": [ "2962855231", "1607456919" ], "abstract": [ "We consider a hidden Markov model, where the signal process, given by a diffusion, is only indirectly observed through some noisy measurements. The article develops a variational method for approximating the hidden states of the signal process given the full set of observations. This, in particular, leads to systematic approximations of the smoothing densities of the signal process. The paper then demonstrates how an efficient inference scheme, based on this variational approach to the approximation of the hidden states, can be designed to estimate the unknown parameters of stochastic differential equations. Two examples at the end illustrate the efficacy and the accuracy of the presented method.", "Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, of which the exact solution is known and can be compared to, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable and that the results are very promising as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes." ] }
1512.06098
2473658253
We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems.
@cite_2 and @cite_4 propose a continuous time extension of the popular unscented transformation to obtain Gaussian state space approximations in SDE models with time-only dependent diffusion terms and both non-linear non-Gaussian discrete and continuous time observations. The authors compare these approaches to the variational method discussed above, which they then use to improve on their smoothing estimates. In recent work, @cite_12 present a mean-field variational approximation where they approximate the posterior process with a set of independent univariate Gaussian processes (factorised approximation). The considered model has polynomial drift terms and state-independent diffusion terms, and the observations are at discrete time-points. Due to a clever parameterisation (piecewise polynomials) of the mean and the variance function of the variational approximation, the dimensionality of the state can scale to thousands.
{ "cite_N": [ "@cite_4", "@cite_12", "@cite_2" ], "mid": [ "2164770369", "1979555530", "2103777996" ], "abstract": [ "This paper is concerned with Bayesian optimal filtering and smoothing of non-linear continuous-discrete state space models, where the state dynamics are modeled with non-linear Ito-type stochastic differential equations, and measurements are obtained at discrete time instants from a non-linear measurement model with Gaussian noise. We first show how the recently developed sigma-point approximations as well as the multi-dimensional Gauss-Hermite quadrature and cubature approximations can be applied to classical continuous-discrete Gaussian filtering. We then derive two types of new Gaussian approximation based smoothers for continuous-discrete models and apply the numerical methods to the smoothers. We also show how the latter smoother can be efficiently implemented by including one additional cross-covariance differential equation to the filter prediction step. The performance of the methods is tested in a simulated application.", "This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems which can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. A restriction of the moment equations to piecewise polynomial functions, over time, dramatically reduces the complexity of approximate inference for stochastic differential equation models and makes it comparable to that of discrete time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000 dimensional state vectors and compares the results empirically with various well-known inference methodologies.", "This article considers the application of the unscented transformation to approximate fixed-interval optimal smoothing of continuous-time non-linear stochastic dynamic systems. The proposed methodology can be applied to systems, where the dynamics can be modeled with non-linear stochastic differential equations and the noise corrupted measurements are obtained continuously or at discrete times. The smoothing algorithm is based on computing the continuous-time limit of the recently proposed unscented Rauch-Tung-Striebel smoother, which is an approximate optimal smoothing algorithm for discrete-time stochastic dynamic systems." ] }
1512.05685
2261249247
Deciding which vocabulary terms to use when modeling data as Linked Open Data (LOD) is far from trivial. Choosing too general vocabulary terms, or terms from vocabularies that are not used by other LOD datasets, is likely to lead to a data representation, which will be harder to understand by humans and to be consumed by Linked data applications. In this technical report, we propose TermPicker: a novel approach for vocabulary reuse by recommending RDF types and properties based on exploiting the information on how other data providers on the LOD cloud use RDF types and properties to describe their data. To this end, we introduce the notion of so-called schema-level patterns (SLPs). They capture how sets of RDF types are connected via sets of properties within some data collection, e.g., within a dataset on the LOD cloud. TermPicker uses such SLPs and generates a ranked list of vocabulary terms for reuse. The lists of recommended terms are ordered by a ranking model which is computed using the machine learning approach Learning To Rank (L2R). TermPicker is evaluated based on the recommendation quality that is measured using the Mean Average Precision (MAP) and the Mean Reciprocal Rank at the first five positions (MRR@5). Our results illustrate an improvement of the recommendation quality by 29% - 36% when using SLPs compared to the beforehand investigated baselines of recommending solely popular vocabulary terms or terms from the same vocabulary. The overall best results are achieved using SLPs in conjunction with the Learning To Rank algorithm Random Forests.
@cite_11 propose another approach for searching ontologies from different domains. When searching for ontologies of a particular domain, a collection of terms that represent the given domain is retrieved and used to expand the user query. This is especially helpful when starting to choose vocabulary terms for reuse from scratch.
{ "cite_N": [ "@cite_11" ], "mid": [ "2119788152" ], "abstract": [ "As more ontologies become publicly available, finding the \"right\" ontologies becomes much harder. In this paper, we address the problem of ontology search: finding a collection of ontologies from an ontology repository that are relevant to the user's query. In particular, we look at the case when users search for ontologies relevant to a particular topic (e.g., an ontology about anatomy). Ontologies that are most relevant to such query often do not have the query term in the names of their concepts (e.g., the Foundational Model of Anatomy ontology does not have the term \"anatomy\" in any of its concepts' names). Thus, we present a new ontology-search technique that helps users in these types of searches. When looking for ontologies on a particular topic (e.g., anatomy), we retrieve from the Web a collection of terms that represent the given domain (e.g., terms such as body, brain, skin, etc. for anatomy). We then use these terms to expand the user query. We evaluate our algorithm on queries for topics in the biomedical domain against a repository of biomedical ontologies. We use the results obtained from experts in the biomedical-ontology domain as the gold standard. Our experiments demonstrate that using our method for query expansion improves retrieval results by a 113%, compared to the tools that search only for the user query terms and consider only class and property names (like Swoogle). We show a 43% improvement for the case where not only class and property names but also property values are taken into account." ] }
1512.05685
2261249247
Deciding which vocabulary terms to use when modeling data as Linked Open Data (LOD) is far from trivial. Choosing too general vocabulary terms, or terms from vocabularies that are not used by other LOD datasets, is likely to lead to a data representation, which will be harder to understand by humans and to be consumed by Linked data applications. In this technical report, we propose TermPicker: a novel approach for vocabulary reuse by recommending RDF types and properties based on exploiting the information on how other data providers on the LOD cloud use RDF types and properties to describe their data. To this end, we introduce the notion of so-called schema-level patterns (SLPs). They capture how sets of RDF types are connected via sets of properties within some data collection, e.g., within a dataset on the LOD cloud. TermPicker uses such SLPs and generates a ranked list of vocabulary terms for reuse. The lists of recommended terms are ordered by a ranking model which is computed using the machine learning approach Learning To Rank (L2R). TermPicker is evaluated based on the recommendation quality that is measured using the Mean Average Precision (MAP) and the Mean Reciprocal Rank at the first five positions (MRR@5). Our results illustrate an improvement of the recommendation quality by 29% - 36% when using SLPs compared to the beforehand investigated baselines of recommending solely popular vocabulary terms or terms from the same vocabulary. The overall best results are achieved using SLPs in conjunction with the Learning To Rank algorithm Random Forests.
Again, the input for these recommendation services is a single string or a set of strings specifying a vocabulary term or a domain of interest. Whereas these services provide recommendations based on string analysis, they do not exploit any structural information on how vocabulary terms are connected to each other. In contrast, Falcons' Ontology Search @cite_26 provides the engineer with such information. Compared to traditional ontology matching approaches, which align ontologies based on , the authors of Falcons' Ontology Search use different kinds of relatedness in order to identify which vocabulary terms might express similar semantics. However, it is mainly designed to establish a general relatedness between vocabularies, specifying that different vocabularies contain terms that describe similar data. Thus, it does not investigate how data providers on the LOD cloud use vocabulary terms to describe their data and individual relations, as is done by TermPicker.
{ "cite_N": [ "@cite_26" ], "mid": [ "110961551" ], "abstract": [ "When thousands of vocabularies having been published on the SemanticWeb by various authorities, a question arises as to how they are related to each other. Existing work has mainly analyzed their similarity. In this paper, we inspect the more general notion of relatedness, and characterize it from four angles: well-defined semantic relatedness, lexical similarity in contents, closeness in expressivity and distributional relatedness. We present an empirical study of these measures on a large, real data set containing 2,996 vocabularies, and 15 million RDF documents that use them. Then, we propose to apply vocabulary relatedness to the problem of post-selection vocabulary recommendation. We implement such a recommender service as part of a vocabulary search engine, and test its effectiveness against a handcrafted gold standard." ] }
1512.05246
2218408410
Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
Despite the long history of deep neural networks in computer vision @cite_14 , the modern incarnation of "deep learning" is a relatively recent phenomenon that began with empirical success in the task of image recognition @cite_0 on the ImageNet dataset @cite_18 . Since then, tactful architecture modifications have yielded a steady stream of further improvements @cite_10 @cite_21 , even surpassing human performance @cite_3 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_21", "@cite_3", "@cite_0", "@cite_10" ], "mid": [ "2117539524", "2310919327", "2097117768", "1677182931", "2163605009", "1849277567" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "", "We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset.", "We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1512.05246
2218408410
Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
In addition to general classification of arbitrary images, deep learning has also made a significant impact on fine-grained recognition within constrained domains @cite_19 @cite_23 @cite_22 . In these cases, deep neural networks are trained (often alongside additional annotations or segmentations of parts) to recognize subtle differences between similar categories, e.g. bird species. However, these methods are often limited by the availability of training data as they typically require expert annotations for ground truth labels. Some approaches have alleviated this problem by pre-training on large collections of general images and then fine-tuning on smaller, domain-specific datasets @cite_22 . However, learning separate models for many different groups of categories would be inefficient.
{ "cite_N": [ "@cite_19", "@cite_22", "@cite_23" ], "mid": [ "1616462885", "2104657103", "1898560071" ], "abstract": [ "We propose an architecture for fine-grained visual categorization that approaches expert human performance in the classification of bird species. Our architecture first computes an estimate of the object's pose; this is used to compute local image features which are, in turn, used for classification. The features are computed by applying deep convolutional nets to image patches that are located and normalized by the pose. We perform an empirical study of a number of pose normalization schemes, including an investigation of higher order geometric warping functions. We propose a novel graph-based clustering algorithm for learning a compact pose normalization space. We perform a detailed investigation of state-of-the-art deep convolutional feature implementations and fine-tuning feature learning for fine-grained classification. We observe that a model that integrates lower-level feature layers with pose-normalized extraction routines and higher-level feature layers with unaligned image features works best. Our experiments advance state-of-the-art performance on bird species recognition, with a large improvement of correct classification rates over previous methods (75% vs. 55-65%).", "We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using outer product at each location of the image and pooled to obtain an image descriptor. This architecture can model local pairwise feature interactions in a translationally invariant manner which is particularly useful for fine-grained categorization. It also generalizes various orderless texture descriptors such as the Fisher vector, VLAD and O2P. We present experiments with bilinear models where the feature extractors are based on convolutional neural networks. The bilinear form simplifies gradient computation and allows end-to-end training of both networks using image labels only. Using networks initialized from the ImageNet dataset followed by domain specific fine-tuning we obtain 84.1% accuracy on the CUB-200-2011 dataset requiring only category labels at training time. We present experiments and visualizations that analyze the effects of fine-tuning and the choice of two networks on the speed and accuracy of the models. Results show that the architecture compares favorably to the existing state of the art on a number of fine-grained datasets while being substantially simpler and easier to train. Moreover, our most accurate model is fairly efficient running at 8 frames/sec on a NVIDIA Tesla K40 GPU. The source code for the complete system will be made available at http://vis-www.cs.umass.edu/bcnn.", "Scaling up fine-grained recognition to all domains of fine-grained objects is a challenge the computer vision community will need to face in order to realize its goal of recognizing all object categories. Current state-of-the-art techniques rely heavily upon the use of keypoint or part annotations, but scaling up to hundreds or thousands of domains renders this annotation cost-prohibitive for all but the most important categories. In this work we propose a method for fine-grained recognition that uses no part annotations. Our method is based on generating parts using co-segmentation and alignment, which we combine in a discriminative mixture. Experimental results show its efficacy, demonstrating state-of-the-art results even when compared to methods that use part annotations during training." ] }
1512.05246
2218408410
Most deep architectures for image classification–even those that are trained to classify a large number of diverse categories–learn shared image representations with a single model. Intuitively, however, categories that are more similar should share more information than those that are very different. While hierarchical deep networks address this problem by learning separate features for subsets of related categories, current implementations require simplified models using fixed architectures specified via heuristic clustering methods. Instead, we propose Blockout, a method for regularization and model selection that simultaneously learns both the model architecture and parameters. A generalization of Dropout, our approach gives a novel parametrization of hierarchical architectures that allows for structure learning via back-propagation. To demonstrate its utility, we evaluate Blockout on the CIFAR and ImageNet datasets, demonstrating improved classification accuracy, better regularization performance, faster training, and the clear emergence of hierarchical network structures.
Attempts have also been made to incorporate information from a known hierarchy to improve prediction performance without requiring architecture changes. For example, @cite_20 replaced the flat softmax classification layer with a probabilistic graphical model that respects given relationships between labels. Other methods for incorporating label structure are summarized in @cite_4 . However, they typically rely on fixed, manually-specified hierarchies, which could contain errors and result in biases that reduce performance.
{ "cite_N": [ "@cite_4", "@cite_20" ], "mid": [ "2148141637", "64813323" ], "abstract": [ "In this survey, we argue that using structured vocabularies is capital to the success of image annotation. We analyze literature on image annotation uses and user needs, and we stress the need for automatic annotation. We briefly expose the difficulties posed to machines for this task and how it relates to controlled vocabularies. We survey contributions in the field showing how structures are introduced. First we present studies that use unstructured vocabulary, focusing on those introducing links between categories or between features. Then we review work using structured vocabularies as an input and analyze how the structure is exploited.", "In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations." ] }