aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1905.08377 | 2946205552 | Usage similarity estimation addresses the semantic proximity of word instances in different contexts. We apply contextualized (ELMo and BERT) word and sentence embeddings to this task, and propose supervised models that leverage these representations for prediction. Our models are further assisted by lexical substitute annotations automatically assigned to word instances by context2vec, a neural model that relies on a bidirectional LSTM. We perform an extensive comparison of existing word and sentence representations on benchmark datasets addressing both graded and binary similarity. The best performing models outperform previous methods in both settings. | Due to its high reliance on context, Usim can be viewed as a semantic textual similarity (STS) @cite_15 task with a focus on a specific word instance. This connection motivated us to apply methods initially proposed for sentence similarity to Usim prediction. More precisely, we build sentence representations using different types of word and sentence embeddings, ranging from the classical word-averaging approach with traditional word embeddings @cite_9 , to more recent contextualized word representations . We explore the contribution of each separate method for Usim prediction, and use the best performing ones as features in supervised models. These are trained on sentence pairs labelled with Usim judgments @cite_0 to predict the similarity of new word instances. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_15"
],
"mid": [
"2147809840",
"2250539671",
"2462305634"
],
"abstract": [
"The vast majority of work on word senses has relied on predefined sense inventories and an annotation schema where each word instance is tagged with the best fitting sense. This paper examines the case for a graded notion of word meaning in two experiments, one which uses WordNet senses in a graded fashion, contrasted with the \"winner takes all\" annotation, and one which asks annotators to judge the similarity of two usages. We find that the graded responses correlate with annotations from previous datasets, but sense assignments are used in a way that weakens the case for clear cut sense boundaries. The responses from both experiments correlate with the overlap of paraphrases from the English lexical substitution task which bodes well for the use of substitutes as a proxy for word sense. This paper also provides two novel datasets which can be used for evaluating computational systems.",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global log-bilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75% on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Communication presented at the 10th International Workshop on Semantic Evaluation (SemEval-2016), held on 16 and 17 June 2016 in San Diego, California."
]
} |
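The row above describes building sentence representations via the classical word-averaging approach and approximating graded usage similarity with cosine similarity. A minimal sketch of that baseline, using toy 3-dimensional vectors in place of real pretrained embeddings (the vocabulary, dimensions, and values here are illustrative, not the paper's implementation):

```python
import numpy as np

def sentence_embedding(tokens, word_vectors, dim=3):
    """Classical word-averaging: mean of the vectors of in-vocabulary tokens."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    """Cosine similarity, the usual graded-similarity score in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d vectors stand in for pretrained embeddings such as GloVe.
word_vectors = {
    "bank": np.array([0.9, 0.1, 0.0]),
    "river": np.array([0.8, 0.2, 0.1]),
    "money": np.array([0.1, 0.9, 0.2]),
}

# Two contexts of the target word "bank"; average each sentence's vectors.
s1 = sentence_embedding(["bank", "river"], word_vectors)
s2 = sentence_embedding(["bank", "money"], word_vectors)
similarity = cosine(s1, s2)  # graded usage similarity of the two contexts
```

Contextualized models such as ELMo or BERT replace the static lookup table with context-dependent vectors, but the averaging-and-cosine scaffolding stays the same.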
1905.08377 | 2946205552 | Usage similarity estimation addresses the semantic proximity of word instances in different contexts. We apply contextualized (ELMo and BERT) word and sentence embeddings to this task, and propose supervised models that leverage these representations for prediction. Our models are further assisted by lexical substitute annotations automatically assigned to word instances by context2vec, a neural model that relies on a bidirectional LSTM. We perform an extensive comparison of existing word and sentence representations on benchmark datasets addressing both graded and binary similarity. The best performing models outperform previous methods in both settings. | Previous attempts at automatic Usim prediction involved obtaining vectors encoding a distribution of topics for every target word in context @cite_3 . In this work, Usim was approximated by the cosine similarity of the resulting topic vectors. We show how contextualized representations, and the supervised model that uses them as features, outperform topic-based methods on the graded Usim task. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2252025371"
],
"abstract": [
"We present a method to estimate word use similarity independent of an external sense inventory. This method utilizes a topicmodelling approach to compute the similarity in usage of a single word across a pair of sentences, and we evaluate our method in terms of its ability to reproduce a humanannotated ranking over sentence pairs. We find that our method outperforms a bag-ofwords baseline, and that for certain words there is very strong correlation between our method and human annotators. We also find that lemma-specific models do not outperform general topic models, despite the fact that results with the general model vary substantially by lemma. We provide a detailed analysis of the result, and identify open issues for future research."
]
} |
1905.07791 | 2951998717 | Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a 'universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance. | Crowdsourcing annotation is now a well-studied problem @cite_24 @cite_21 @cite_18 @cite_5 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise @cite_21 @cite_18 @cite_26 @cite_14 . | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_21",
"@cite_24",
"@cite_5"
],
"mid": [
"2250493512",
"2251311344",
"2740579382",
"2515532269",
"1970381522",
"2251551120"
],
"abstract": [
"In natural language processing (NLP) annotation projects, we use inter-annotator agreement measures and annotation guidelines to ensure consistent annotations. However, annotation guidelines often make linguistically debatable and even somewhat arbitrary decisions, and interannotator agreement is often less than perfect. While annotation projects usually specify how to deal with linguistically debatable phenomena, annotator disagreements typically still stem from these “hard” cases. This indicates that some errors are more debatable than others. In this paper, we use small samples of doublyannotated part-of-speech (POS) data for Twitter to estimate annotation reliability and show how those metrics of likely interannotator agreement can be implemented in the loss functions of POS taggers. We find that these cost-sensitive algorithms perform better across annotation projects and, more surprisingly, even on data annotated according to the same guidelines. Finally, we show that POS tagging models sensitive to inter-annotator agreement perform better on the downstream task of chunking.",
"Annotating linguistic data is often a complex, time consuming and expensive endeavour. Even with strict annotation guidelines, human subjects often deviate in their analyses, each bringing different biases, interpretations of the task and levels of consistency. We present novel techniques for learning from the outputs of multiple annotators while accounting for annotator specific behaviour. These techniques use multi-task Gaussian Processes to learn jointly a series of annotator and metadata specific models, while explicitly representing correlations between models which can be learned directly from data. Our experiments on two machine translation quality estimation datasets show uniform significant accuracy gains from multi-task learning, and consistently outperform strong baselines.",
"",
"In this paper, we propose methods to take into account the disagreement between crowd annotators as well as their skills for weighting instances in learning algorithms. The latter can thus better deal with noise in the annotation and produce higher accuracy. We created two passage reranking datasets: one with crowdsource platform, and the second with an expert who completely revised the crowd annotation. Our experiments show that our weighting approach reduces noise improving passage reranking up to 1.47 and 1.85 on MRR and P@1, respectively.",
"Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense.",
"High agreement is a common objective when annotating data for word senses. However, a number of factors make perfect agreement impossible, e.g. the limitations of sense inventories, the difficulty of the examples or the interpretation preferences of the annotators. Estimating potential agreement is thus a relevant task to supplement the evaluation of sense annotations. In this article we propose two methods to predict agreement on wordannotation instances. We experiment with a continuous representation and a threeway discretization of observed agreement. In spite of the difficulty of the task, we find that different levels of agreement can be identified—in particular, low-agreement examples are easier to identify."
]
} |
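The row above concerns aggregation models that reduce noise in crowd annotations. As a point of reference, a minimal sketch of the two simplest aggregation strategies, plain majority vote and a reliability-weighted vote. This is a hypothetical baseline, far simpler than the cited models (e.g. multi-task Gaussian Processes or agreement-sensitive loss functions):

```python
from collections import Counter

def majority_vote(labels):
    """Simplest aggregation: the most frequent label wins."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, reliabilities):
    """Weight each annotator's label by an estimated reliability score."""
    scores = {}
    for label, w in zip(labels, reliabilities):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three crowd workers label one instance; the third is judged more reliable.
labels = ["pos", "pos", "neg"]
assert majority_vote(labels) == "pos"

reliabilities = [0.3, 0.3, 0.9]
assert weighted_vote(labels, reliabilities) == "neg"  # reliable dissenter wins
```

The cited work goes further by learning the reliability weights jointly with the labels rather than assuming them given, as this sketch does.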
1905.07791 | 2951998717 | Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a 'universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance. | There are also several surveys of crowdsourcing in biomedicine specifically @cite_11 @cite_20 @cite_12 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data @cite_4 @cite_1 @cite_19 . Dumitrache concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. 
Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on @math 5k instances performs reasonably well, but remains far from perfect. By contrast, in some of the prior work where expert and crowd annotations were deemed equivalent, a classifier trained on 300 examples could achieve very high accuracy @cite_1 . | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_19",
"@cite_20",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"",
"2574781439",
"1969605785",
"2757470547",
"2116947789"
],
"abstract": [
"",
"",
"Cognitive computing systems require human labeled data for evaluation and often for training. The standard practice used in gathering this data minimizes disagreement between annotators, and we have found this results in data that fails to account for the ambiguity inherent in language. We have proposed the CrowdTruth method for collecting ground truth through crowdsourcing, which reconsiders the role of people in machine learning based on the observation that disagreement between annotators provides a useful signal for phenomena such as ambiguity in the text. We report on using this method to build an annotated data set for medical relation extraction for the cause and treat relations, and how this data performed in a supervised training experiment. We demonstrate that by modeling ambiguity, labeled data gathered from crowd workers can (1) reach the level of quality of domain experts for this task while reducing the cost, and (2) provide better training data at scale than distant supervision. We further propose and validate new weighted measures for precision, recall, and F-measure, which account for ambiguity in both human and machine performance on this task.",
"The use of crowdsourcing to solve important but complex problems in biomedical and clinical sciences is growing and encompasses a wide variety of approaches. The crowd is diverse and includes online marketplace workers, health information seekers, science enthusiasts and domain experts. In this article, we review and highlight recent studies that use crowdsourcing to advance biomedicine. We classify these studies into two broad categories: (i) mining big data generated from a crowd (e.g. search logs) and (ii) active crowdsourcing via specific technical platforms, e.g. labor markets, wikis, scientific games and community challenges. Through describing each study in detail, we demonstrate the applicability of different methods in a variety of domains in biomedical research, including genomics, biocuration and clinical research. Furthermore, we discuss and highlight the strengths and limitations of different crowdsourcing platforms. Finally, we identify important emerging trends, opportunities and remaining challenges for future crowdsourcing research in biomedicine.",
"Crowdsourcing is “the practice of obtaining participants, services, ideas, or content by soliciting contributions from a large group of people, especially via the Internet.” ( J. Gen. Intern. Med. 29:187, 2014) Although crowdsourcing has been adopted in healthcare research and its potential for analyzing large datasets and obtaining rapid feedback has recently been recognized, no systematic reviews of crowdsourcing in cancer research have been conducted. Therefore, we sought to identify applications of and explore potential uses for crowdsourcing in cancer research. We conducted a systematic review of articles published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsychINFO, and Embase. Data from the 12 identified articles were summarized but not combined statistically. The studies addressed a range of cancers (e.g., breast, skin, gynecologic, colorectal, prostate). Eleven studies collected data on the Internet using web-based platforms; one recruited participants in a shopping mall using paper-and-pen data collection. Four studies used Amazon Mechanical Turk for recruiting and or data collection. Study objectives comprised categorizing biopsy images (n = 6), assessing cancer knowledge (n = 3), refining a decision support system (n = 1), standardizing survivorship care-planning (n = 1), and designing a clinical trial (n = 1). Although one study demonstrated that “the wisdom of the crowd” (NCI Budget Fact Book, 2017) could not replace trained experts, five studies suggest that distributed human intelligence could approximate or support the work of trained experts. Despite limitations, crowdsourcing has the potential to improve the quality and speed of research while reducing costs. Longitudinal studies should confirm and refine these findings.",
"Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume ‘microtasks’ and systems for solving high-difficulty ‘megatasks’. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches. Contact: bgood@scripps.edu"
]
} |
1905.07791 | 2951998717 | Modern NLP systems require high-quality annotated data. In specialized domains, expert annotations may be prohibitively expensive. An alternative is to rely on crowdsourcing to reduce costs at the risk of introducing noise. In this paper we demonstrate that directly modeling instance difficulty can be used to improve model performance, and to route instances to appropriate annotators. Our difficulty prediction model combines two learned representations: a 'universal' encoder trained on out-of-domain data, and a task-specific encoder. Experiments on a complex biomedical information extraction task using expert and lay annotators show that: (i) simply excluding from the training data instances predicted to be difficult yields a small boost in performance; (ii) using difficulty scores to weight instances during training provides further, consistent gains; (iii) assigning instances predicted to be difficult to domain experts is an effective strategy for task routing. Our experiments confirm the expectation that for specialized tasks expert annotations are higher quality than crowd labels, and hence preferable to obtain if practical. Moreover, augmenting small amounts of expert data with a larger set of lay annotations leads to further improvements in model performance. | More relevant to this paper, prior work has investigated methods for 'task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise @cite_9 @cite_0 @cite_17 @cite_23 . The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed @cite_15 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing. | {
"cite_N": [
"@cite_9",
"@cite_0",
"@cite_23",
"@cite_15",
"@cite_17"
],
"mid": [
"2181558882",
"2067760738",
"2404374285",
"2125943921",
"2174985112"
],
"abstract": [
"Obtaining labels can be expensive or time-consuming, but unlabeled data is often abundant and easier to obtain. Most learning tasks can be made more efficient, in terms of labeling cost, by intelligently choosing specific unlabeled instances to be labeled by an oracle. The general problem of optimally choosing these instances is known as active learning. As it is usually set in the context of supervised learning, active learning relies on a single oracle playing the role of a teacher. We focus on the multiple annotator scenario where an oracle, who knows the ground truth, no longer exists; instead, multiple labelers, with varying expertise, are available for querying. This paradigm posits new challenges to the active learning scenario. We can now ask which data sample should be labeled next and which annotator should be queried to benefit our learning model the most. In this paper, we employ a probabilistic model for learning from multiple annotators that can also learn the annotator expertise even when their expertise may not be consistently accurate across the task domain. We then focus on providing a criterion and formulation that allows us to select both a sample and the annotator s to query the labels from.",
"Proactive learning is a generalization of active learning designed to relax unrealistic assumptions and thereby reach practical applications. Active learning seeks to select the most informative unlabeled instances and ask an omniscient oracle for their labels, so as to retrain the learning algorithm maximizing accuracy. However, the oracle is assumed to be infallible (never wrong), indefatigable (always answers), individual (only one oracle), and insensitive to costs (always free or always charges the same). Proactive learning relaxes all four of these assumptions, relying on a decision-theoretic approach to jointly select the optimal oracle and instance, by casting the problem as a utility optimization problem subject to a budget constraint. Results on multi-oracle optimization over several data sets demonstrate the superiority of our approach over the single-imperfect-oracle baselines in most cases.",
"The active learning (AL) framework is an increasingly popular strategy for reducing the amount of human labeling effort required to induce a predictive model. Most work in AL has assumed that a single, infallible oracle provides labels requested by the learner at a fixed cost. However, real-world applications suitable for AL often include multiple domain experts who provide labels of varying cost and quality. We explore this multiple expert active learning (MEAL) scenario and develop a novel algorithm for instance allocation that exploits the meta-cognitive abilities of novice (cheap) experts in order to make the best use of the experienced (expensive) annotators. We demonstrate that this strategy outperforms strong baseline approaches to MEAL on both a sentiment analysis dataset and two datasets from our motivating application of biomedical citation screening. Furthermore, we provide evidence that novice labelers are often aware of which instances they are likely to mislabel.",
"This paper addresses the repeated acquisition of labels for data items when the labeling is imperfect. We examine the improvement (or lack thereof) in data quality via repeated labeling, and focus especially on the improvement of training labels for supervised induction. With the outsourcing of small tasks becoming easier, for example via Rent-A-Coder or Amazon's Mechanical Turk, it often is possible to obtain less-than-expert labeling at low cost. With low-cost labeling, preparing the unlabeled part of the data can become considerably more expensive than labeling. We present repeated-labeling strategies of increasing complexity, and show several main results. (i) Repeated-labeling can improve label quality and model quality, but not always. (ii) When labels are noisy, repeated labeling can be preferable to single labeling even in the traditional setting where labels are not particularly cheap. (iii) As soon as the cost of processing the unlabeled data is not free, even the simple strategy of labeling everything multiple times can give considerable advantage. (iv) Repeatedly labeling a carefully chosen set of points is generally preferable, and we present a robust technique that combines different notions of uncertainty to select data points for which quality should be improved. The bottom line: the results show clearly that when labeling is not perfect, selective acquisition of multiple labels is a strategy that data miners should have in their repertoire; for certain label-quality cost regimes, the benefit is substantial.",
"We consider a finite-pool data categorization scenario which requires exhaustively classifying a given set of examples with a limited budget. We adopt a hybrid human-machine approach which blends automatic machine learning with human labeling across a tiered workforce composed of domain experts and crowd workers. To effectively achieve high-accuracy labels over the instances in the pool at minimal cost, we develop a novel approach based on decision-theoretic active learning. On the important task of biomedical citation screening for systematic reviews, results on real data show that our method achieves consistent improvements over baseline strategies. To foster further research by others, we have made our data available online."
]
} |
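The abstract repeated in the rows above reports that weighting instances by predicted difficulty during training provides consistent gains. A minimal, hypothetical sketch of one such weighting scheme for a binary cross-entropy loss; the choice weight = 1 - difficulty is illustrative only and not necessarily the paper's exact scheme:

```python
import math

def weighted_log_loss(probs, labels, difficulty):
    """Mean binary cross-entropy with each instance down-weighted by its
    predicted difficulty (weight = 1 - difficulty, an illustrative choice)."""
    total = 0.0
    for p, y, d in zip(probs, labels, difficulty):
        p_true = p if y == 1 else 1.0 - p          # probability of the true label
        total += (1.0 - d) * -math.log(max(p_true, 1e-12))
    return total / len(labels)

probs = [0.9, 0.6, 0.2]        # model's P(y = 1) for three instances
labels = [1, 1, 0]
difficulty = [0.1, 0.8, 0.5]   # predicted difficulty scores in [0, 1]

loss = weighted_log_loss(probs, labels, difficulty)
```

The same difficulty scores could drive task routing: instances above a difficulty threshold go to expert annotators, the rest to crowd workers.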
1905.07705 | 2965927083 | We address the problem of verifying k-safety properties: properties that refer to k interacting executions of a program. A prominent way to verify k-safety properties is by self composition. In this approach, the problem of checking k-safety over the original program is reduced to checking an “ordinary” safety property over a program that executes k copies of the original program in some order. The way in which the copies are composed determines how complicated it is to verify the composed program. We view this composition as provided by a semantic self composition function that maps each state of the composed program to the copies that make a move. Since the “quality” of a self composition function is measured by the ability to verify the safety of the composed program, we formulate the problem of inferring a self composition function together with the inductive invariant needed to verify safety of the composed program, where both are restricted to a given language. We develop a property-directed inference algorithm that, given a set of predicates, infers composition-invariant pairs expressed by Boolean combinations of the given predicates, or determines that no such pair exists. We implemented our algorithm and demonstrate that it is able to find self compositions that are beyond reach of existing tools. | This paper addresses the problem of verifying k-safety properties (also called hyperproperties @cite_4 ) by means of self composition. Other approaches tackle the problem without self-composition, and often focus on more specific properties, most notably the @math -safety noninterference property (e.g. @cite_17 @cite_21 ). Below we focus on works that use self-composition. Previous work such as @cite_22 @cite_32 @cite_9 @cite_24 @cite_20 @cite_31 considered self composition (also called product programs) where the composition function is constant and set a priori, using syntax-based hints. 
While useful in general, such self compositions may sometimes result in programs that are too complex to verify. This is in contrast to our approach, where the composition function evolves during verification and is adapted to the capabilities of the model checker. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_31",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"2095840868",
"5424990",
"2884937557",
"57185801",
"",
"2797766813",
"2139799388",
"2626217303"
],
"abstract": [
"",
"Non-interference is a high-level security property that guarantees the absence of illicit information leakages through executing programs. More precisely, non-interference for a program assumes a separation between secret inputs and public inputs on the one hand, and secret outputs and public outputs on the other hand, and requires that the value of public outputs does not depend on the value of secret inputs. A common means to enforce non-interference is to use an information flow type system. However, such type systems are inherently imprecise, and reject many secure programs, even for simple programming languages. The purpose of this paper is to investigate logical formulations of noninterference that allow a more precise analysis of programs. It appears that such formulations are often sound and complete, and also amenable to interactive or automated verification techniques, such as theorem-proving or model-checking. We illustrate the applicability of our method in several scenarios, including a simple imperative language, a non-deterministic language, and finally a language with shared mutable data structures.",
"Relational Hoare Logic is a generalization of Hoare logic that allows reasoning about executions of two programs, or two executions of the same program. It can be used to verify that a program is robust or (information flow) secure, and that two programs are observationally equivalent. Product programs provide a means to reduce verification of relational judgments to the verification of a (standard) Hoare judgment, and open the possibility of applying standard verification tools to relational properties. However, previous notions of product programs are defined for deterministic and structured programs. Moreover, these notions are symmetric, and cannot be applied to properties such as refinement, which are asymmetric and involve universal quantification on the traces of the first program and existential quantification on the traces of the second program.",
"The secure information flow problem, which checks whether low-security outputs of a program are influenced by high-security inputs, has many applications in verifying security properties in programs. In this paper we present lazy self-composition, an approach for verifying secure information flow. It is based on self-composition, where two copies of a program are created on which a safety property is checked. However, rather than an eager duplication of the given program, it uses duplication lazily to reduce the cost of verification. This lazy self-composition is guided by an interplay between symbolic taint analysis on an abstract (single copy) model and safety verification on a refined (two copy) model. We propose two verification methods based on lazy self-composition. The first is a CEGAR-style procedure, where the abstract model associated with taint analysis is refined, on demand, by using a model generated by lazy self-composition. The second is a method based on bounded model checking, where taint queries are generated dynamically during program unrolling to guide lazy self-composition and to conclude an adequate bound for correctness. We have implemented these methods on top of the SeaHorn verification platform and our evaluations show the effectiveness of lazy self-composition.",
"Relational program logics are formalisms for specifying and verifying properties about two programs or two runs of the same program. These properties range from correctness of compiler optimizations or equivalence between two implementations of an abstract data type, to properties like non-interference or determinism. Yet the current technology for relational verification remains underdeveloped. We provide a general notion of product program that supports a direct reduction of relational verification to standard verification. We illustrate the benefits of our method with selected examples, including non-interference, standard loop optimizations, and a state-of-the-art optimization for incremental computation. All examples have been verified using the Why tool.",
"",
"Many interesting program properties like determinism or information flow security are hyperproperties, that is, they relate multiple executions of the same program. Hyperproperties can be verified using relational logics, but these logics require dedicated tool support and are difficult to automate. Alternatively, constructions such as self-composition represent multiple executions of a program by one product program, thereby reducing hyperproperties of the original program to trace properties of the product. However, existing constructions do not fully support procedure specifications, for instance, to derive the determinism of a caller from the determinism of a callee, making verification non-modular.",
"The termination insensitive secure information flow problem can be reduced to solving a safety problem via a simple program transformation. Barthe, D'Argenio, and Rezk coined the term “self-composition” to describe this reduction. This paper generalizes the self-compositional approach with a form of information downgrading recently proposed by Li and Zdancewic. We also identify a problem with applying the self-compositional approach in practice, and we present a solution to this problem that makes use of more traditional type-based approaches. The result is a framework that combines the best of both worlds, i.e., better than traditional type-based approaches and better than the self-compositional approach.",
"We present a novel approach to proving the absence of timing channels. The idea is to partition the program's execution traces in such a way that each partition component is checked for timing attack resilience by a time complexity analysis and that per-component resilience implies the resilience of the whole program. We construct a partition by splitting the program traces at secret-independent branches. This ensures that any pair of traces with the same public input has a component containing both traces. Crucially, the per-component checks can be normal safety properties expressed in terms of a single execution. Our approach is thus in contrast to prior approaches, such as self-composition, that aim to reason about multiple (k ≥ 2) executions at once. We formalize the above as an approach called quotient partitioning, generalized to any k-safety property, and prove it to be sound. A key feature of our approach is a demand-driven partitioning strategy that uses a regex-like notion called trails to identify sets of execution traces, particularly those influenced by tainted (or secret) data. We have applied our technique in a prototype implementation tool called Blazer, based on WALA, PPL, and the brics automaton library. We have proved timing-channel freedom of (or synthesized an attack specification for) 24 programs written in Java bytecode, including 6 classic examples from the literature and 6 examples extracted from the DARPA STAC challenge problems."
]
} |
1905.07705 | 2965927083 | We address the problem of verifying k-safety properties: properties that refer to k interacting executions of a program. A prominent way to verify k-safety properties is by self composition. In this approach, the problem of checking k-safety over the original program is reduced to checking an “ordinary” safety property over a program that executes k copies of the original program in some order. The way in which the copies are composed determines how complicated it is to verify the composed program. We view this composition as provided by a semantic self composition function that maps each state of the composed program to the copies that make a move. Since the “quality” of a self composition function is measured by the ability to verify the safety of the composed program, we formulate the problem of inferring a self composition function together with the inductive invariant needed to verify safety of the composed program, where both are restricted to a given language. We develop a property-directed inference algorithm that, given a set of predicates, infers composition-invariant pairs expressed by Boolean combinations of the given predicates, or determines that no such pair exists. We implemented our algorithm and demonstrate that it is able to find self compositions that are beyond reach of existing tools. | The work most closely related to ours is @cite_26 , which introduces Cartesian Hoare Logic (CHL) for verification of @math -safety properties, and designs a verification framework for this logic. This work is further improved in @cite_16 . These works search for a proof in CHL, and in doing so, implicitly modify the composition. Our work infers the composition explicitly and can use off-the-shelf model checking tools. More importantly, when loops are involved, both @cite_26 and @cite_16 use lock-step composition and align loops syntactically. 
Our algorithm, in contrast, does not rely on syntactic similarities, and can handle loops that cannot be aligned trivially. | {
"cite_N": [
"@cite_26",
"@cite_16"
],
"mid": [
"2418260908",
"2884840976"
],
"abstract": [
"Unlike safety properties which require the absence of a “bad” program trace, k-safety properties stipulate the absence of a “bad” interaction between k traces. Examples of k-safety properties include transitivity, associativity, anti-symmetry, and monotonicity. This paper presents a sound and relatively complete calculus, called Cartesian Hoare Logic (CHL), for verifying k-safety properties. We also present an automated verification algorithm based on CHL and implement it in a tool called DESCARTES. We use DESCARTES to analyze user-defined relational operators in Java and demonstrate that DESCARTES is effective at verifying (or finding violations of) multiple k-safety properties.",
"Relational safety specifications describe multiple runs of the same program or relate the behaviors of multiple programs. Approaches to automatic relational verification often compose the programs and analyze the result for safety, but a naively composed program can lead to difficult verification problems. We propose to exploit relational specifications for simplifying the generated verification subtasks. First, we maximize opportunities for synchronizing code fragments. Second, we compute symmetries in the specifications to reveal and avoid redundant subtasks. We have implemented these enhancements in a prototype for verifying k-safety properties on Java programs. Our evaluation confirms that our approach leads to a consistent performance speedup on a range of benchmarks."
]
} |
1905.07705 | 2965927083 | We address the problem of verifying k-safety properties: properties that refer to k interacting executions of a program. A prominent way to verify k-safety properties is by self composition. In this approach, the problem of checking k-safety over the original program is reduced to checking an “ordinary” safety property over a program that executes k copies of the original program in some order. The way in which the copies are composed determines how complicated it is to verify the composed program. We view this composition as provided by a semantic self composition function that maps each state of the composed program to the copies that make a move. Since the “quality” of a self composition function is measured by the ability to verify the safety of the composed program, we formulate the problem of inferring a self composition function together with the inductive invariant needed to verify safety of the composed program, where both are restricted to a given language. We develop a property-directed inference algorithm that, given a set of predicates, infers composition-invariant pairs expressed by Boolean combinations of the given predicates, or determines that no such pair exists. We implemented our algorithm and demonstrate that it is able to find self compositions that are beyond reach of existing tools. | Equivalence checking is another closely related research field, where a composition of several programs is considered. As an example, equivalence checking is applied to verify the correctness of compiler optimizations @cite_13 @cite_15 @cite_10 @cite_11 . In @cite_15 , the composition is determined by a brute-force search for possible synchronization points. While this brute-force search resembles our approach for finding the correct composition, it is not guided by the verification process. 
The works in @cite_10 @cite_11 identify possible synchronization points syntactically, and try to match them during the construction of a simulation relation between programs. | {
"cite_N": [
"@cite_10",
"@cite_15",
"@cite_13",
"@cite_11"
],
"mid": [
"2771024256",
"",
"1587844310",
"2810373436"
],
"abstract": [
"Equivalence checking is an important building block for program synthesis and verification. For a synthesis tool to compete with modern compilers, its equivalence checker should be able to verify the transformations produced by these compilers. We find that the transformations produced by compilers are much varied and the presence of undefined behaviour allows them to produce even more aggressive optimizations. Previous work on equivalence checking has been done in the context of translation validation, where either a pass-by-pass based approach was employed or a set of handpicked optimizations were proven. These settings are not suitable for a synthesis tool where a black-box approach is required.",
"",
"The paper presents a deductive framework for proving program equivalence and its application to automatic verification of transformations performed by optimizing compilers. To leverage existing program analysis techniques, we reduce the equivalence checking problem to analysis of one system --- a cross-product of the two input programs. We show how the approach can be effectively used for checking equivalence of consonant (i.e., structurally similar) programs. Finally, we report on the prototype tool that applies the developed methodology to verify that a compiler optimization run preserves the program semantics. Unlike existing frameworks, CoVaC accommodates absence of compiler annotations and handles most of the classical intraprocedural optimizations such as constant folding, reassociation, common subexpression elimination, code motion, dead code elimination, branch optimizations, and others.",
"Program equivalence checking is a fundamental problem in computer science with applications to translation validation and automatic synthesis of compiler optimizations. Contemporary equivalence checkers employ SMT solvers to discharge proof obligations generated by their equivalence checking algorithm. Equivalence checkers also involve algorithms to infer invariants that relate the intermediate states of the two programs being compared for equivalence. We present a new algorithm, called invariant-sketching, that allows the inference of the required invariants through the generation of counter-examples using SMT solvers. We also present an algorithm, called query-decomposition, that allows a more capable use of SMT solvers for application to equivalence checking. Both invariant-sketching and query-decomposition help us prove equivalence across program transformations that could not be handled by previous equivalence checking algorithms."
]
} |
1905.07705 | 2965927083 | We address the problem of verifying k-safety properties: properties that refer to k interacting executions of a program. A prominent way to verify k-safety properties is by self composition. In this approach, the problem of checking k-safety over the original program is reduced to checking an “ordinary” safety property over a program that executes k copies of the original program in some order. The way in which the copies are composed determines how complicated it is to verify the composed program. We view this composition as provided by a semantic self composition function that maps each state of the composed program to the copies that make a move. Since the “quality” of a self composition function is measured by the ability to verify the safety of the composed program, we formulate the problem of inferring a self composition function together with the inductive invariant needed to verify safety of the composed program, where both are restricted to a given language. We develop a property-directed inference algorithm that, given a set of predicates, infers composition-invariant pairs expressed by Boolean combinations of the given predicates, or determines that no such pair exists. We implemented our algorithm and demonstrate that it is able to find self compositions that are beyond reach of existing tools. | Regression verification also requires the ability to show equivalence between different versions of a program @cite_24 @cite_33 @cite_3 . The problem of synchronizing unbalanced loops appears in @cite_3 in the form of unbalanced recursive function calls. To allow synchronization in such cases, the user can specify different unrolling parameters for the different copies. In contrast, our approach relies only on user supplied predicates that are needed to establish correctness, while synchronization is handled automatically. | {
"cite_N": [
"@cite_24",
"@cite_33",
"@cite_3"
],
"mid": [
"",
"2067871120",
"2554831868"
],
"abstract": [
"",
"Regression verification is an approach complementing regression testing with formal verification. The goal is to formally prove that two versions of a program behave either equally or differently in a precisely specified way. In this paper, we present a novel automatic approach for regression verification that reduces the equivalence of two related imperative integer programs to Horn constraints over uninterpreted predicates. Subsequently, state-of-the-art SMT solvers are used to solve the constraints. We have implemented the approach, and our experiments show non-trivial integer programs that can now be proved equivalent without further user input.",
"We address the problem of proving the equivalence of two recursive functions that have different base-cases and or are not in lock-step. None of the existing software equivalence checkers (like reve, rvt, Symdiff), or general unbounded software model-checkers (like Seahorn, HSFC, Automizer) can prove such equivalences. We show a proof rule for the case of different base cases, based on separating the proof into two parts—inputs which result in the base case in at least one of the two compared functions, and all the rest. We also show how unbalanced unrolling of the functions can solve the case in which the functions are not in lock-step. In itself this type of unrolling may again introduce the problem of the different base cases, and we show a new proof rule for solving it. We implemented these rules in our regression-verification tool rvt. We conclude by comparing our approach to that of ’s counterexample-based refinement, which was implemented lately in their equivalence checker reve."
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. 
| The research on image captioning can be categorized into the following three classes: template-based approaches @cite_0 @cite_8 @cite_39 , retrieval-based approaches @cite_21 @cite_32 @cite_47 , and generation-based approaches @cite_29 @cite_19 @cite_42 @cite_33 @cite_34 . | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_47",
"@cite_34"
],
"mid": [
"2745461083",
"8316075",
"",
"2953276893",
"2552161745",
"1897761818",
"1858383477",
"1969616664",
"2575842049",
"2952782394",
"2890531016"
],
"abstract": [
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.",
"This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. Particularly, the learning of attributes is strengthened by integrating inter-attribute correlations into Multiple Instance Learning (MIL). To incorporate attributes into captioning, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on COCO image captioning dataset and our framework shows clear improvements when compared to state-of-the-art deep models. More remarkably, we obtain METEOR/CIDEr-D of 25.5%/100.2% on testing data of widely used and publicly available splits in [10] when extracting image representations by GoogleNet and achieve superior performance on COCO captioning Leaderboard.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.",
"We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone.",
"We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as “the” and “of”. Other words that may seem visual can often be predicted reliably just from the language model, e.g., “sign” after “behind a red stop” or “phone” following “talking on a cell”. In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"It is always well believed that modeling relationships between objects would be helpful for representing and eventually describing an image. Nevertheless, there has not been evidence in support of the idea on image description generation. In this paper, we introduce a new design to explore the connections between objects for image captioning under the umbrella of attention-based encoder-decoder framework. Specifically, we present Graph Convolutional Networks plus Long Short-Term Memory (dubbed as GCN-LSTM) architecture that novelly integrates both semantic and spatial object relationships into image encoder. Technically, we build graphs over the detected objects in an image based on their spatial and semantic connections. The representations of each region proposed on objects are then refined by leveraging graph structure through GCN. With the learnt region-level features, our GCN-LSTM capitalizes on LSTM-based captioning framework with attention mechanism for sentence generation. Extensive experiments are conducted on COCO image captioning dataset, and superior results are reported when comparing to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1 to 128.7 on COCO testing set."
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. 
| The template-based approaches address the task using a two-stage strategy: 1) align the sentence fragments (e.g., subject, object, and verb) with the predicted labels from the image; and 2) generate the sentence from the segments using pre-defined language templates. Kulkarni et al. use the conditional random field (CRF) model to predict labels based on the detected objects, attributes, and prepositions, and then generate caption sentences with a template by filling in the blanks with the most likely labels @cite_0 . Yang et al. employ the HMM model to select the best objects, verbs, and prepositions with respect to the log-likelihood for segment generation @cite_39 . Intuitively, the captions that are generated by the template-based approaches highly depend on the quality of the templates and usually follow fixed syntactical structures. However, the diversity of the generated captions is severely restricted. | {
"cite_N": [
"@cite_0",
"@cite_39"
],
"mid": [
"1969616664",
"1858383477"
],
"abstract": [
"We present a system to automatically generate natural language descriptions from images. This system consists of two parts. The first part, content planning, smooths the output of computer vision-based detection and recognition algorithms with statistics mined from large pools of visually descriptive text to determine the best content words to use to describe an image. The second step, surface realization, chooses words to construct natural language sentences based on the predicted content and general statistics from natural language. We present multiple approaches for the surface realization step and evaluate each using automatic measures of similarity to human generated reference descriptions. We also collect forced choice human evaluations between descriptions from the proposed generation system and descriptions from competing approaches. The proposed system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"We propose a sentence generation strategy that describes images by predicting the most likely nouns, verbs, scenes and prepositions that make up the core sentence structure. The input are initial noisy estimates of the objects and scenes detected in the image using state of the art trained detectors. As predicting actions from still images directly is unreliable, we use a language model trained from the English Gigaword corpus to obtain their estimates; together with probabilities of co-located nouns, scenes and prepositions. We use these estimates as parameters on a HMM that models the sentence generation process, with hidden nodes as sentence components and image detections as the emissions. Experimental results show that our strategy of combining vision and language produces readable and descriptive sentences compared to naive strategies that use vision alone."
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper.
| To ease the diversity problem, retrieval-based approaches are proposed to retrieve the most relevant captions from a large-scale caption database with respect to their cross-modal similarities to the given image. Karpathy et al. propose a deep fragment embedding approach to match the image-caption pairs based on the alignment of visual segments (the detected objects) and caption segments (subjects, objects, and verbs) @cite_21 . In the testing stage, the cross-modal matching over the whole caption database (usually the captions from the training set) is performed to generate the caption for one image. Other methods such as @cite_32 @cite_47 use different metrics or loss functions to learn the cross-modal matching model. However, the retrieval efficiency becomes a bottleneck for these approaches when the caption database is large, and restricting the size of the database may reduce the caption diversity. Moreover, retrieval-based approaches cannot generate novel captions beyond the database, which means the diversity problem has not been completely resolved. | {
"cite_N": [
"@cite_47",
"@cite_21",
"@cite_32"
],
"mid": [
"2952782394",
"2953276893",
"1897761818"
],
"abstract": [
"Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-of-the-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche."
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. | Different from template-based and retrieval-based models, generation-based models aim to learn a language model that can generate novel captions with more flexible syntactical structures.
To this end, recent works have explored this direction by introducing neural networks for image captioning. Vinyals et al. propose an encoder-decoder architecture utilizing the GoogLeNet @cite_5 and LSTM networks @cite_11 as its backbones. Similar architectures are also proposed by Donahue et al. @cite_25 and Karpathy et al. @cite_22 . Due to their flexibility and excellent performance, generation-based models have become the mainstream for image captioning. | {
"cite_N": [
"@cite_5",
"@cite_22",
"@cite_25",
"@cite_11"
],
"mid": [
"2097117768",
"2951805548",
"2951183276",
""
],
"abstract": [
"We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\" in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and/or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and/or optimized.",
""
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. | Within the encoder-decoder framework, one of the most important improvements for generation-based models is the attention mechanism.
Xu et al. introduce soft and hard attention models to mimic the human eye focusing on different regions in an image when generating different caption words. The attention model is a module that can be seamlessly inserted into previous approaches to remarkably improve the caption quality. The attention model is further improved in @cite_33 @cite_1 @cite_19 @cite_29 . Anderson et al. introduce a bottom-up module that uses a pre-trained object detector to extract region-based image features, and a top-down module that utilizes soft attention to dynamically attend to these objects @cite_33 . Chen et al. propose a spatial- and channel-wise attention model to attend to visual features @cite_1 . Lu et al. present an adaptive attention encoder-decoder model for automatically deciding when to rely on visual or language signals @cite_19 . Rennie et al. design an FC model and an Att2in model that achieve good performance @cite_29 . | {
"cite_N": [
"@cite_19",
"@cite_29",
"@cite_1",
"@cite_33"
],
"mid": [
"2575842049",
"",
"2550553598",
"2745461083"
],
"abstract": [
"Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as \"the\" and \"of\". Other words that may seem visual can often be predicted reliably just from the language model, e.g., \"sign\" after \"behind a red stop\" or \"phone\" following \"talking on a cell\". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.",
"",
"Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.",
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge."
]
} |
1905.07841 | 2946574595 | Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder framework. The framework consists of a convolutional neural network (CNN)-based image encoder that extracts region-based visual features from the input image, and a recurrent neural network (RNN)-based caption decoder that generates the output caption words based on the visual features with the attention mechanism. Despite the success of existing studies, current methods only model the co-attention that characterizes the inter-modal interactions while neglecting the self-attention that characterizes the intra-modal interactions. Inspired by the success of the Transformer model in machine translation, here we extend it to a Multimodal Transformer (MT) model for image captioning. Compared to existing image captioning approaches, the MT model simultaneously captures intra- and inter-modal interactions in a unified attention block. Due to the in-depth modular composition of such attention blocks, the MT model can perform complex multimodal reasoning and output accurate captions. Moreover, to further improve the image captioning performance, multi-view visual features are seamlessly introduced into the MT model. We quantitatively and qualitatively evaluate our approach using the benchmark MSCOCO image captioning dataset and conduct extensive ablation studies to investigate the reasons behind its effectiveness. The experimental results show that our method significantly outperforms the previous state-of-the-art methods. With an ensemble of seven models, our solution ranks the 1st place on the real-time leaderboard of the MSCOCO image captioning challenge at the time of the writing of this paper. | Beyond the image captioning task, attention mechanisms are widely used in other multi-modal learning tasks such as visual question answering (VQA).
Lu et al. propose a co-attention learning framework to alternately learn the image attention and question attention @cite_40 . Yu et al. reduce the co-attention method into two steps: self-attention for a question embedding and question-conditioned attention for a visual embedding @cite_15 . Nam et al. propose a multi-stage co-attention learning model to refine the attentions based on the memory of previous attentions @cite_49 . However, these co-attention models learn separate attention distributions for each modality (image or question) and neglect the dense interaction between each question word and each image region, which becomes a bottleneck for understanding the fine-grained relationships of multimodal features. To address this issue, dense co-attention models have been proposed, which establish the complete interaction between each question word and each image region @cite_13 @cite_31 . Compared to the previous co-attention models with coarse interactions, the dense co-attention models deliver significantly better VQA performance. | {
"cite_N": [
"@cite_15",
"@cite_40",
"@cite_49",
"@cite_31",
"@cite_13"
],
"mid": [
"2962980263",
"2963668159",
"2951690276",
"2963521239",
"2963176022"
],
"abstract": [
"Visual question answering (VQA) is challenging, because it requires a simultaneous understanding of both visual content of images and textual content of questions. To support the VQA task, we need to find good solutions for the following three issues: 1) fine-grained feature representations for both the image and the question; 2) multimodal feature fusion that is able to capture the complex interactions between multimodal features; and 3) automatic answer prediction that is able to consider the complex correlations between multiple diverse answers for the same question. For fine-grained image and question representations, a “coattention” mechanism is developed using a deep neural network (DNN) architecture to jointly learn the attentions for both the image and the question, which can allow us to reduce the irrelevant features effectively and obtain more discriminative features for image and question representations. For multimodal feature fusion, a generalized multimodal factorized high-order pooling approach (MFH) is developed to achieve more effective fusion of multimodal features by exploiting their correlations sufficiently, which can further result in superior VQA performance as compared with the state-of-the-art approaches. For answer prediction, the Kullback–Leibler divergence is used as the loss function to achieve precise characterization of the complex correlations between multiple diverse answers with the same or similar meaning, which can allow us to achieve faster convergence rate and obtain slightly better accuracy on answer prediction. A DNN architecture is designed to integrate all these aforementioned modules into a unified model for achieving superior VQA performance. With an ensemble of our MFH models, we achieve the state-of-the-art performance on the large-scale VQA data sets and win the runner-up in VQA Challenge 2017.",
"A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3 to 60.5 , and from 61.6 to 63.3 on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1 for VQA and 65.4 for COCO-QA.",
"We propose Dual Attention Networks (DANs) which jointly leverage visual and textual attention mechanisms to capture fine-grained interplay between vision and language. DANs attend to specific regions in images and words in text through multiple steps and gather essential information from both modalities. Based on this framework, we introduce two types of DANs for multimodal reasoning and matching, respectively. The reasoning model allows visual and textual attentions to steer each other during collaborative inference, which is useful for tasks such as Visual Question Answering (VQA). In addition, the matching model exploits the two attention mechanisms to estimate the similarity between images and sentences by focusing on their shared semantics. Our extensive experiments validate the effectiveness of DANs in combining vision and language, achieving the state-of-the-art performance on public benchmarks for VQA and image-text matching.",
"Attention networks in multimodal learning provide an efficient way to utilize given visual information selectively. However, the computational cost to learn attention distributions for every pair of multimodal input channels is prohibitively expensive. To solve this problem, co-attention builds two separate attention distributions for each modality neglecting the interaction between multimodal inputs. In this paper, we propose bilinear attention networks (BAN) that find bilinear attention distributions to utilize given vision-language information seamlessly. BAN considers bilinear interactions among two groups of input channels, while low-rank bilinear pooling extracts the joint representations for each pair of channels. Furthermore, we propose a variant of multimodal residual networks to exploit eight-attention maps of the BAN efficiently. We quantitatively and qualitatively evaluate our model on visual question answering (VQA 2.0) and Flickr30k Entities datasets, showing that BAN significantly outperforms previous methods and achieves new state-of-the-arts on both datasets.",
"A key solution to visual question answering (VQA) exists in how to fuse visual and language features extracted from an input image and question. We show that an attention mechanism that enables dense, bi-directional interactions between the two modalities contributes to boost accuracy of prediction of answers. Specifically, we present a simple architecture that is fully symmetric between visual and language representations, in which each question word attends on image regions and each image region attends on question words. It can be stacked to form a hierarchy for multi-step interactions between an image-question pair. We show through experiments that the proposed architecture achieves a new state-of-the-art on VQA and VQA 2.0 despite its small size. We also present qualitative evaluation, demonstrating how the proposed attention mechanism can generate reasonable attention maps on images and questions, which leads to the correct answer prediction."
]
} |
1905.07659 | 2945209551 | We extend the feature selection methodology to dependent data and propose a novel time series predictor selection scheme that accommodates statistical dependence in a more typical i.i.d sub-sampling based framework. Furthermore, the machinery of mixing stationary processes allows us to quantify the improvements of our approach over any base predictor selection method (such as lasso) even in a finite sample setting. Using the lasso as a base procedure we demonstrate the applicability of our methods to simulated and several real time series datasets. | Most existing predictor selection methods in time series are largely based on heuristics @cite_22 , or simply use plain lasso @cite_8 @cite_14 on the entire data, and it is non-trivial to provide guarantees for such methods. For the specific case of vector autoregression (VAR) models, a grouped penalty based approach has been proposed that provably identifies relevant lags and predictors in the asymptotic @math (number of time series) and @math (number of lags) regime. Our method is of a fundamentally distinct flavor in that we provide quantifiable improvement over any base predictor selection method, including the method in @cite_7 , even in the finite data (sample or dimension) setting. Moreover, our approach also works for the more general VAR-X (VAR with exogenous variables) model and in general is independent of the base predictor selection mechanism. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8"
],
"mid": [
"2266590724",
"2273181118",
"2155350940",
"2130094219"
],
"abstract": [
"(2014) have recently demonstrated how to efficiently combine information from a set of popular technical indicators together with the standard Goyal and Welch (2008) predictor variables widely used in the equity premium forecasting literature to improve out-of-sample forecasts of the equity premium using a small number of principal components. We show that forecasts of the equity premium can be further improved by, first, incorporating broader macroeconomic data into the information set, second, improving the selection of the most relevant factors and combining the most relevant factors by means of a forecast combination regression, and third, imposing theoretically motivated positivity constraints on the forecasts of the equity premium. We find that in particular our proposed forecast combination approach, which combines forecasts of the most relevant (2014) and macroeconomic factors and further imposes positivity constraints on the equity premium forecasts, generates statistically significant and economically sizeable improvements over the best performing model of (2014).",
"This chapter reviews methods for selecting empirically relevant predictors from a set of N potentially relevant ones for the purpose of forecasting a scalar time series. First, criterion-based procedures in the conventional case when N is small relative to the sample size, T, are reviewed. Then the large N case is covered. Regularization and dimension reduction methods are then discussed. Irrespective of the model size, there is an unavoidable tension between prediction accuracy and consistent model determination. Simulations are used to compare selected methods from the perspective of relative risk in one period ahead forecasts.",
"One popular approach for nonstructural economic and financial forecasting is to include a large number of economic and financial variables, which has been shown to lead to significant improvements for forecasting, for example, by the dynamic factor models. A challenging issue is to determine which variables and (their) lags are relevant, especially when there is a mixture of serial correlation (temporal dynamics), high dimensional (spatial) dependence structure and moderate sample size (relative to dimensionality and lags). To this end, an integrated solution that addresses these three challenges simultaneously is appealing. We study the large vector auto regressions here with three types of estimates. We treat each variable's own lags differently from other variables' lags, distinguish various lags over time, and are able to select the variables and lags simultaneously. We first show the consequences of using Lasso type estimate directly for time series without considering the temporal dependence. In contrast, our proposed method can still produce an estimate as efficient as an oracle under such scenarios. The tuning parameters are chosen via a data driven \"rolling scheme\" method to optimize the forecasting performance. A macroeconomic and financial forecasting problem is considered to illustrate its superiority over existing estimators.",
"Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search–based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people’s online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions."
]
} |
1905.07825 | 2946562100 | We solve the classic albedo and Milne problems of plane-parallel illumination of an isotropically-scattering half-space when generalized to a Euclidean domain @math for arbitrary @math . A continuous family of pseudo-problems and related @math functions arises and includes the classical 3D solutions, as well as 2D "Flatland" and rod-model solutions, as special cases. The Case-style eigenmode method is applied to the general problem and the internal scalar densities, emerging distributions, and their respective moments are expressed in closed-form. Universal properties invariant to dimension @math are highlighted and we find that a discrete diffusion mode is not universal for @math in absorbing media. We also find unexpected correspondences between differing dimensions and between anisotropic 3D scattering and isotropic scattering in high dimension. | Transport in both the rod and Flatland settings finds numerous application in practice. The rod model is equivalent to the two-stream approximation in plane-parallel atmospheric scattering @cite_85 @cite_22 , which is still a common method of solution for radiation budgets @cite_54 . Transport in Flatland also has many real-world applications, such as sea echo @cite_71 , seismology @cite_96 , animal migration @cite_37 @cite_74 , and wave propagation and diffraction in plates and ice @cite_15 @cite_62 @cite_39 . Also, planar waveguides comprised of dielectric plates with controlled or random patterns of holes lead to 2D transport and have proven useful for studying engineered disorder @cite_35 . Similarly, bundles of aligned dielectric fibers, such as clumps of hair or fur, can also be treated with a Flatland approach @cite_97 @cite_79 @cite_88 @cite_9 , where it is common to employ an approximate separable product of 1D and 2D solutions @cite_75 . Reactor design also makes use of such 2D 1D decompositions @cite_89 . | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_62",
"@cite_22",
"@cite_97",
"@cite_54",
"@cite_85",
"@cite_96",
"@cite_9",
"@cite_39",
"@cite_89",
"@cite_79",
"@cite_71",
"@cite_74",
"@cite_88",
"@cite_15",
"@cite_75"
],
"mid": [
"2764320753",
"2147673836",
"2078604327",
"",
"",
"1965527585",
"2088612131",
"1647526265",
"2963630503",
"1998357033",
"1953611007",
"",
"1996176293",
"1976744965",
"",
"",
"2128487064"
],
"abstract": [
"There are a number of approaches to coupling light with thin-film devices such as solar cells. The demonstration now that multiple scattering processes in two-dimensional random media enable efficient light trapping suggests new possibilities for photon management with the benefit of broad spectral and angular operation.",
"Aggregation is widespread in invertebrate societies and can appear in response to environmental heterogeneities or by attraction between individuals. We performed experiments with cockroach, Blattella germanica, larvae in a homogeneous environment to investigate the influence of interactions between individuals on aggregations. Different densities were tested. A first phase led to radial dispersion of larvae in relation to wall-following behaviours; the consequence of this process was a homogeneous distribution of larvae around the periphery of the arena. A second phase corresponded to angular reorganization of larvae leading to the formation of aggregates. The phenomenon was analysed both at the individual and collective levels. Individual cockroaches modulated their behaviour depending on the presence of other larvae in their vicinity: probabilities of stopping and resting times were both higher when the numbers of larvae were greater. We then developed an agent-based model implementing individual behavioural rules, all derived from experiments, to explain the aggregation dynamics at the collective level. This study supports evidence that aggregation relies on mechanisms of amplification, supported by interactions between individuals that follow simple rules based on local information and without knowledge of the global structure.",
"We study transport and diffusion of classical waves in two-dimensional disordered systems and in particular surface waves on a flat surface with randomly fluctuating impedance. We derive from first principles a radiative transport equation for the angularly resolved energy density of the surface waves. This equation accounts for multiple scattering of surface waves as well as for their decay because of leakage into volume waves. We analyze the dependence of the scattering mean free path and of the decay rate on the power spectrum of fluctuations. We also consider the diffusion approximation of the surface radiative transport equation and calculate the angular distribution of the energy transmitted by a strip of random surface impedance.",
"",
"",
"Abstract Existing two-stream approximations to radiative transfer theory for particulate media are shown to be represented by identical forms of coupled differential equations if the intensity is replaced by integrals of the intensity over hemispheres. One set of solutions thus suffices for all methods and provides convenient analytical comparisons. The equations also suggest modifications of the standard techniques so as to duplicate exact solutions for thin atmospheres and thus permit accurate determinations of the effects of typical aerosol layers. Numerical results for the plane albedos of plane-parallel atmospheres (single-scattering albedo = 0.8, 1.0; optical thickness = 0.25, 1, 4, 16; Henyey-Greenstein phase function with asymmetry factor 0.75) are given for conventional and modified Eddington approximations, conventional and modified two-point quadrature schemes, the hemispheric-constant method and the delta-function method, all for comparison with accurate discrete-ordinate solutions. A new two-...",
"In a gear tooth system for the pumping wheels of a gear pump, having an auxiliary driving transmission, the tooth profiles of the pumping gearwheels are of involute shape, the operational or working pressure angle is greater than 40 DEG , and the transverse contact ratio is approximately 0.5.",
"Introduction.- Heterogeneity in the Lithosphere.- Phenomenological Approaches to Seismogram Envelopes in short-periods.- Born approximation for Wave Scattering in Random Media.- Attenuation of High-Frequency Seismic Waves.- Synthesis of Three-Component Seismogram Envelopes for Earthquakes Using Scattering Amplitudes from the Born Approximation.- Envelope Synthesis Based on the Radiative Transfer Theory: Multiple Scattering Models.- Parabolic approximation and Envelope Synthesis based on the Markov Approximation. Summary and Epilogue.",
"A growing acceptance of fiber-reinforced composite materials imparts some relevance to exploring the effects which a predominantly linear scattering lattice may have upon interior radiative transport. Indeed, a central feature of electromagnetic wave propagation within such a lattice, if sufficiently dilute, is ray confinement to cones whose half-angles are set by that between lattice and the incident ray. When such propagation is subordinated to a viewpoint of an unpolarized intensity transport, one arrives at a somewhat simplified variant of the Boltzmann equation with spherical scattering demoted to its cylindrical counterpart. With a view to initiating a hopefully wider discussion of such phenomena, we follow through in detail the half-space albedo problem. This is done first along canonical lines that harness the Wiener-Hopf technique, and then once more in a discrete ordinates setting via flux decomposition along the eigenbasis of the underlying attenuation scattering matrix. Good agreement is seen to prevail. We further suggest that the Case singular eigenfunction apparatus could likewise be evolved here in close analogy to its original, spherical scattering model. A cursory contact with related problems in the astrophysical literature suggests, in addition, that the basic physical fidelity of our scalar radiative transfer equation (RTE) remains open to improvement by passage to a (4×1) Stokes vector, (4×4) matricial setting.",
"Abstract We present a linear Boltzmann equation to model wave scattering in the Marginal Ice Zone (the region of ocean which consists of broken ice floes). The equation is derived by two methods, the first based on [Meylan, M.H., Squire, V.A., Fox, C., 1997. Towards realism in modeling ocean wave behavior in marginal ice zones. J. Geophys. Res. 102 (C10), 22981–22991] and second based on Masson and LeBlond [Masson, D., LeBlond, P., 1989. Spectral evolution of wind-generated surface gravity waves in a dispersed ice field. J. Fluid Mech. 202, 111–136]. This linear Boltzmann equation, we believe, is more suitable than the equation presented in Masson and LeBlond [Masson, D., LeBlond, P., 1989. Spectral evolution of wind-generated surface gravity waves in a dispersed ice field. J. Fluid Mech. 202, 111–136] because of its simpler form, because it is a differential rather than difference equation and because it does not depend on any assumptions about the ice floe geometry. However, the linear Boltzmann equation presented here is equivalent to the equation in Masson and LeBlond [Masson, D., LeBlond, P., 1989. Spectral evolution of wind-generated surface gravity waves in a dispersed ice field. J. Fluid Mech. 202, 111–136] since it is derived from their equation. Furthermore, the linear Boltzmann equation is also derived independently using the argument in [Meylan, M.H., Squire, V.A., Fox, C., 1997. Towards realism in modeling ocean wave behavior in marginal ice zones. J. Geophys. Res. 102 (C10), 22981–22991]. We also present details of how the scattering kernel in the linear Boltzmann equation is found from the scattering by an individual ice floe and show how the linear Boltzmann equation can be solved straightforwardly in certain cases.",
"Abstract A new “2D/1D” equation is proposed to approximate the 3D linear Boltzmann equation. The approximate 2D/1D equation preserves the exact transport physics in the radial directions x and y but employs diffusion physics in the axial direction z. The 2D/1D equation can be systematically discretized, yielding accurate simulation methods for 3D reactor core problems. The resulting 2D/1D solutions are more accurate than 3D diffusion solutions, and are less expensive to calculate than standard 3D transport solutions. In this paper, we (i) show that the simplest 2D/1D equation has certain desirable properties, (ii) systematically discretize this equation, (iii) derive stable iteration schemes for solving the discrete system of equations, and (iv) give numerical results for simple problems that confirm the theoretical predictions of accuracy and iterative stability.",
"",
"A mathematical model for non-Rayleigh microwave sea echo is developed which describes explicitly the dependence of statistical properties of the radar cross section on the area of sea surface illuminated by the radar. In addition to the first probability distribution of the scattered radiation, its temporal and spatial correlation functions are also considered. It is shown that, in general, these correlation functions decay on at least two scales, the second, non-Rayleigh, contributions being strongly dependent on the properties of a \"single scatterer.\" Predictions of the model are found to be in qualitative agreement with existing experimental data. A new class of probability distributions, the \"K-distributions,\" is introduced, which may prove useful for fitting such data.",
"In this paper, we intend to formulate a new meta-heuristic algorithm, called Cuckoo Search (CS), for solving optimization problems. This algorithm is based on the obligate brood parasitic behaviour of some cuckoo species in combination with the Levy flight behaviour of some birds and fruit flies. We validate the proposed algorithm against test functions and then compare its performance with those of genetic algorithms and particle swarm optimization. Finally, we discuss the implication of the results and suggestion for further research.",
"",
"",
"Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay's classic phenomenological model. We have made new measurements of scattering from individual hair fibers that exhibit visually significant effects not predicted by Kajiya and Kay's model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show a multiple specular highlight and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. In a comparison between a photograph and rendered images, we demonstrate the new model's ability to match the appearance of real hair."
]
} |
1905.07825 | 2946562100 | We solve the classic albedo and Milne problems of plane-parallel illumination of an isotropically-scattering half-space when generalized to a Euclidean domain @math for arbitrary @math . A continuous family of pseudo-problems and related @math functions arises and includes the classical 3D solutions, as well as 2D "Flatland" and rod-model solutions, as special cases. The Case-style eigenmode method is applied to the general problem and the internal scalar densities, emerging distributions, and their respective moments are expressed in closed-form. Universal properties invariant to dimension @math are highlighted and we find that a discrete diffusion mode is not universal for @math in absorbing media. We also find unexpected correspondences between differing dimensions and between anisotropic 3D scattering and isotropic scattering in high dimension. | Going beyond 3D, higher-order dimensions occasionally find application in practice. Exact time-dependent solutions in 2D and 4D have been combined to approximate the unknown 3D solution for the isotropic point source in infinite media @cite_63 , and later applied to a time-dependent searchlight problem using the method of images @cite_23 . In the study of cosmic microwave background radiation it has even been considered to change dimension over the course of a single random flight @cite_83 . | {
"cite_N": [
"@cite_23",
"@cite_63",
"@cite_83"
],
"mid": [
"2049719606",
"2156027049",
"2005071721"
],
"abstract": [
"The Green’s function of the time dependent radiative transfer equation for the semi-infinite medium is derived for the first time by a heuristic approach based on the extrapolated boundary condition and on an almost exact solution for the infinite medium. Monte Carlo simulations performed both in the simple case of isotropic scattering and of an isotropic point-like source, and in the more realistic case of anisotropic scattering and pencil beam source, are used to validate the heuristic Green’s function. Except for the very early times, the proposed solution has an excellent accuracy (>98% for the isotropic case, and >97% for the anisotropic case) significantly better than the diffusion equation. The use of this solution could be extremely useful in the biomedical optics field where it can be directly employed in conditions where the use of the diffusion equation is limited, e.g. small volume samples, high absorption and/or low scattering media, short source-receiver distances and early times. Also it represents a first step to derive tools for other geometries (e.g. slab and slab with inhomogeneities inside) of practical interest for noninvasive spectroscopy and diffuse optical imaging. Moreover the proposed solution can be useful to several research fields where the study of a transport process is fundamental.",
"The time-dependent Boltzmann equation, which describes the propagation of radiation from a point source in a random medium, is solved exactly in Fourier space. An explicit expression in real space is given in two and four dimensions. In three dimensions an accurate interpolation formula is found. The average intensity at a large distance @math from the source has two peaks, a ballistic peak at time @math and a diffusion peak at @math (with @math the velocity and @math the diffusion coefficient). We find that forward scattering adds a tail to the ballistic peak in two and three dimensions, @math and @math , respectively. Expressions in the literature do not contain this tail.",
"We shall study random flights that start in a space of one given dimension and, after performing a definite number of steps, continue to develop in a space of higher dimension. We show that if the difference of the dimension of spaces is even, then the probability density describing the composite flight can be expressed as marginalizations of the probability density associated to a random flight in the space of less dimensions. This dimensional reduction is a consequence of Gegenbauer addition theorem."
]
} |
1905.07825 | 2946562100 | We solve the classic albedo and Milne problems of plane-parallel illumination of an isotropically-scattering half-space when generalized to a Euclidean domain @math for arbitrary @math . A continuous family of pseudo-problems and related @math functions arises and includes the classical 3D solutions, as well as 2D "Flatland" and rod-model solutions, as special cases. The Case-style eigenmode method is applied to the general problem and the internal scalar densities, emerging distributions, and their respective moments are expressed in closed-form. Universal properties invariant to dimension @math are highlighted and we find that a discrete diffusion mode is not universal for @math in absorbing media. We also find unexpected correspondences between differing dimensions and between anisotropic 3D scattering and isotropic scattering in high dimension. | The study of transport problem in the case of general dimension can reveal how dimension @math impacts various aspects of the solution. New insights about 3D transport have been found by identifying correspondences between problems of differing configurations and dimensionalities @cite_52 @cite_31 @cite_30 , which often appear unexpectedly. Most investigations of this nature have considered only infinite media. In this work, we identify some new exact correspondences regarding anisotropic scattering in a 3D half space. | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_52"
],
"mid": [
"2072616852",
"2106143926",
"2059725015"
],
"abstract": [
"",
"Two random-walk related problems which have been studied independently in the past, the expected maximum of a random walker in one dimension and the flux to a spherical trap of particles undergoing discrete jumps in three dimensions, are shown to be closely related to each other and are studied using a unified approach as a solution to a Wiener-Hopf problem. For the flux problem, this work shows that a constant c = 0.29795219 which appeared in the context of the boundary extrapolation length, and was previously found only numerically, can be derived analytically. The same constant enters in higher-order corrections to the expected-maximum asymptotics. As a byproduct, we also prove a new universal result in the context of the flux problem which is an analogue of the Sparre Andersen theorem proved in the context of the random walker's maximum.",
"also with probability 1. This result is proved in ?5. The difficulty in proving lower bounds like (1.2) is that one has to consider all possible coverings of the path by small convex sets in the definition of 4-measure. In the past the only successful method has been to use the connection between Hausdorff measures and generalized capacities. The first result of this kind appears in [16] where it is proved that for k >2, the Hausdorff measure with respect to ta is infinite for all a 3 we obtain a \"law of the iterated logarithm\" for the total time Tk(a, co) spent by the path X in a sphere of radius a as a-* 0+. We also determine the local behavior of the first passage time Pk(a, co) out of a sphere of radius a for k > 1. In order to obtain the required asymptotic results we required good estimates of the distribution functions for the random variables Tk(a, co), Pk(a, c). We use the method developed by Mark Kac to compute these dis-"
]
} |
1905.07825 | 2946562100 | We solve the classic albedo and Milne problems of plane-parallel illumination of an isotropically-scattering half-space when generalized to a Euclidean domain @math for arbitrary @math . A continuous family of pseudo-problems and related @math functions arises and includes the classical 3D solutions, as well as 2D "Flatland" and rod-model solutions, as special cases. The Case-style eigenmode method is applied to the general problem and the internal scalar densities, emerging distributions, and their respective moments are expressed in closed-form. Universal properties invariant to dimension @math are highlighted and we find that a discrete diffusion mode is not universal for @math in absorbing media. We also find unexpected correspondences between differing dimensions and between anisotropic 3D scattering and isotropic scattering in high dimension. | For infinite homogeneous media, Green's functions for monoenergetic linear transport with isotropic @cite_63 @cite_49 @cite_33 @cite_41 @cite_84 @cite_80 and anisotropic @cite_86 @cite_1 @cite_73 scattering are known in domains apart from 3D. The unidirectional point source was also considered in Flatland @cite_36 . In these infinite domains, the non-universal role of diffusion as a rigorous asymptote'' of the full solution for general dimension with absorption was observed @cite_14 @cite_84 . We expand on these findings for the isotropic scattering case, and find a simple algebraic condition for diffusion asymptotics to arise. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_41",
"@cite_36",
"@cite_1",
"@cite_84",
"@cite_86",
"@cite_49",
"@cite_63",
"@cite_80",
"@cite_73"
],
"mid": [
"2018747923",
"2044711402",
"2049691331",
"2510959238",
"2116625471",
"2107648085",
"2072111654",
"2065324197",
"2156027049",
"",
"2256032327"
],
"abstract": [
"",
"We present a general method of studying the transport process ( X(t) ) , t≥0, in the Euclidean space ℝ m , m≥2, based on the analysis of the integral transforms of its distributions. We show that the joint characteristic functions of ( X(t) ) are connected with each other by a convolution-type recurrent relation. This enables us to prove that the characteristic function (Fourier transform) of ( X(t) ) in any dimension m≥2 satisfies a convolution-type Volterra integral equation of second kind. We give its solution and obtain the characteristic function of ( X(t) ) in terms of the multiple convolutions of the kernel of the equation with itself. An explicit form of the Laplace transform of the characteristic function in any dimension is given. The complete solution of the problem of finding the initial conditions for the governing partial differential equations, is given.",
"In this paper we analyze some aspects of exponential flights , a stochastic process that governs the evolution of many random transport phenomena, such as neutron propagation, chemical biological species migration, or electron motion. We introduce a general framework for @math -dimensional setups, and emphasize that exponential flights represent a deceivingly simple system, where in most cases closed-form formulas can hardly be obtained. We derive a number of novel exact (where possible) or asymptotic results, among which the stationary probability density for 2d systems, a long-standing issue in Physics, and the mean residence time in a given volume. Bounded or unbounded, as well as scattering or absorbing domains are examined, and Monte Carlo simulations are performed so as to support our findings.",
"We compute the exact solutions of the radiative transfer equation in two dimensions for isotropic scattering. The intensity and the radiance are given in the space–time domain when the source is punctual and isotropic or unidirectional. These analytical results are compared to Monte-Carlo simulations in four particular situations.",
"In this study, Green’s function of the two-dimensional radiative transfer equation is derived for an infinitely extended anisotropically scattering medium, which is illuminated by a unidirectional source distribution. In the steady-state domain, the final results, which are based on eigenvalues and eigenvectors, are given analytically apart from the eigenvalues. For the time-dependent case an additional numerical inverse Fourier transform is required. The obtained solutions were successfully validated with another exact analytical solution in the time domain for isotropically scattering and with the Monte Carlo method for anisotropically scattering media.",
"We derive new diffusion solutions to the monoenergetic generalized linear Boltzmann transport equation for the stationary collision density and scalar flux about an isotropic point source in an infinite d-dimensional absorbing medium with isotropic scattering. We consider both classical transport theory with exponentially distributed free paths in arbitrary dimensions as well as a number of nonclassical transport theories (nonexponential random flights) that describe a broader class of transport processes within partially correlated random media. New rigorous asymptotic diffusion approximations are derived where possible. We also generalize Grosjean’s moment-preserving approach of separating the first (or uncollided) distribution from the collided portion and approximating only the latter using diffusion. We find that for any spatial dimension and for many free-path distributions Grosjean’s approach produces compact, analytic approximations that are, overall, more accurate for high absorption and for smal...",
"Synopsis This paper deals with the solution of the generalized k -dimensional random flight problem, where the paths are distributed according to given probability functions and where the scattering collisions are non-isotropic. This is achieved by means of a new method which is largely based on recurrence relations and therefore very different from the well-known procedure using discontinuous integrals. In the special case, when the collisions are isotropic and the paths have given lengths, the new results become identical to those, which were originally obtained by the older method.",
"We consider the planar random motion of a particle that moves with constant finite speed c and, at Poisson-distributed times, changes its direction θ with uniform law in [0, 2π). This model represents the natural two-dimensional counterpart of the well-known Goldstein-Kac telegraph process. For the particle's position (X(t), Y(t)), t > 0, we obtain the explicit conditional distribution when the number of changes of direction is fixed. From this, we derive the explicit probability law f(x, y, t) of (X(t), Y(t)) and show that the density p(x, y, t) of its absolutely continuous component is the fundamental solution to the planar wave equation with damping. We also show that, under the usual Kac condition on the velocity c and the intensity λ of the Poisson process, the density p tends to the transition density of planar Brownian motion. Some discussions concerning the probabilistic structure of wave diffusion with damping are presented and some applications of the model are sketched.",
"The time-dependent Boltzmann equation, which describes the propagation of radiation from a point source in a random medium, is solved exactly in Fourier space. An explicit expression in real space is given in two and four dimensions. In three dimensions an accurate interpolation formula is found. The average intensity at a large distance @math from the source has two peaks, a ballistic peak at time @math and a diffusion peak at @math (with @math the velocity and @math the diffusion coefficient). We find that forward scattering adds a tail to the ballistic peak in two and three dimensions, @math and @math , respectively. Expressions in the literature do not contain this tail.",
"",
"The linear Boltzmann equation can be solved with separation of variables in one dimension, i.e., in three-dimensional space with planar symmetry. In this method, solutions are given by superpositions of eigenmodes which are sometimes called singular eigenfunctions. In this paper, we explore the singular-eigenfunction approach in flatland or two-dimensional space."
]
} |
1905.07825 | 2946562100 | We solve the classic albedo and Milne problems of plane-parallel illumination of an isotropically-scattering half-space when generalized to a Euclidean domain @math for arbitrary @math . A continuous family of pseudo-problems and related @math functions arises and includes the classical 3D solutions, as well as 2D "Flatland" and rod-model solutions, as special cases. The Case-style eigenmode method is applied to the general problem and the internal scalar densities, emerging distributions, and their respective moments are expressed in closed-form. Universal properties invariant to dimension @math are highlighted and we find that a discrete diffusion mode is not universal for @math in absorbing media. We also find unexpected correspondences between differing dimensions and between anisotropic 3D scattering and isotropic scattering in high dimension. | For the case of bounded domains, isotropic scattering in a Flatland half space has been solved in a number of works @cite_15 @cite_62 @cite_77 @cite_9 . Slab geometry @cite_79 and layered problems @cite_67 in Flatland have also been solved. The study of inverse problems in plane-parallel domains of general dimension has been considered in a number of works (see, for example, @cite_92 ). We expand on these solutions by considering general dimension and by producing the singular eigenfunctions, whose orthogonality properties allow derivation of the moments of the internal scalar flux and angular distributions. These moment derivations complement the mean, variance and general moments previously produced for 3D @cite_69 @cite_8 and are useful for forming approximate searchlight approximations @cite_56 and for guiding Monte Carlo estimators towards zero-variance @cite_17 . | {
"cite_N": [
"@cite_67",
"@cite_62",
"@cite_69",
"@cite_8",
"@cite_92",
"@cite_9",
"@cite_56",
"@cite_77",
"@cite_79",
"@cite_15",
"@cite_17"
],
"mid": [
"2086349958",
"2078604327",
"1996989134",
"2227626537",
"",
"2963630503",
"2047361484",
"2964349548",
"",
"",
""
],
"abstract": [
"Abstract This study presents an analytical approach for obtaining Green's function of the two-dimensional radiative transfer equation to the boundary-value problem of a layered medium. A conventional Fourier transform and a modified Fourier series which is defined in a rotated reference frame are applied to derive an analytical solution of the radiance in the transformed space. The Monte Carlo method was used for a successful validation of the derived solutions.",
"We study transport and diffusion of classical waves in two-dimensional disordered systems and in particular surface waves on a flat surface with randomly fluctuating impedance. We derive from first principles a radiative transport equation for the angularly resolved energy density of the surface waves. This equation accounts for multiple scattering of surface waves as well as for their decay because of leakage into volume waves. We analyze the dependence of the scattering mean free path and of the decay rate on the power spectrum of fluctuations. We also consider the diffusion approximation of the surface radiative transport equation and calculate the angular distribution of the energy transmitted by a strip of random surface impedance.",
"",
"Abstract Two classic problems of radiative transfer and neutron transport are solved for a spatially-uniform semi-infinite medium with isotropic scattering. General analytical equations are derived for (1) angular moments of the outward current and (2) spatial moments of the total flux within the half-space. Such moments, for example, can provide analytically explicit equations for determining the surface albedo of the medium as well as the mean depth and mean square distance of travel within the medium. The analysis is done with the Case-style eigenmode method as expressed in terms of the Chandrasekhar H -function and its moments.",
"",
"A growing acceptance of fiber-reinforced composite materials imparts some relevance to exploring the effects which a predominantly linear scattering lattice may have upon interior radiative transport. Indeed, a central feature of electromagnetic wave propagation within such a lattice, if sufficiently dilute, is ray confinement to cones whose half-angles are set by that between lattice and the incident ray. When such propagation is subordinated to a viewpoint of an unpolarized intensity transport, one arrives at a somewhat simplified variant of the Boltzmann equation with spherical scattering demoted to its cylindrical counterpart. With a view to initiating a hopefully wider discussion of such phenomena, we follow through in detail the half-space albedo problem. This is done first along canonical lines that harness the Wiener-Hopf technique, and then once more in a discrete ordinates setting via flux decomposition along the eigenbasis of the underlying attenuation scattering matrix. Good agreement is seen to prevail. We further suggest that the Case singular eigenfunction apparatus could likewise be evolved here in close analogy to its original, spherical scattering model. A cursory contact with related problems in the astrophysical literature suggests, in addition, that the basic physical fidelity of our scalar radiative transfer equation (RTE) remains open to improvement by passage to a (4×1) Stokes vector, (4×4) matricial setting.",
"",
"AbstractWe solve the Milne, constant-source, and albedo problems for isotropic scattering in a two-dimensional “Flatland” half-space via the Wiener–Hopf method. The Flatland H-function is derived a...",
"",
"",
""
]
} |
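The Chandrasekhar H-function quoted in the half-space moment formulas above can be approximated numerically. The sketch below solves the classical 3-D isotropic-scattering H-equation by fixed-point iteration on a midpoint grid; it is an illustrative baseline, not the general-dimension H-functions derived in the entry above (the grid size `n` and iteration count are arbitrary choices):

```python
def h_function(c, n=64, iters=400):
    """Solve H(mu) = 1 / (1 - mu*(c/2) * int_0^1 H(t)/(mu+t) dt), the
    isotropic-scattering Chandrasekhar H-equation with single-scatter
    albedo 0 < c < 1, by fixed-point iteration on a midpoint grid."""
    mus = [(k + 0.5) / n for k in range(n)]  # midpoint nodes on (0, 1)
    w = 1.0 / n                              # uniform midpoint weight
    H = [1.0] * n
    for _ in range(iters):
        H = [1.0 / (1.0 - mu * (c / 2.0)
                    * sum(w * Hj / (mu + muj) for muj, Hj in zip(mus, H)))
             for mu in mus]
    return mus, H
```

A standard sanity check is the zeroth-moment identity for isotropic scattering, ∫₀¹ H(μ) dμ = (2/c)(1 − √(1 − c)), which the discrete solution should reproduce to quadrature accuracy.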
1905.07986 | 2945762252 | Online algorithms that allow a small amount of migration or recourse have been intensively studied in recent years. They are essential in the design of competitive algorithms for dynamic problems, where objects can also depart from the instance. In this work, we give a general framework to obtain so-called robust online algorithms for these dynamic problems: these online algorithms achieve an asymptotic competitive ratio of @math with migration @math , where @math is the best known offline asymptotic approximation ratio. In order to use our framework, one only needs to construct a suitable online algorithm for the static online case, where items never depart. To show the usefulness of our approach, we improve upon the best known robust algorithms for the dynamic versions of generalizations of Strip Packing and Bin Packing, including the first robust algorithm for general @math -dimensional Bin Packing and Vector Packing. | The offline variants of the geometric packing problems have also been studied intensively. See, e.g., the survey of Christensen @cite_23 and the references therein. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2577702044"
],
"abstract": [
"Abstract The bin packing problem is a well-studied problem in combinatorial optimization. In the classical bin packing problem, we are given a list of real numbers in ( 0 , 1 ] and the goal is to place them in a minimum number of bins so that no bin holds numbers summing to more than 1. The problem is extremely important in practice and finds numerous applications in scheduling, routing and resource allocation problems. Theoretically the problem has rich connections with discrepancy theory, iterative methods, entropy rounding and has led to the development of several algorithmic techniques. In this survey we consider approximation and online algorithms for several classical generalizations of bin packing problem such as geometric bin packing, vector bin packing and various other related problems. There is also a vast literature on mathematical models and exact algorithms for bin packing. However, this survey does not address such exact algorithms. In two-dimensional geometric bin packing , we are given a collection of rectangular items to be packed into a minimum number of unit size square bins. This variant has a lot of applications in cutting stock, vehicle loading, pallet packing, memory allocation and several other logistics and robotics related problems. In d -dimensional vector bin packing , each item is a d -dimensional vector that needs to be packed into unit vector bins. This problem is of great significance in resource constrained scheduling and in recent virtual machine placement in cloud computing. We also consider several other generalizations of bin packing such as geometric knapsack, strip packing and other related problems such as vector scheduling, vector covering etc. We survey algorithms for these problems in offline and online setting, and also mention results for several important special cases. We briefly mention related techniques used in the design and analysis of these algorithms. In the end we conclude with a list of open problems."
]
} |
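To ground the classical problem stated in the survey abstract above, here is a minimal sketch of the first-fit heuristic for one-dimensional bin packing. It is a textbook online baseline, not one of the robust or dynamic algorithms discussed in the entry:

```python
def first_fit(items):
    """Place each item (a size in (0, 1]) into the first bin that still
    has room; open a new unit-capacity bin when none fits.  Returns the
    number of bins used."""
    bins = []  # bins[i] = total size already packed into bin i
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= 1.0:
                bins[i] = load + size
                break
        else:                      # no existing bin fits this item
            bins.append(size)
    return len(bins)
```

First fit processes items online without migration; its asymptotic competitive ratio is the classical 1.7 bound.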
1905.07856 | 2953557370 | We study pragmatics in political campaign text, through analysis of speech acts and the target of each utterance. We propose a new annotation schema incorporating domain-specific speech acts, such as commissive-action, and present a novel annotated corpus of media releases and speech transcripts from the 2016 Australian election cycle. We show how speech acts and target referents can be modeled as sequential classification, and evaluate several techniques, exploiting contextualized word representations, semi-supervised learning, task dependencies and speaker meta-data. | The recent adoption of NLP methods has led to significant advances in the field of computational social science @cite_33 , including political science @cite_22 . With the increasing availability of datasets and computational resources, large-scale comparative political text analysis has gained the attention of political scientists @cite_34 . One task of particular importance is the analysis of the functional intent of utterances in political text. Though it has received notable attention from many political scientists (see intro ), the primary focus of almost all work has been to derive insights from manual annotations, and not to study computational approaches to automate the task. | {
"cite_N": [
"@cite_22",
"@cite_34",
"@cite_33"
],
"mid": [
"2095655043",
"2159544539",
"2417643204"
],
"abstract": [
"Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation. Language is the medium for politics and political conflict. Candidates debate and state policy positions during a campaign. Once elected, representatives write and debate legislation. After laws are passed, bureaucrats solicit comments before they issue regulations. Nations regularly negotiate and then sign peace treaties, with language that signals the motivations and relative power of the countries involved. News reports document the day-to-day affairs of international relations that provide a detailed picture of conflict and cooperation. Individual candidates and political parties articulate their views through party platforms and manifestos. Terrorist groups even reveal their preferences and goals through recruiting materials, magazines, and public statements. These examples, and many others throughout political science, show that to understand what politics is about we need to know what political actors are saying and writing.
Recognizing that language is central to the study of politics is not new. To the contrary, scholars of politics have long recognized that much of politics is expressed in words. But scholars have struggled when using texts to make inferences about politics. The primary problem is volume: there are simply too many political texts. Rarely are scholars able to manually read all the texts in even moderately sized corpora. And hiring coders to manually read all documents is still very expensive. The result is that",
"Recent advances in research tools for the systematic analysis of textual data are enabling exciting new research throughout the social sciences. For comparative politics, scholars who are often interested in nonEnglish and possibly multilingual textual datasets, these advances may be difficult to access. This article discusses practical issues that arise in the processing, management, translation, and analysis of textual data with a particular focus on how procedures differ across languages. These procedures are combined in two applied examples of automated text analysis using the recently introduced Structural Topic Model. We also show how the model can be used to analyze data that have been translated into a single language via machine translation tools. All the methods we describe here are implemented in open-source software packages available from the authors.",
"We live life in the network. When we wake up in the morning, we check our e-mail, make a quick phone call, walk outside (our movements captured by a high definition video camera), get on the bus (swiping our RFID mass transit cards) or drive (using a transponder to zip through the tolls). We arrive at the airport, making sure to purchase a sandwich with a credit card before boarding the plane, and check our BlackBerries shortly before takeoff. Or we visit the doctor or the car mechanic, generating digital records of what our medical or automotive problems are. We post blog entries confiding to the world our thoughts and feelings, or maintain personal social network profiles revealing our friendships and our tastes. Each of these transactions leaves digital breadcrumbs which, when pulled together, offer increasingly comprehensive pictures of both individuals and groups, with the potential of transforming our understanding of our lives, organizations, and societies in a fashion that was barely conceivable just a few years ago."
]
} |
1905.07856 | 2953557370 | We study pragmatics in political campaign text, through analysis of speech acts and the target of each utterance. We propose a new annotation schema incorporating domain-specific speech acts, such as commissive-action, and present a novel annotated corpus of media releases and speech transcripts from the 2016 Australian election cycle. We show how speech acts and target referents can be modeled as sequential classification, and evaluate several techniques, exploiting contextualized word representations, semi-supervised learning, task dependencies and speaker meta-data. | Speech act theory is fundamental to the study of such discourse and pragmatics @cite_13 @cite_26 . A speech act is an illocutionary act of conversation and reflects shallow discourse structures of language. Due to its predominantly small-data setting, speech act classification approaches have generally relied on bag-of-words models @cite_24 @cite_9 , although recent approaches have used deep-learning models through data augmentation @cite_14 and learning word representations for the target domain @cite_23 , outperforming traditional bag-of-words approaches. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_9",
"@cite_24",
"@cite_23",
"@cite_13"
],
"mid": [
"2517028602",
"",
"",
"2122491924",
"2889788424",
"1576632330"
],
"abstract": [
"This paper addresses the problem of speech act recognition in written asynchronous conversations (e.g., fora, emails). We propose a class of conditional structured models defined over arbitrary graph structures to capture the conversational dependencies between sentences. Our models use sentence representations encoded by a long short term memory (LSTM) recurrent neural model. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTMs provide better task-specific representations, and (ii) the global joint model improves over local models.",
"",
"",
"This research studies the text genre of message board forums, which contain a mixture of expository sentences that present factual information and conversational sentences that include communicative acts between the writer and readers. Our goal is to create sentence classifiers that can identify whether a sentence contains a speech act, and can recognize sentences containing four different speech act classes: Commissives, Directives, Expressives, and Representatives. We conduct experiments using a wide variety of features, including lexical and syntactic features, speech act word lists from external resources, and domain-specific semantic class features. We evaluate our results on a collection of message board posts in the domain of veterinary medicine.",
"Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations. Our approach works in two main steps: a long short-term memory recurrent neural network (LSTM-RNN) first encodes each sentence separately into a task-specific distributed representation, and this is then used in a conditional random field (CRF) model to capture the conversational dependencies between sentences. The LSTM-RNN model uses pretrained word embeddings learned from a large conversational corpus and is trained to classify sentences into speech act types. The CRF model can consider arbitrary graph structures to model conversational dependencies in an asynchronous conversation. In addition, to mitigate the problem of limited annotated data in the asynchronous domains, we adapt the LSTM-RNN model to learn ...",
"* Lecture I * Lecture II * Lecture III * Lecture IV * Lecture V * Lecture VI * Lecture VII * Lecture VIII * Lecture IX * Lecture X * Lecture XI * Lecture XII"
]
} |
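The bag-of-words baselines cited above can be made concrete with a tiny multinomial Naive Bayes sketch. The sentences, tokens, and act labels below are invented toy examples, and this is not the feature set of any cited system:

```python
import math
from collections import Counter

def train_nb(examples):
    """examples: list of (tokens, act).  Returns the model as
    (class prior counts, per-class word counts, vocabulary)."""
    priors, counts, vocab = Counter(), {}, set()
    for tokens, act in examples:
        priors[act] += 1
        counts.setdefault(act, Counter()).update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def predict_nb(model, tokens):
    """Return the speech-act class with the highest log posterior,
    using add-one smoothing over the training vocabulary."""
    priors, counts, vocab = model
    n = sum(priors.values())
    def score(act):
        total = sum(counts[act].values())
        s = math.log(priors[act] / n)
        for t in tokens:
            s += math.log((counts[act][t] + 1) / (total + len(vocab)))
        return s
    return max(priors, key=score)
```

Even this unigram model separates commissive from assertive phrasing on the toy data, which is roughly the regime in which bag-of-words approaches were competitive before deep models.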
1905.07856 | 2953557370 | We study pragmatics in political campaign text, through analysis of speech acts and the target of each utterance. We propose a new annotation schema incorporating domain-specific speech acts, such as commissive-action, and present a novel annotated corpus of media releases and speech transcripts from the 2016 Australian election cycle. We show how speech acts and target referents can be modeled as sequential classification, and evaluate several techniques, exploiting contextualized word representations, semi-supervised learning, task dependencies and speaker meta-data. | Another technique that has been applied to compensate for the sparsity of labeled data is semi-supervised learning, making use of auxiliary unlabeled data, as done previously for speech act classification in e-mail and forum text @cite_7 . also used semi-supervised methods for speech act classification over Twitter data. They used transductive SVM and graph-based label propagation approaches to annotate unlabeled data using a small seed training set. leveraged out-of-domain labeled data based on a domain adversarial learning approach. In this work, we focus on target based speech act analysis (with a custom class-set) for political campaign text and use a deep-learning approach by incorporating contextualized word representations @cite_27 and a cross-view training framework @cite_3 to leverage in-domain unlabeled text. | {
"cite_N": [
"@cite_27",
"@cite_3",
"@cite_7"
],
"mid": [
"2962739339",
"2891602716",
"2089285937"
],
"abstract": [
"We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.",
"",
"In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the Switchboard-DAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition."
]
} |
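The graph-based label propagation mentioned above can be sketched as a schematic majority-vote spread over a sentence-similarity graph. Node names and labels here are hypothetical, and real systems use weighted propagation or transductive SVMs rather than this toy rule:

```python
def propagate_labels(adj, labels, iters=20):
    """Spread labels from a small seed set over a graph: each unlabeled
    node takes the majority label of its already-labeled neighbours,
    repeated until no node changes or `iters` passes elapse.
    adj: node -> list of neighbour nodes; labels: seed node -> label."""
    labels = dict(labels)          # do not mutate the caller's seeds
    for _ in range(iters):
        updated = {}
        for node, nbrs in adj.items():
            if node in labels:
                continue
            votes = [labels[n] for n in nbrs if n in labels]
            if votes:
                updated[node] = max(set(votes), key=votes.count)
        if not updated:            # fixed point reached
            break
        labels.update(updated)
    return labels
```

Seed nodes keep their labels; unlabeled nodes acquire labels layer by layer outward from the seeds, which is the intuition behind using a small annotated set to label in-domain unlabeled text.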
1905.07518 | 2946790301 | Computing a quasi-developable strip surface bounded by design curves finds wide industrial applications. Existing methods compute discrete surfaces composed of developable lines connecting sampling points on input curves which are not adequate for generating smooth quasi-developable surfaces. We propose the first method which is capable of exploring the full solution space of continuous input curves to compute a smooth quasi-developable ruled surface with as large developability as possible. The resulting surface is exactly bounded by the input smooth curves and is guaranteed to have no self-intersections. The main contribution is a variational approach to compute a continuous mapping of parameters of input curves by minimizing a function evaluating surface developability. Moreover, we also present an algorithm to represent a resulting surface as a B-spline surface when input curves are B-spline curves. | Developable surface modeling has been widely studied in various disciplines. In developable object simulation, triangular meshes are the most frequently used representation. proposed a physically-based method for animating developable surfaces with nonconformal faces @cite_28 . encoded a developable surface as a set of ruling lines and found positions of ruling lines by relaxing mean curvature bending energy @cite_25 . proposed a local operator to modify a triangular mesh into piecewise developables @cite_9 . A planar quadrilateral mesh is another type of discrete developable surface which is based on a solid theoretical foundation. proposed a method for modeling a developable surface with a quadrilateral mesh by optimizing face planarization and performing mesh subdivision in an alternating manner @cite_4 . make use of quadrilateral meshes and define a discrete orthogonal geodesic net to model developable surfaces @cite_19 . | {
"cite_N": [
"@cite_4",
"@cite_28",
"@cite_9",
"@cite_19",
"@cite_25"
],
"mid": [
"2055410695",
"2006212003",
"2810303719",
"2963590369",
"1989961031"
],
"abstract": [
"In architectural freeform design, the relation between shape and fabrication poses new challenges and requires more sophistication from the underlying geometry. The new concept of conical meshes satisfies central requirements for this application: They are quadrilateral meshes with planar faces, and therefore particularly suitable for the design of freeform glass structures. Moreover, they possess a natural offsetting operation and provide a support structure orthogonal to the mesh. Being a discrete analogue of the network of principal curvature lines, they represent fundamental shape characteristics. We show how to optimize a quad mesh such that its faces become planar, or the mesh becomes even conical. Combining this perturbation with subdivision yields a powerful new modeling tool for all types of quad meshes with planar faces, making subdivision attractive for architecture design and providing an elegant way of modeling developable surfaces.",
"We present a new discretization for the physics-based animation of developable surfaces. Constrained to not deform at all in-plane but free to bend out-of-plane, these are an excellent approximation for many materials, including most cloth, paper, and stiffer materials. Unfortunately the conforming (geometrically continuous) discretizations used in graphics break down in this limit. Our nonconforming approach solves this problem, allowing us to simulate surfaces with zero in-plane deformation as a hard constraint. However, it produces discontinuous meshes, so we further couple this with a \"ghost\" conforming mesh for collision processing and rendering. We also propose a new second order accurate constrained mechanics time integration method that greatly reduces the numerical damping present in the usual first order methods used in graphics, for virtually no extra cost and sometimes significant speed-up.",
"This paper introduces a geometric flow that evolves a given arbitrary surface to a piecewise developable surface. Such surfaces are desirable for milling and fabrication from flat materials.",
"We present a discrete theory for modeling developable surfaces as quadrilateral meshes satisfying simple angle constraints. We demonstrate the effectiveness of our discrete model in a developable surface editing system.",
"We introduce a discrete paradigm for developable surface modeling. Unlike previous attempts at interactive developable surface modeling, our system is able to enforce exact developability at every step, ensuring that users do not inadvertently suggest configurations that leave the manifold of admissible folds of a flat two-dimensional sheet. With methods for navigation of this highly nonlinear constraint space in place, we show how to formulate a discrete mean curvature bending energy measuring how far a given discrete developable surface is from being flat. This energy enables relaxation of user-generated configurations and suggests a straightforward subdivision scheme that produces admissible smoothed versions of bent regions of our discrete developable surfaces. © 2012 Wiley Periodicals, Inc."
]
} |
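A standard discrete notion underlying the triangle-mesh methods above is that a mesh is intrinsically flat (developable) exactly when every interior vertex has zero angular defect. A minimal check, assuming an ordered closed one-ring of neighbour positions around the vertex:

```python
import math

def angle_defect(center, ring):
    """Angular defect 2*pi minus the sum of incident triangle angles at
    `center`, where `ring` is the ordered closed loop of neighbour
    vertices (3-D points).  Zero defect at every interior vertex
    characterises a discretely developable triangle mesh."""
    def angle(p, a, b):
        u = [a[i] - p[i] for i in range(3)]
        v = [b[i] - p[i] for i in range(3)]
        dot = sum(ui * vi for ui, vi in zip(u, v))
        nu = math.sqrt(sum(ui * ui for ui in u))
        nv = math.sqrt(sum(vi * vi for vi in v))
        # clamp to guard acos against rounding just outside [-1, 1]
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))
    total = sum(angle(center, ring[i], ring[(i + 1) % len(ring)])
                for i in range(len(ring)))
    return 2.0 * math.pi - total
```

A flat vertex returns zero; lifting the centre out of the plane of its ring produces a positive (cone-like) defect, which is what local flattening operators drive toward zero.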
1905.07518 | 2946790301 | Computing a quasi-developable strip surface bounded by design curves finds wide industrial applications. Existing methods compute discrete surfaces composed of developable lines connecting sampling points on input curves which are not adequate for generating smooth quasi-developable surfaces. We propose the first method which is capable of exploring the full solution space of continuous input curves to compute a smooth quasi-developable ruled surface with as large developability as possible. The resulting surface is exactly bounded by the input smooth curves and is guaranteed to have no self-intersections. The main contribution is a variational approach to compute a continuous mapping of parameters of input curves by minimizing a function evaluating surface developability. Moreover, we also present an algorithm to represent a resulting surface as a B-spline surface when input curves are B-spline curves. | Triangular meshes are also often employed in industrial design of developable shapes. propose to approximate a parametric surface with a minimum set of triangle strips with @math continuity @cite_6 @cite_1 . compute piecewise developable triangular meshes from arbitrary design curves @cite_23 . invented a system for interactive design of developable triangular meshes from sketched curves using a touch panel device @cite_11 . These methods are not capable of creating smooth developable surfaces. | {
"cite_N": [
"@cite_23",
"@cite_1",
"@cite_6",
"@cite_11"
],
"mid": [
"1530069394",
"2115687056",
"2165741765",
"2157953584"
],
"abstract": [
"Developable surfaces are surfaces that can be unfolded into the plane with no distortion. Although ubiquitous in our everyday surroundings, modeling them using existing tools requires significant geometric expertise and time. Our paper simplifies the modeling process by introducing an intuitive sketch-based approach for modeling developables. We develop an algorithm that given an arbitrary, user specified 3D polyline boundary, constructed using a sketching interface, generates a smooth discrete developable surface that interpolates this boundary. Our method utilizes the connection between developable surfaces and the convex hulls of their boundaries. The method explores the space of possible interpolating surfaces searching for a developable surface with desirable shape characteristics such as fairness and predictability. The algorithm is not restricted to any particular subset of developable surfaces. We demonstrate the effectiveness of our method through a series of examples, from architectural design to garments.",
"Developable surfaces have many desired properties in the manufacturing process. Since most existing CAD systems utilize tensor-product parametric surfaces including B-splines as design primitives, there is a great demand in industry to convert a general free-form parametric surface within a prescribed global error bound into developable patches. In this paper, we propose a practical and efficient solution to approximate a rectangular parametric surface with a small set of C0-joint developable strips. The key contribution of the proposed algorithm is that, several optimization problems are elegantly solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results are presented to demonstrate the effectiveness and stability of the proposed algorithm.",
"Developable surfaces have many desired properties in manufacturing process. Since most existing CAD systems utilize parametric surfaces as the design primitive, there is a great demand in industry to convert a parametric surface within a prescribed global error bound into developable patches. In this work we propose a simple and efficient solution to approximate a general parametric surface with a minimum set of C0-joint developable strips. The key contribution of the proposed algorithm is that, several global optimization problems are elegantly solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results are presented to demonstrate the effectiveness and stability of the proposed algorithm.",
"Design using free-form developable surfaces plays an important role in the manufacturing industry. Currently most commercial systems can only support converting free-form surfaces into approximate developable surfaces. Direct design using developable surfaces by interpolating descriptive curves is much desired in industry. In this paper, by enforcing a propagation scheme and observing its nesting and recursive nature, a dynamic programming method is proposed for the design task of interpolating 3D boundary curves with a discrete developable surface. By using dynamic programming, the interpolatory discrete developable surface is obtained by globally optimizing an objective that minimizes tangent plane variations over a boundary triangulation. The proposed method is simple and effective when used in industry. Experimental results are presented that demonstrate its practicality and efficiency in industrial design."
]
} |
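For a ruled surface x(u, v) = (1 − v)·c1(u) + v·c2(u) spanned between two boundary curves, a classical fact is that the ruling at u is developable iff the scalar triple product det[c2(u) − c1(u), c1′(u), c2′(u)] vanishes. The finite-difference check below illustrates that condition only; it is not the variational formulation of the entry above, and the step size `h` is an arbitrary choice:

```python
def triple(a, b, c):
    """Scalar triple product a . (b x c) of 3-vectors."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
          - a[1] * (b[0] * c[2] - b[2] * c[0])
          + a[2] * (b[0] * c[1] - b[1] * c[0]))

def developability_residual(c1, c2, u, h=1e-5):
    """det[c2(u)-c1(u), c1'(u), c2'(u)] for boundary curves c1, c2
    (callables R -> R^3), with derivatives approximated by central
    differences.  Zero means the ruling at u is developable (torsal)."""
    r = [c2(u)[i] - c1(u)[i] for i in range(3)]
    d1 = [(c1(u + h)[i] - c1(u - h)[i]) / (2 * h) for i in range(3)]
    d2 = [(c2(u + h)[i] - c2(u - h)[i]) / (2 * h) for i in range(3)]
    return triple(r, d1, d2)
```

A generalized cylinder (the same plane curve translated along z) gives zero residual at every u, while the hyperbolic paraboloid ruling between two skew lines gives a nonzero residual, matching the classical classification of ruled surfaces.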
1905.07518 | 2946790301 | Computing a quasi-developable strip surface bounded by design curves finds wide industrial applications. Existing methods compute discrete surfaces composed of developable lines connecting sampling points on input curves which are not adequate for generating smooth quasi-developable surfaces. We propose the first method which is capable of exploring the full solution space of continuous input curves to compute a smooth quasi-developable ruled surface with as large developability as possible. The resulting surface is exactly bounded by the input smooth curves and is guaranteed to have no self-intersections. The main contribution is a variational approach to compute a continuous mapping of parameters of input curves by minimizing a function evaluating surface developability. Moreover, we also present an algorithm to represent a resulting surface as a B-spline surface when input curves are B-spline curves. | use a composition of B-spline developable strips to approximate freeform architectural models @cite_16 , which essentially solve a constrained B-spline surface fitting problem @cite_17 . proposed an interactive developable surface design approach to create composite B-spline developable surfaces @cite_2 , in which a shape is designed through an incremental procedure of user modification and surface optimization. make use of the rectifying developable and propose an algorithm for modeling smooth developable surfaces from geodesic curves on surfaces @cite_22 . Complex developable surfaces are modeled by composing cone patches with smooth transitions @cite_7 . These design methods cannot be applied directly to compute a developable surface bounded by specified curves. | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_2",
"@cite_16",
"@cite_17"
],
"mid": [
"1994563843",
"1994722502",
"2298775225",
"2042189371",
"2417765468"
],
"abstract": [
"We present a novel and effective method for modeling a developable surface to simulate paper bending in interactive and animation applications. The method exploits the representation of a developable surface as the envelope of rectifying planes of a curve in 3D, which is therefore necessarily a geodesic on the surface. We manipulate the geodesic to provide intuitive shape control for modeling paper bending. Our method ensures a natural continuous isometric deformation from a piece of bent paper to its flat state without any stretching. Test examples show that the new scheme is fast, accurate, and easy to use, thus providing an effective approach to interactive paper bending. We also show how to handle non-convex piecewise smooth developable surfaces.",
"We model developable surfaces by wrapping a planar figure around cones and cylinders. Complicated developables can be constructed by successive mappings using cones and cylinders of different sizes and shapes. We also propose an intuitive control mechanism, which allows a user to select an arbitrary point on the planar figure and move it to a new position. Numerical techniques are then used to find a cone or cylinder that produces the required mapping. Several examples demonstrate the effectiveness of our technique. An effective technique for modeling the bending of paper using cones and cylinders.A methodology for producing complicated developable surfaces from a planar figure.Interactive control of paper bending from a user-specified displacement.",
"We present a new approach to geometric modeling with developable surfaces and the design of curved-creased origami. We represent developables as splines and express the nonlinear conditions relating to developability and curved folds as quadratic equations. This allows us to utilize a constraint solver, which may be described as energy-guided projection onto the constraint manifold, and which is fast enough for interactive modeling. Further, a combined primal-dual surface representation enables us to robustly and quickly solve approximation problems.",
"Motivated by applications in architecture and manufacturing, we discuss the problem of covering a freeform surface by single curved panels. This leads to the new concept of semi-discrete surface representation, which constitutes a link between smooth and discrete surfaces. The basic entity we are working with is the developable strip model. It is the semi-discrete equivalent of a quad mesh with planar faces, or a conjugate parametrization of a smooth surface. We present a B-spline based optimization framework for efficient computing with D-strip models. In particular we study conical and circular models, which semi-discretize the network of principal curvature lines, and which enjoy elegant geometric properties. Together with geodesic models and cylindrical models they offer a rich source of solutions for surface panelization problems.",
"We study the performance of algorithms for freeform surface fitting when different error terms are used as quadratic approximations to the squared orthogonal distances from data points to the fitting surface. We review the TD error term and the SD error term in surface fitting to point clouds, present robust surface fitting algorithms using the TD error term and a new variant of the SD error term. We report experimental results on comparing them with the prevailing PD error term in the setting of fitting B-spline surfaces to point cloud data. We conclude that using the TD error term and the SD error term leads to surface fitting algorithms that converge much faster than using the PD error term."
]
} |
1905.07518 | 2946790301 | Computing a quasi-developable strip surface bounded by design curves finds wide industrial applications. Existing methods compute discrete surfaces composed of developable lines connecting sampling points on input curves which are not adequate for generating smooth quasi-developable surfaces. We propose the first method which is capable of exploring the full solution space of continuous input curves to compute a smooth quasi-developable ruled surface with as large developability as possible. The resulting surface is exactly bounded by the input smooth curves and is guaranteed to have no self-intersections. The main contribution is a variational approach to compute a continuous mapping of parameters of input curves by minimizing a function evaluating surface developability. Moreover, we also present an algorithm to represent a resulting surface as a B-spline surface when input curves are B-spline curves. | A particular problem in industrial design is modeling developable strips bounded by design curves. This problem is frequently encountered in fabrication with inextensible materials @cite_5 . One example is ship-hull design, where a few dominating feature curves are given as input and developable surface patches are computed to interpolate them @cite_20 . Other industrial applications include shoe design and garment design. When the input curves are polylines, a developable mesh surface is created. Tang and Wang @cite_12 simulated the unfolding process of a bendable sheet to generate a bridge triangulation of vertices of input polylines to form a developable surface. In order to find a global optimum in the solution space of all reasonable triangulations, the developable triangulation problem is formulated as a graph problem and Dijkstra's algorithm is utilized to solve it @cite_18 . proposed a local-global method which allows a perturbation of mesh vertices to optimize surface developability @cite_0 . | {
"cite_N": [
"@cite_18",
"@cite_0",
"@cite_5",
"@cite_20",
"@cite_12"
],
"mid": [
"1994365766",
"2023544486",
"2175924301",
"2019247722",
"1987836319"
],
"abstract": [
"We investigate how to define a triangulated ruled surface interpolating two polygonal directrices that will meet a variety of optimization objectives which originate from many CAD CAM and geometric modeling applications. This optimal triangulation problem is formulated as a combinatorial search problem whose search space however has the size tightly factorial to the numbers of points on the two directrices. To tackle this bound, we introduce a novel computational tool called multi-layer directed graph and establish an equivalence between the optimal triangulation and the single-source shortest path problem on the graph. Well known graph search algorithms such as the Dijkstra’s are then employed to solve the single-source shortest path problem, which effectively solves the optimal triangulation problem in O(mn) time, where n and m are the numbers of vertices on the two directrices respectively. Numerous experimental examples are provided to demonstrate the usefulness of the proposed optimal triangulation problem in a variety of engineering applications.",
"Surface development is used in many manufacturing planning operations, e.g. for garments, ships and automobiles. However, most freeform surfaces used in design are not developable, and therefore the developed patterns are not isometric to the designed surface. In some domains, the CAD model is created by skinning operations that interpolate smooth strips between a specified set of skeleton curves. In this paper, we propose a method to approximate a strip with a developable surface between the two space curves bounding it. We allow one of the bounding curves to be perturbed within a controllable tolerance and meet some other special engineering requirements. We formulate the problem as a combination of a discrete combinatorial optimization problem and a constrained nonlinear optimization problem, and propose an efficient iterative approach to solve the problem.",
"We present the first sketch-based modeling method for developable surfaces with pre-designed folds, such as garments or leather products. The main challenge we address for building folded surfaces from sketches is that silhouette strokes on the sketch correspond to discontinuous sets of non-planar curves on the 3D model. We introduce a new zippering algorithm for progressively identifying silhouette edges on the model and tying them to silhouette strokes. Our solution ensures that the strokes are fully covered and optimally sampled by the model. This new method, interleaved with developability optimization steps, is implemented in a multiview sketching system where the user can sketch the contours of internal folds in addition to the usual silhouettes, borders, and seam lines. All strokes are interpreted as hard constraints, while developability is only optimized. The developability error map we provide then enables users to add local seams or darts where needed and progressively improve their design. This makes our method robust, even to coarse input for which no fully developable solution exists.",
"The use of developable surfaces in ship design is of engineering importance because they can be easily manufactured without stretching or tearing, or without the use of heat treatment. In some cases, a ship hull can be entirely designed with the use of developable surfaces. In this paper, a method to create a quasi-developable B-spline surface between two limit curves is presented. The centreline, chines and sheer lines of a vessel are modelled as B-spline curves. Between each pair of these boundary curves or directrix lines, the generator lines or rulings are created and a quasi-developable B-spline surface containing the rulings is defined. A procedure based on multiconic development is used to modify the directrix lines in case the rulings intersect inside the boundary curves, avoiding non-developable portions of the surface. B-spline curves and surfaces are widely used today in practically all the design and naval architecture computer programs. Some examples of ship hulls entirely created with developable surfaces are presented.",
"A common operation in clothing and shoe design is to design a folding pattern over a narrow strip and then superimpose it with a smooth surface; the shape of the folding pattern is controlled by the boundary curve of the strip. Previous research results studying folds focused mostly on cloth modeling or in animations, which are driven more by visual realism, but allow large elastic deformations and usually completely ignore or avoid the surface developability issue. In reality, most materials used in garment and shoe industry are inextensible and uncompressible and hence any feasible folded surface must be developable, since it eventually needs to be flattened to its 2D pattern for manufacturing. Borrowing the classical boundary triangulation concept from descriptive geometry, this paper describes a computer-based method that automatically generates a specialized boundary triangulation approximation of a developable surface that interpolates a given strip. The development is achieved by geometrically simulating the folding process of the sheet as it would occur when rolled from one end of the strip to the other. Ample test examples are presented to validate the feasibility of the proposed method."
]
} |
1905.07542 | 2945305363 | There has been tremendous research progress in estimating the depth of a scene from a monocular camera image. Existing methods for single-image depth prediction are exclusively based on deep neural networks, and their training can be unsupervised using stereo image pairs, supervised using LiDAR point clouds, or semi-supervised using both stereo and LiDAR. In general, semi-supervised training is preferred as it does not suffer from the weaknesses of either supervised training, resulting from the difference in the cameras and the LiDARs field of view, or unsupervised training, resulting from the poor depth accuracy that can be recovered from a stereo pair. In this paper, we present our research in single image depth prediction using semi-supervised training that outperforms the state-of-the-art. We achieve this through a loss function that explicitly exploits left-right consistency in a stereo reconstruction, which has not been adopted in previous semi-supervised training. In addition, we describe the correct use of ground truth depth derived from LiDAR that can significantly reduce prediction error. The performance of our depth prediction model is evaluated on popular datasets, and the importance of each aspect of our semi-supervised training approach is demonstrated through experimental results. Our deep neural network model has been made publicly available. | Supervised methods use ground truth depth, usually from LiDAR in outdoor scenes, for training a network. Eigen et al. @cite_11 were among the first to use such a method to train a convolutional neural network. They first generate a coarse prediction and then use another network to refine it into a more accurate depth map. 
Following @cite_11 , several techniques have been proposed to improve the accuracy of convolutional neural networks, such as CRFs @cite_4 , the inverse Huber loss as a more robust loss function @cite_17 , joint optimization of surface normal and depth in the loss function @cite_21 @cite_18 @cite_24 , fusion of multiple depth maps using the Fourier transform @cite_10 , and formulation of depth estimation as a classification problem fu2018deep . | {
"cite_N": [
"@cite_18",
"@cite_11",
"@cite_4",
"@cite_21",
"@cite_24",
"@cite_10",
"@cite_17"
],
"mid": [
"2963911235",
"2171740948",
"2124907686",
"1899309388",
"2798927139",
"2798727000",
"2963591054"
],
"abstract": [
"This paper considers the problem of single image depth estimation. The employment of convolutional neural networks (CNNs) has recently brought about significant advancements in the research of this problem. However, most existing methods suffer from loss of spatial resolution in the estimated depth maps; a typical symptom is distorted and blurry reconstruction of object boundaries. In this paper, toward more accurate estimation with a focus on depth maps with higher spatial resolution, we propose two improvements to existing approaches. One is about the strategy of fusing features extracted at different scales, for which we propose an improved network architecture consisting of four modules: an encoder, decoder, multi-scale feature fusion module, and refinement module. The other is about loss functions for measuring inference errors used in training. We show that three loss terms, which measure errors in depth, gradients and surface normals, respectively, contribute to improvement of accuracy in an complementary fashion. Experimental results show that these two improvements enable to attain higher accuracy than the current state-of-the-arts, which is given by finer resolution reconstruction, for example, with small objects and object boundaries.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.",
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture? We propose to build upon the decades of hard work in 3D scene understanding to design a new CNN architecture for the task of surface normal estimation. We show that incorporating several constraints (man-made, Manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.",
"In this paper, we propose Geometric Neural Network (GeoNet) to jointly predict depth and surface normal maps from a single image. Building on top of two-stream CNNs, our GeoNet incorporates geometric relation between depth and surface normal via the new depth-to-normal and normal-to-depth networks. Depth-to-normal network exploits the least square solution of surface normal from depth and improves its quality with a residual module. Normal-to-depth network, contrarily, refines the depth map based on the constraints from the surface normal through a kernel regression module, which has no parameter to learn. These two networks enforce the underlying model to efficiently predict depth and surface normal for high consistency and corresponding accuracy. Our experiments on NYU v2 dataset verify that our GeoNet is able to predict geometrically consistent depth and normal maps. It achieves top performance on surface normal estimation and is on par with state-of-the-art depth estimation methods.",
"We propose a deep learning algorithm for single-image depth estimation based on the Fourier frequency domain analysis. First, we develop a convolutional neural network structure and propose a new loss function, called depth-balanced Euclidean loss, to train the network reliably for a wide range of depths. Then, we generate multiple depth map candidates by cropping input images with various cropping ratios. In general, a cropped image with a small ratio yields depth details more faithfully, while that with a large ratio provides the overall depth distribution more reliably. To take advantage of these complementary properties, we combine the multiple candidates in the frequency domain. Experimental results demonstrate that proposed algorithm provides the state-of-art performance. Furthermore, through the frequency domain analysis, we validate the efficacy of the proposed algorithm in most frequency bands.",
"This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available."
]
} |
1905.07542 | 2945305363 | There has been tremendous research progress in estimating the depth of a scene from a monocular camera image. Existing methods for single-image depth prediction are exclusively based on deep neural networks, and their training can be unsupervised using stereo image pairs, supervised using LiDAR point clouds, or semi-supervised using both stereo and LiDAR. In general, semi-supervised training is preferred as it does not suffer from the weaknesses of either supervised training, resulting from the difference in the cameras and the LiDARs field of view, or unsupervised training, resulting from the poor depth accuracy that can be recovered from a stereo pair. In this paper, we present our research in single image depth prediction using semi-supervised training that outperforms the state-of-the-art. We achieve this through a loss function that explicitly exploits left-right consistency in a stereo reconstruction, which has not been adopted in previous semi-supervised training. In addition, we describe the correct use of ground truth depth derived from LiDAR that can significantly reduce prediction error. The performance of our depth prediction model is evaluated on popular datasets, and the importance of each aspect of our semi-supervised training approach is demonstrated through experimental results. Our deep neural network model has been made publicly available. | To avoid laborious ground truth depth construction, unsupervised methods based on stereo image pairs have been proposed @cite_25 . @cite_3 demonstrated an unsupervised method in which the network is trained to minimize the stereo reconstruction loss; i.e., the loss is defined such that the reconstructed right image (i.e., obtained by warping the left image using the predicted disparity) matches the right image. Later on, @cite_22 extended the idea by enforcing a left-right consistency that makes the left-view disparity map consistent with the right-view disparity map. 
The unsupervised training of our model is based on @cite_22 . Given a left view as input, the model in @cite_22 outputs two disparity maps, one for the left view and one for the right view, whereas we output only one map per input image, in the form of inverse depth rather than disparity. As a result, we treat both left and right images equivalently, which allows us to eliminate the overhead of the post-processing step in @cite_22 . By making these changes, our unsupervised model outperforms @cite_22 as will be discussed in Section . | {
"cite_N": [
"@cite_22",
"@cite_25",
"@cite_3"
],
"mid": [
"2520707372",
"2336968928",
"2300779272"
],
"abstract": [
"Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.",
"As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations.",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground-truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation."
]
} |
1905.07366 | 2945353373 | As global greenhouse gas emissions continue to rise, the use of stratospheric aerosol injection (SAI), a form of solar geoengineering, is increasingly considered in order to artificially mitigate climate change effects. However, initial research in simulation suggests that naive SAI can have catastrophic regional consequences, which may induce serious geostrategic conflicts. Current geo-engineering research treats SAI control in low-dimensional approximation only. We suggest treating SAI as a high-dimensional control problem, with policies trained according to a context-sensitive reward function within the Deep Reinforcement Learning (DRL) paradigm. In order to facilitate training in simulation, we suggest to emulate HadCM3, a widely used General Circulation Model, using deep learning techniques. We believe this is the first application of DRL to the climate sciences. | General Circulation Models (GCMs), which simulate the earth's climate on a global scale, are inherently computationally intensive. Simple statistical methods are routinely used in order to estimate climate responses to slow forcings @cite_21 . Recently, the advent of deep learning has led to a number of successful emulation attempts of full GCMs used for weather prediction @cite_24 , as well as for sub-grid scale processes @cite_14 @cite_0 , including precipitation @cite_26 . This suggests that the emulation of the response of regional variables, such as precipitation and surface temperature, to aerosol injection forcings may now be within reach. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_21",
"@cite_24",
"@cite_0"
],
"mid": [
"2808400960",
"2811423591",
"2025540709",
"2809789958",
"2974527409"
],
"abstract": [
"",
"The parameterization of moist convection contributes to uncertainty in climate modeling and numerical weather prediction. Machine learning (ML) can be used to learn new parameterizations directly from high-resolution model output, but it remains poorly understood how such parameterizations behave when fully coupled in a general circulation model (GCM) and whether they are useful for simulations of climate change or extreme events. Here, we focus on these issues using idealized tests in which an ML-based parameterization is trained on output from a conventional parameterization and its performance is assessed in simulations with a GCM. We use an ensemble of decision trees (random forest) as the ML algorithm, and this has the advantage that it automatically ensures conservation of energy and non-negativity of surface precipitation. The GCM with the ML convective parameterization runs stably and accurately captures important climate statistics including precipitation extremes without the need for special training on extremes. Climate change between a control climate and a warm climate is not captured if the ML parameterization is only trained on the control climate, but it is captured if the training includes samples from both climates. Remarkably, climate change is also captured when training only on the warm climate, and this is because the extratropics of the warm climate provides training samples for the tropics of the control climate. In addition to being potentially useful for the simulation of climate, we show that ML parameterizations can be interrogated to provide diagnostics of the interaction between convection and the large-scale environment.",
"AbstractThe authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. It may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.",
"Can models that are based on deep learning and trained on atmospheric data compete with weather and climate models that are based on physical principles and the basic equations of motion? This question has been asked often recently due to the boom in deep-learning techniques. The question is valid given the huge amount of data that are available, the computational efficiency of deep-learning techniques and the limitations of today's weather and climate models in particular with respect to resolution and complexity. In this paper, the question will be discussed in the context of global weather forecasts. A toy model for global weather predictions will be presented and used to identify challenges and fundamental design choices for a forecast system based on neural networks.",
"The representation of nonlinear subgrid processes, especially clouds, has been a major source of uncertainty in climate models for decades. Cloud-resolving models better represent many of these processes and can now be run globally but only for short-term simulations of at most a few years because of computational limitations. Here we demonstrate that deep learning can be used to capture many advantages of cloud-resolving modeling at a fraction of the computational cost. We train a deep neural network to represent all atmospheric subgrid processes in a climate model by learning from a multiscale model in which convection is treated explicitly. The trained neural network then replaces the traditional subgrid parameterizations in a global general circulation model in which it freely interacts with the resolved dynamics and the surface-flux scheme. The prognostic multiyear simulations are stable and closely reproduce not only the mean climate of the cloud-resolving simulation but also key aspects of variability, including precipitation extremes and the equatorial wave spectrum. Furthermore, the neural network approximately conserves energy despite not being explicitly instructed to. Finally, we show that the neural network parameterization generalizes to new surface forcing patterns but struggles to cope with temperatures far outside its training manifold. Our results show the feasibility of using deep learning for climate model parameterization. In a broader context, we anticipate that data-driven Earth system model development could play a key role in reducing climate prediction uncertainty in the coming decade."
]
} |
1905.07366 | 2945353373 | As global greenhouse gas emissions continue to rise, the use of stratospheric aerosol injection (SAI), a form of solar geoengineering, is increasingly considered in order to artificially mitigate climate change effects. However, initial research in simulation suggests that naive SAI can have catastrophic regional consequences, which may induce serious geostrategic conflicts. Current geo-engineering research treats SAI control in low-dimensional approximation only. We suggest treating SAI as a high-dimensional control problem, with policies trained according to a context-sensitive reward function within the Deep Reinforcement Learning (DRL) paradigm. In order to facilitate training in simulation, we suggest to emulate HadCM3, a widely used General Circulation Model, using deep learning techniques. We believe this is the first application of DRL to the climate sciences. | Investigation of optimal SAI control within the climate community is currently constrained to low-dimensional injection pattern parametrisations @cite_10 or manual grid search over edge cases of interest @cite_2 . Even in simple settings, it has been shown that regional climate response is sensitive to the choice of SAI policy @cite_6 . In addition, super-regional impacts on El Nino Southern Oscillation have been demonstrated @cite_19 . This suggests that climate response to SAI is sensitive enough to warrant a high-dimensional treatment. | {
"cite_N": [
"@cite_19",
"@cite_10",
"@cite_6",
"@cite_2"
],
"mid": [
"2106187977",
"2161857173",
"2096693387",
"1857786860"
],
"abstract": [
"To examine the impact of proposed stratospheric geoengineering schemes on the amplitude and frequency of El Nino Southern Oscillation (ENSO) variations we examine climate model simulations from the Geoengineering Model Intercomparison Project (GeoMIP) G1-G4 experiments. Here we compare tropical Pacific behavior under anthropogenic global warming (AGW) using several scenarios: an instantaneous quadrupling of the atmosphere's CO2 concentration, a 1% annual increase in CO2 concentration, and the representative concentration pathway resulting in 4.5 W m−2 radiative forcing at the end of the 21st century, the Representative Concentration Pathway 4.5 scenario, with that under G1-G4 and under historical model simulations. Climate models under AGW project relatively uniform warming across the tropical Pacific over the next several decades. We find no statistically significant change in ENSO frequency or amplitude under stratospheric geoengineering as compared with those that would occur under ongoing AGW, although the relative brevity of the G1-G4 simulations may have limited detectability of such changes. We also find that the amplitude and frequency of ENSO events do not vary significantly under either AGW scenarios or G1-G4 from the variability found within historical simulations or observations going back to the mid-19th century. Finally, while warming of the Nino3.4 region in the tropical Pacific is fully offset in G1 and G2 during the 40-year simulations, the region continues to warm significantly in G3 and G4, which both start from a present-day climate.",
"There is increasing evidence that Earth's climate is currently warming, primarily due to emissions of greenhouse gases from human activities, and Earth has been projected to continue warming throughout this century. Scientists have begun to investigate the potential for geoengineering options for reducing surface temperatures and whether such options could possibly contribute to environmental risk reduction. One proposed method involves deliberately increasing aerosol loading in the stratosphere to scatter additional sunlight to space. Previous modeling studies have attempted to predict the climate consequences of hypothetical aerosol additions to the stratosphere. These studies have shown that this method could potentially reduce surface temperatures, but could not recreate a low-CO2 climate in a high-CO2 world. In this study, we attempt to determine the latitudinal distribution of stratospheric aerosols that would most closely achieve a low-CO2 climate despite high CO2 levels. Using the NCAR CAM3.1 general circulation model, we find that having a stratospheric aerosol loading in polar regions higher than that in tropical regions leads to a temperature distribution that is more similar to the low-CO2 climate than that yielded by a globally uniform loading. However, such polar weighting of stratospheric sulfate tends to degrade the degree to which the hydrological cycle is restored, and thus does not markedly contribute to improved recovery of a low-CO2 climate. In the model, the optimal latitudinally varying aerosol distributions diminished the rms zonal mean land temperature change from a doubling of CO2 by 94% and the rms zonal mean land precipitation minus evaporation change by 74%. It is important to note that this idealized study represents a first attempt at optimizing the engineering of climate using a general circulation model; uncertainties are high and not all processes that are important in reality are modeled.",
"Aerosols could be injected into the upper atmosphere to engineer the climate by scattering incident sunlight so as to produce a cooling tendency that may mitigate the risks posed by the accumulation of greenhouse gases. Analysis of climate engineering has focused on sulfate aerosols. Here I examine the possibility that engineered nanoparticles could exploit photophoretic forces, enabling more control over particle distribution and lifetime than is possible with sulfates, perhaps allowing climate engineering to be accomplished with fewer side effects. The use of electrostatic or magnetic materials enables a class of photophoretic forces not found in nature. Photophoretic levitation could loft particles above the stratosphere, reducing their capacity to interfere with ozone chemistry; and, by increasing particle lifetimes, it would reduce the need for continual replenishment of the aerosol. Moreover, particles might be engineered to drift poleward enabling albedo modification to be tailored to counter polar warming while minimizing the impact on equatorial climates.",
"In an assessment of how Arctic sea ice cover could be remediated in a warming world, we simulated the injection of SO2 into the Arctic stratosphere making annual adjustments to injection rates. We treated one climate model realization as a surrogate “real world” with imperfect “observations” and no rerunning or reference to control simulations. SO2 injection rates were proposed using a novel model predictive control regime which incorporated a second simpler climate model to forecast “optimal” decision pathways. Commencing the simulation in 2018, Arctic sea ice cover was remediated by 2043 and maintained until solar geoengineering was terminated. We found quantifying climate side effects problematic because internal climate variability hampered detection of regional climate changes beyond the Arctic. Nevertheless, through decision maker learning and the accumulation of at least 10 years time series data exploited through an annual review cycle, uncertainties in observations and forcings were successfully managed."
]
} |
1905.07366 | 2945353373 | As global greenhouse gas emissions continue to rise, the use of stratospheric aerosol injection (SAI), a form of solar geoengineering, is increasingly considered in order to artificially mitigate climate change effects. However, initial research in simulation suggests that naive SAI can have catastrophic regional consequences, which may induce serious geostrategic conflicts. Current geo-engineering research treats SAI control in low-dimensional approximation only. We suggest treating SAI as a high-dimensional control problem, with policies trained according to a context-sensitive reward function within the Deep Reinforcement Learning (DRL) paradigm. In order to facilitate training in simulation, we suggest to emulate HadCM3, a widely used General Circulation Model, using deep learning techniques. We believe this is the first application of DRL to the climate sciences. | Altering the injection altitude, latitude, season, or particle type - possibly even with the use of specially engineered photophoretic nanoparticles - may provide the ability to "tailor" fine-grained SAI. But, presently, stratospheric aerosol models have substantially different responses to identical injection strategies @cite_12 , suggesting directly simulating the implications of these strategies - and the range of aerosol distributions that can be attained - requires further model development. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2141753637"
],
"abstract": [
"Geoengineering with stratospheric sulfate aerosols has been proposed as a means of temporarily cooling the planet, alleviating some of the side effects of anthropogenic CO2 emissions. However, one of the known side effects of stratospheric injections of sulfate aerosols under present-day conditions is a general decrease in ozone concentrations. Here we present the results from two general circulation models and two coupled chemistry-climate models within the experiments G3 and G4 of the Geoengineering Model Intercomparison Project. On average, the models simulate in G4 an increase in sulfate aerosol surface area density similar to conditions a year after the Mount Pinatubo eruption and a decrease in globally averaged ozone by 1.1−2.1 DU (Dobson unit, 1 DU = 0.001 atm cm) during the central decade of the experiment (2040–2049). Enhanced heterogeneous chemistry on sulfate aerosols leads to an ozone increase in low and middle latitudes, whereas enhanced heterogeneous reactions in polar regions and increased tropical upwelling lead to a reduction of stratospheric ozone. The increase in UV-B radiation at the surface due to ozone depletion is offset by the screening due to the aerosols in the tropics and midlatitudes, while in polar regions the UV-B radiation is increased by 5 on average, with 12 peak increases during springtime. The contribution of ozone changes to the tropopause radiative forcing during 2040–2049 is found to be less than −0.1 W m−2. After 2050, because of decreasing ClOx concentrations, the suppression of the NOx cycle becomes more important than destruction of ozone by ClOx, causing an increase in total stratospheric ozone."
]
} |
1905.07318 | 2946159594 | We describe a new approach for mitigating risk in the Reinforcement Learning paradigm. Instead of reasoning about expected utility, we use second-order stochastic dominance (SSD) to directly compare the inherent risk of random returns induced by different actions. We frame the RL optimization within the space of probability measures to accommodate the SSD relation, treating Bellman's equation as a potential energy functional. This brings us to Wasserstein gradient flows, for which the optimality and convergence are well understood. We propose a discrete-measure approximation algorithm called the Dominant Particle Agent (DPA), and we demonstrate how safety and performance are better balanced with DPA than with existing baselines. | DP-WGF @cite_13 models stochastic policy inference as free-energy minimization. They too apply the JKO scheme to derive learning algorithms. And in the same way our formalism leads to a convergent algorithm for return distributions, so too does their approach for stochastic policies. DP-WGF couches their training procedure within the soft- @math learning paradigm @cite_18 @cite_42 . These algorithms train a deep parametric model to sample from a target Gibbs density using Stein Variational Gradient Descent @cite_22 . In contrast, our method fixes a set of particles and adopts the more traditional Sinkhorn algorithm from optimal transport to compute the Wasserstein distance. We complete the JKO step using gradient methods and auto-differentiation. | {
"cite_N": [
"@cite_18",
"@cite_42",
"@cite_13",
"@cite_22"
],
"mid": [
"2594103415",
"2962902376",
"2803213319",
"2963956018"
],
"abstract": [
"We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting into a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed performing approximate inference on the corresponding energy-based model.",
"Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.",
"",
"We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest."
]
} |
1905.07504 | 2946545670 | Recent advances, such as GPT and BERT, have shown success in incorporating a pre-trained transformer language model and fine-tuning operation to improve downstream NLP systems. However, this framework still has some fundamental problems in effectively incorporating supervised knowledge from other related tasks. In this study, we investigate a transferable BERT (TransBERT) training framework, which can transfer not only general language knowledge from large-scale unlabeled data but also specific kinds of knowledge from various semantically related supervised tasks, for a target task. Particularly, we propose utilizing three kinds of transfer tasks, including natural language inference, sentiment classification, and next action prediction, to further train BERT based on a pre-trained model. This enables the model to get a better initialization for the target task. We take story ending prediction as the target task to conduct experiments. The final result, an accuracy of 91.8%, dramatically outperforms previous state-of-the-art baseline methods. Several comparative experiments give some helpful suggestions on how to select transfer tasks. Error analysis shows what are the strength and weakness of BERT-based models for story ending prediction. | STILTs @cite_16 fine-tuned a GPT model on some intermediate tasks to get better performance on the GLUE @cite_7 benchmark. However, they gave little analysis of this transfer mechanism. Taking SCT as an example, we give some helpful suggestions and our insights on how to select transfer tasks. | {
"cite_N": [
"@cite_16",
"@cite_7"
],
"mid": [
"2898700502",
"2799054028"
],
"abstract": [
"Pretraining sentence encoders with language modeling and related unsupervised tasks has recently been shown to be very effective for language understanding tasks. By supplementing language model-style pretraining with further training on data-rich supervised tasks, such as natural language inference, we obtain additional performance improvements on the GLUE benchmark. Applying supplementary training on BERT (Devlin et al., 2018), we attain a GLUE score of 81.8---the state of the art (as of 02/24/2019) and a 1.4 point improvement over BERT. We also observe reduced variance across random restarts in this setting. Our approach yields similar improvements when applied to ELMo (Peters et al., 2018a) and Radford et al. (2018)'s model. In addition, the benefits of supplementary training are particularly pronounced in data-constrained regimes, as we show in experiments with artificially limited training data.",
"For natural language understanding (NLU) technology to be maximally useful, both practically and as a scientific object of study, it must be general: it must be able to process language in a way that is not exclusively tailored to any one specific task or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation benchmark (GLUE), a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks. GLUE is model-agnostic, but it incentivizes sharing knowledge across tasks because certain tasks have very limited training data. We further provide a hand-crafted diagnostic test suite that enables detailed linguistic analysis of NLU models. We evaluate baselines based on current methods for multi-task and transfer learning and find that they do not immediately give substantial improvements over the aggregate performance of training a separate model per task, indicating room for improvement in developing general and robust NLU systems."
]
} |
1905.07559 | 2946166375 | A tree cover of a metric space @math is a collection of trees, so that every pair @math has a low distortion path in one of the trees. If it has the stronger property that every point @math has a single tree with low distortion paths to all other points, we call this a Ramsey tree cover. Tree covers and Ramsey tree covers have been studied by BLMN03,GKR04,CGMZ05,GHR06,MN07 , and have found several important algorithmic applications, e.g. routing and distance oracles. The union of trees in a tree cover also serves as a special type of spanner, that can be decomposed into a few trees with low distortion paths contained in a single tree; Such spanners for Euclidean pointsets were presented by ADMSS95 . In this paper we devise efficient algorithms to construct tree covers and Ramsey tree covers for general, planar and doubling metrics. We pay particular attention to the desirable case of distortion close to 1, and study what can be achieved when the number of trees is small. In particular, our work shows a large separation between what can be achieved by tree covers vs. Ramsey tree covers. | We note that @cite_45 studied a related question of bounding the number of trees sufficient for probabilistic embedding. The result they obtain implies an exponentially weaker cover size than those that follow from @cite_5 and from our construction. | {
"cite_N": [
"@cite_5",
"@cite_45"
],
"mid": [
"2114493937",
"1952648253"
],
"abstract": [
"This paper provides a novel technique for the analysis of randomized algorithms for optimization problems on metric spaces, by relating the randomized performance ratio for any, metric space to the randomized performance ratio for a set of \"simple\" metric spaces. We define a notion of a set of metric spaces that probabilistically-approximates another metric space. We prove that any metric space can be probabilistically-approximated by hierarchically well-separated trees (HST) with a polylogarithmic distortion. These metric spaces are \"simple\" as being: (1) tree metrics; (2) natural for applying a divide-and-conquer algorithmic approach. The technique presented is of particular interest in the context of on-line computation. A large number of on-line algorithmic problems, including metrical task systems, server problems, distributed paging, and dynamic storage rearrangement are defined in terms of some metric space. Typically for these problems, there are linear lower bounds on the competitive ratio of deterministic algorithms. Although randomization against an oblivious adversary has the potential of overcoming these high ratios, very little progress has been made in the analysis. We demonstrate the use of our technique by obtaining substantially improved results for two different on-line problems.",
"Y. Bartal (1996, 1998) gave a randomized polynomial time algorithm that given any n point metric G, constructs a tree T such that the expected stretch (distortion) of any edge is at most O(log n log log n). His result has found several applications and in particular has resulted in approximation algorithms for many graph optimization problems. However approximation algorithms based on his result are inherently randomized. In this paper we derandomize the use of Bartal's algorithm in the design of approximation algorithms. We give an efficient polynomial time algorithm that given a finite n point metric G, constructs O(n log n) trees and a probability distribution μ on them such that the expected stretch of any edge of G in a tree chosen according to μ is at most O(log n log log n). Our result establishes that finite metrics can be probabilistically approximated by a small number of tree metrics. We obtain the first deterministic approximation algorithms for buy-at-bulk network design and vehicle routing; in addition we subsume results from our earlier work on derandomization. Our main result is obtained by a novel view of probabilistic approximation of metric spaces as a deterministic optimization problem via linear programming."
]
} |
1905.07559 | 2946166375 | A tree cover of a metric space @math is a collection of trees, so that every pair @math has a low distortion path in one of the trees. If it has the stronger property that every point @math has a single tree with low distortion paths to all other points, we call this a Ramsey tree cover. Tree covers and Ramsey tree covers have been studied by BLMN03,GKR04,CGMZ05,GHR06,MN07 , and have found several important algorithmic applications, e.g. routing and distance oracles. The union of trees in a tree cover also serves as a special type of spanner, that can be decomposed into a few trees with low distortion paths contained in a single tree; Such spanners for Euclidean pointsets were presented by ADMSS95 . In this paper we devise efficient algorithms to construct tree covers and Ramsey tree covers for general, planar and doubling metrics. We pay particular attention to the desirable case of distortion close to 1, and study what can be achieved when the number of trees is small. In particular, our work shows a large separation between what can be achieved by tree covers vs. Ramsey tree covers. | In the context of spanning trees, the problem of computing a spanning tree with low average stretch was first studied by @cite_11 . Following @cite_46 , @cite_2 @cite_42 obtained a nearly tight @math bound. | {
"cite_N": [
"@cite_46",
"@cite_42",
"@cite_2",
"@cite_11"
],
"mid": [
"2950128268",
"2072142932",
"2570705339",
"1981859328"
],
"abstract": [
"We prove that every weighted graph contains a spanning tree subgraph of average stretch O((log n log log n)^2). Moreover, we show how to construct such a tree in time O(m log^2 n).",
"We prove that any graph G=(V,E) with n points and m edges has a spanning tree T such that ∑_{(u,v)∈E(G)} d_T(u,v) = O(m log n log log n). Moreover such a tree can be found in time O(m log n log log n). Our result is obtained using a new petal-decomposition approach which guarantees that the radius of each cluster in the tree is at most 4 times the radius of the induced subgraph of the cluster in the original graph.",
"This paper addresses the basic question of how well a tree can approximate distances of a metric space or a graph. Given a graph, the problem of constructing a spanning tree in a graph which strongly preserves distances in the graph is a fundamental problem in network design. We present scaling distortion embeddings where the distortion scales as a function of @math , with the guarantee that for each @math simultaneously, the distortion of a fraction @math of all pairs is bounded accordingly. Quantitatively, we prove that any finite metric space embeds into an ultrametric with scaling distortion @math . For the graph setting, we prove that any weighted graph contains a spanning tree with scaling distortion @math . These bounds are tight even for embedding into arbitrary trees. These results imply that the average distortion of the embedding is constant and that the @math distortion is @math . For probabilistic embedding into spanning trees we prove a scaling distortion of @math , which implies constant @math -distortion for every fixed @math .",
"This paper investigates a zero-sum game played on a weighted connected graph @math between two players, the tree player and the edge player. At each play, the tree player chooses a spanning tree @math and the edge player chooses an edge @math . The payoff to the edge player is @math , defined as follows: If @math lies in the tree @math then @math ; if @math does not lie in the tree then @math , where @math is the weight of edge @math and @math is the weight of the unique cycle formed when edge @math is added to the tree @math . The main result is that the value of the game on any @math -vertex graph is bounded above by @math . It is conjectured that the value of the game is @math . The game arises in connection with the @math -server problem on a road network; i.e., a metric space that can be represented as a multigraph @math in which each edge @math represents a road of length @math . It is shown that, if the value of the game on @math is @math , then there is a randomized strategy that achieves a competitive ratio of @math against any oblivious adversary. Thus, on any @math -vertex road network, there is a randomized algorithm for the @math -server problem that is @math competitive against oblivious adversaries. At the heart of the analysis of the game is an algorithm that provides an approximate solution for the simple network design problem. Specifically, for any @math -vertex weighted, connected multigraph, the algorithm constructs a spanning tree @math such that the average, over all edges @math , of @math is less than or equal to @math . This result has potential application to the design of communication networks. It also improves substantially known estimates concerning the existence of a sparse basis for the cycle space of a graph."
]
} |
1905.07506 | 2964354703 | Deep learning has achieved remarkable results in 3D shape analysis by learning global shape features from the pixel-level over multiple views. Previous methods, however, compute low-level features for entire views without considering part-level information. In contrast, we propose a deep neural network, called Parts4Feature, to learn 3D global features from part-level information in multiple views. We introduce a novel definition of generally semantic parts, which Parts4Feature learns to detect in multiple views from different 3D shape segmentation benchmarks. A key idea of our architecture is that it transfers the ability to detect semantically meaningful parts in multiple views to learn 3D global features. Parts4Feature achieves this by combining a local part detection branch and a global feature learning branch with a shared region proposal module. The global feature learning branch aggregates the detected parts in terms of learned part patterns with a novel multi-attention mechanism, while the region proposal module enables locally and globally discriminative information to be promoted by each other. We demonstrate that Parts4Feature outperforms the state-of-the-art under three large-scale 3D shape benchmarks. | directly learn 3D features from 3D meshes, different novel concepts, such as circle convolution @cite_28 and mesh convolution @cite_21 , were proposed to perform convolution in deep learning models. These methods aim to learn global or local features from the geometry and spatial information on meshes to understand 3D shapes. | {
"cite_N": [
"@cite_28",
"@cite_21"
],
"mid": [
"2517836489",
"2461132349"
],
"abstract": [
"Extracting local features from 3D shapes is an important and challenging task that usually requires carefully designed 3D shape descriptors. However, these descriptors are hand-crafted and require intensive human intervention with prior knowledge. To tackle this issue, we propose a novel deep learning model, namely circle convolutional restricted Boltzmann machine (CCRBM), for unsupervised 3D local feature learning. CCRBM is specially designed to learn from raw 3D representations. It effectively overcomes obstacles such as irregular vertex topology, orientation ambiguity on the 3D surface, and rigid or slightly non-rigid transformation invariance in the hierarchical learning of 3D data that cannot be resolved by the existing deep learning models. Specifically, by introducing the novel circle convolution, CCRBM holds a novel ring-like multi-layer structure to learn 3D local features in a structure preserving manner. Circle convolution convolves across 3D local regions via rotating a novel circular sector convolution window in a consistent circular direction. In the process of circle convolution, extra points are sampled in each 3D local region and projected onto the tangent plane of the center of the region. In this way, the projection distances in each sector window are employed to constitute a novel local raw 3D representation called projection distance distribution (PDD). In addition, to eliminate the initial location ambiguity of a sector window, the Fourier transform modulus is used to transform the PDD into the Fourier domain, which is then conveyed to CCRBM. Experiments using the learned local features are conducted on three aspects: global shape retrieval, partial shape retrieval, and shape correspondence. The experimental results show that the learned local features outperform other state-of-the-art 3D shape descriptors.",
"Discriminative features of 3-D meshes are significant to many 3-D shape analysis tasks. However, handcrafted descriptors and traditional unsupervised 3-D feature learning methods suffer from several significant weaknesses: 1) the extensive human intervention is involved; 2) the local and global structure information of 3-D meshes cannot be preserved, which is in fact an important source of discriminability; 3) the irregular vertex topology and arbitrary resolution of 3-D meshes do not allow the direct application of the popular deep learning models; 4) the orientation is ambiguous on the mesh surface; and 5) the effect of rigid and nonrigid transformations on 3-D meshes cannot be eliminated. As a remedy, we propose a deep learning model with a novel irregular model structure, called mesh convolutional restricted Boltzmann machines (MCRBMs). MCRBM aims to simultaneously learn structure-preserving local and global features from a novel raw representation, local function energy distribution. In addition, multiple MCRBMs can be stacked into a deeper model, called mesh convolutional deep belief networks (MCDBNs). MCDBN employs a novel local structure preserving convolution (LSPC) strategy to convolve the geometry and the local structure learned by the lower MCRBM to the upper MCRBM. LSPC facilitates resolving the challenging issue of the orientation ambiguity on the mesh surface in MCDBN. Experiments using the proposed MCRBM and MCDBN were conducted on three common aspects: global shape retrieval, partial shape retrieval, and shape correspondence. Results show that the features learned by the proposed methods outperform the other state-of-the-art 3-D shape features."
]
} |
1905.07506 | 2964354703 | Deep learning has achieved remarkable results in 3D shape analysis by learning global shape features from the pixel-level over multiple views. Previous methods, however, compute low-level features for entire views without considering part-level information. In contrast, we propose a deep neural network, called Parts4Feature, to learn 3D global features from part-level information in multiple views. We introduce a novel definition of generally semantic parts, which Parts4Feature learns to detect in multiple views from different 3D shape segmentation benchmarks. A key idea of our architecture is that it transfers the ability to detect semantically meaningful parts in multiple views to learn 3D global features. Parts4Feature achieves this by combining a local part detection branch and a global feature learning branch with a shared region proposal module. The global feature learning branch aggregates the detected parts in terms of learned part patterns with a novel multi-attention mechanism, while the region proposal module enables locally and globally discriminative information to be promoted by each other. We demonstrate that Parts4Feature outperforms the state-of-the-art under three large-scale 3D shape benchmarks. | a series of pioneering work, PointNet++ @cite_1 inspired various supervised methods to understand point clouds. Through self-reconstruction, FoldingNet @cite_4 and LatentGAN @cite_11 learned global features with different unsupervised strategies. | {
"cite_N": [
"@cite_1",
"@cite_4",
"@cite_11"
],
"mid": [
"2963121255",
"2796426482",
"2784996692"
],
"abstract": [
"Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"Recent deep networks that directly handle points in a point set, e.g., PointNet, have been state-of-the-art for supervised learning tasks on point clouds such as classification and segmentation. In this work, a novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds. On the encoder side, a graph-based enhancement is enforced to promote local structures on top of PointNet. Then, a novel folding-based decoder deforms a canonical 2D grid onto the underlying 3D object surface of a point cloud, achieving low reconstruction errors even for objects with delicate structures. The proposed decoder only uses about 7 parameters of a decoder with fully-connected neural networks, yet leads to a more discriminative representation that achieves higher linear SVM classification accuracy than the benchmark. In addition, the proposed decoder structure is shown, in theory, to be a generic architecture that is able to reconstruct an arbitrary point cloud from a 2D grid. Our code is available at http://www.merl.com/research/license#FoldingNet",
"Three-dimensional geometric data offer an excellent domain for studying representation learning and generative modeling. In this paper, we look at geometric data represented as point clouds. We introduce a deep autoencoder (AE) network with excellent reconstruction quality and generalization ability. The learned representations outperform the state of the art in 3D recognition tasks and enable basic shape editing applications via simple algebraic manipulations, such as semantic part editing, shape analogies and shape interpolation. We also perform a thorough study of different generative models including GANs operating on the raw point clouds, significantly improved GANs trained in the fixed latent space of our AEs, and Gaussian mixture models (GMM). Interestingly, GMMs trained in the latent space of our AEs produce samples of the best fidelity and diversity. To perform our quantitative evaluation of generative models, we propose simple measures of fidelity and diversity based on optimally matching between sets of point clouds."
]
} |
1905.07458 | 2946515115 | Relation extraction (RE) is an indispensable information extraction task in several disciplines. RE models typically assume that named entity recognition (NER) is already performed in a previous step by another independent model. Several recent efforts, under the theme of end-to-end RE, seek to exploit inter-task correlations by modeling both NER and RE tasks jointly. Earlier work in this area commonly reduces the task to a table-filling problem wherein an additional expensive decoding step involving beam search is applied to obtain globally consistent cell labels. In efforts that do not employ table-filling, global optimization in the form of CRFs with Viterbi decoding for the NER component is still necessary for competitive performance. We introduce a novel neural architecture utilizing the table structure, based on repeated applications of 2D convolutions for pooling local dependency and metric-based features, that improves on the state-of-the-art without the need for global optimization. We validate our model on the ADE and CoNLL04 datasets for end-to-end RE and demonstrate @math gain (in F-score) over prior best results with training and testing times that are seven to ten times faster --- the latter highly advantageous for time-sensitive end user applications. | Other recent approaches not utilizing a table structure involve modeling the entity and relation extraction task jointly with shared parameters @cite_9 @cite_24 @cite_5 @cite_0 @cite_3 @cite_26 @cite_10 . katiyar2017going and bekoulis2018join specifically use attention mechanisms for the RE component without the need for dependency parse features. zheng2017joint_tag operate by reducing the problem to a sequence-labeling task that relies on a novel tagging scheme. zeng2018extracting use an encoder-decoder network such that the input sentence is encoded as a fixed-length vector and decoded to relation triples directly. 
Most recently, bekoulis2018adversarial found that adversarial training (AT) is an effective regularization approach for improving performance. | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_0",
"@cite_5",
"@cite_10"
],
"mid": [
"2799125718",
"2964167098",
"2741956709",
"2579356637",
"2600659824",
"2587809655",
"2798734500"
],
"abstract": [
"State-of-the-art models for joint entity recognition and relation extraction strongly rely on external natural language processing (NLP) tools such as POS (part-of-speech) taggers and dependency parsers. Thus, the performance of such joint models depends on the quality of the features obtained from these NLP tools. However, these features are not always accurate for various languages and contexts. In this paper, we propose a joint neural model which performs entity recognition and relation extraction simultaneously, without the need of any manually extracted features or the use of any external tool. Specifically, we model the entity recognition task using a CRF (Conditional Random Fields) layer and the relation extraction task as a multi-head selection problem (i.e., potentially identify multiple relations for each entity). We present an extensive experimental setup, to demonstrate the effectiveness of our method using datasets from various contexts (i.e., news, biomedical, real estate) and languages (i.e., English, Dutch). Our model outperforms the previous neural models that use automatically extracted features, while it performs within a reasonable margin of feature-based neural models, or even beats them.",
"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1 and 5.7 relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.",
"",
"Extracting adverse drug events receives much research attention in the biomedical community. Previous work adopts pipeline models, firstly recognizing drug disease entity mentions and then identifying adverse drug events from drug disease pairs. In this paper, we investigate joint models for simultaneously extracting drugs, diseases and adverse drug events. Compared with pipeline models, joint models have two main advantages. First, they make use of information integration to facilitate performance improvement; second, they reduce error propagation in pipeline methods. We compare a discrete model and a deep neural model for extracting drugs, diseases and adverse drug events jointly. Experimental results on a standard ADE corpus show that the discrete joint model outperforms a state-of-the-art baseline pipeline significantly. In addition, when discrete features are replaced by neural features, the recall is further improved.",
"Extracting biomedical entities and their relations from text has important applications on biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1 in entity recognition and 8.0 in relation extraction, and that of the second task by 9.2 in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.",
"Entity and relation extraction is a task that combines detecting entity mentions and recognizing entities' semantic relationships from unstructured text. We propose a hybrid neural network model to extract entities and their relationships without any handcrafted features. The hybrid neural network contains a novel bidirectional encoder-decoder LSTM module (BiLSTM-ED) for entity extraction and a CNN module for relation classification. The contextual information of entities obtained in BiLSTM-ED is further passed through to the CNN module to improve the relation classification. We conduct experiments on the public dataset ACE05 (Automatic Content Extraction program) to verify the effectiveness of our method. The method we proposed achieves the state-of-the-art results on the entity and relation extraction task. (C) 2017 Elsevier B.V. All rights reserved.",
""
]
} |
1905.07529 | 2945638817 | Architectures obtained by Neural Architecture Search (NAS) have achieved highly competitive performance in various computer vision tasks. However, the prohibitive computation demand of forward-backward propagation in deep neural networks and searching algorithms makes it difficult to apply NAS in practice. In this paper, we propose a Multinomial Distribution Learning for extremely effective NAS, which considers the search space as a joint multinomial distribution, i.e., the operation between two nodes is sampled from this distribution, and the optimal network structure is obtained by the operations with the most likely probability in this distribution. Therefore, NAS can be transformed to a multinomial distribution learning problem, i.e., the distribution is optimized to have a high expectation of the performance. Besides, a hypothesis that the performance ranking is consistent in every training epoch is proposed and demonstrated to further accelerate the learning process. Experiments on CIFAR10 and ImageNet demonstrate the effectiveness of our method. On CIFAR-10, the structure searched by our method achieves 2.55 test error, while being 6.0x (only 4 GPU hours on GTX1080Ti) faster compared with state-of-the-art NAS algorithms. On ImageNet, our model achieves 75.2 top1 accuracy under MobileNet settings (MobileNet V1 V2), while being 1.2x faster with measured GPU latency. Test code with pre-trained models are available at this https URL | However, the above architecture search algorithms are still computation-intensive. Therefore some recent works are proposed to accelerate NAS in a one-shot setting, where the network is sampled by a hyper representation graph, and the search process can be accelerated by parameter sharing @cite_28 . For instance, DARTS @cite_0 optimizes the weights within two nodes in the hyper-graph jointly with a continuous relaxation. Therefore, the parameters can be updated via standard gradient descent. 
However, one-shot methods suffer from the issue of large GPU memory consumption. To solve this problem, ProxylessNAS @cite_7 explores the search space without a specific agent with path binarization @cite_21 . However, since the search procedure of ProxylessNAS is still within the framework of one-shot methods, it may have the same complexity, i.e., the benefit gained in ProxylessNAS is a trade-off between exploration and exploitation. That is to say, more epochs are needed in the search procedure. Moreover, the search algorithm in @cite_7 is similar to previous work, either differential or RL based methods @cite_0 @cite_35 . | {
"cite_N": [
"@cite_35",
"@cite_7",
"@cite_28",
"@cite_21",
"@cite_0"
],
"mid": [
"",
"2902251695",
"2785366763",
"2963114950",
"2951104886"
],
"abstract": [
"",
"Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g. @math GPU hours) makes it difficult to search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (grow linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level of regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08 test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6 @math fewer parameters. On ImageNet, our model achieves 3.1 better top-1 accuracy than MobileNetV2, while being 1.2 @math faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design.",
"We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller learns to discover neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on the validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Thanks to parameter sharing between child models, ENAS is fast: it delivers strong empirical performances using much fewer GPU-hours than all existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS designs novel architectures that achieve a test error of 2.89 , which is on par with NASNet (Zoph et al., 2018), whose test error is 2.65 .",
"Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms."
]
} |
1905.07447 | 2946222143 | Standardized evaluation measures have aided in the progress of machine learning approaches in disciplines such as computer vision and machine translation. In this paper, we make the case that robotic learning would also benefit from benchmarking, and present the "REPLAB" platform for benchmarking vision-based manipulation tasks. REPLAB is a reproducible and self-contained hardware stack (robot arm, camera, and workspace) that costs about 2000 USD, occupies a cuboid of size 70x40x60 cm, and permits full assembly within a few hours. Through this low-cost, compact design, REPLAB aims to drive wide participation by lowering the barrier to entry into robotics and to enable easy scaling to many robots. We envision REPLAB as a framework for reproducible research across manipulation tasks, and as a step in this direction, we define a template for a grasping benchmark consisting of a task definition, evaluation protocol, performance measures, and a dataset of 92k grasp attempts. We implement, evaluate, and analyze several previously proposed grasping approaches to establish baselines for this benchmark. Finally, we also implement and evaluate a deep reinforcement learning approach for 3D reaching tasks on our REPLAB platform. Project page with assembly instructions, code, and videos: this https URL. | REPLAB is also built with collective robotics in mind. Prior efforts in this direction include @cite_36 @cite_2 , where data collection for grasping was parallelized across many physically collocated robots. Rather than such a collocated group, the Million Object Challenge (MOC) @cite_20 aims to crowdsource grasping data collection from 300 Baxter robots all over the world. REPLAB cells are designed to fit both these cases, since they are low-cost, low-volume, reproducible, and stackable: 20 REPLAB cells stacked to about 2m elevation occupy about the same floor space and cost less than two times as much as a single Baxter arm. 
The closest effort to ours is @cite_35 , which trains grasping policies for low-cost mobile manipulators by collecting data from several such manipulators under varying lighting conditions. Finally, previous efforts have also provided standardized and easily accessible full hardware stacks such as Duckietown for navigation @cite_24 and Robotarium for swarm robotics @cite_10 . We share their motivation of democratizing robotics and driving increased participation, and our focus is on manipulation tasks. | {
"cite_N": [
"@cite_35",
"@cite_36",
"@cite_24",
"@cite_2",
"@cite_10",
"@cite_20"
],
"mid": [
"2886380958",
"2962736495",
"2737347195",
"2810785043",
"2964138223",
""
],
"abstract": [
"Data-driven approaches to solving robotic tasks have gained a lot of traction in recent years. However, most existing policies are trained on large-scale datasets collected in curated lab settings. If we aim to deploy these models in unstructured visual environments like people's homes, they will be unable to cope with the mismatch in data distribution. In such light, we present the first systematic effort in collecting a large dataset for robotic grasping in homes. First, to scale and parallelize data collection, we built a low cost mobile manipulator assembled for under 3K USD. Second, data collected using low cost robots suffer from noisy labels due to imperfect execution and calibration errors. To handle this, we develop a framework which factors out the noise as a latent variable. Our model is trained on 28K grasps collected in several houses under an array of different environmental conditions. We evaluate our models by physically executing grasps on a collection of novel objects in multiple unseen homes. The models trained with our home dataset showed a marked improvement of 43.7 over a baseline model trained with data collected in lab. Our architecture which explicitly models the latent noise in the dataset also performed 10 better than one that did not factor out the noise. We hope this effort inspires the robotics community to look outside the lab and embrace learning based approaches to handle inaccurate cheap robots.",
"We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images independent of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. We describe two large-scale experiments that we conducted on two separate robotic platforms. In the first experiment, about 800,000 grasp attempts were collected over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and gripper wear and tear. In the second experiment, we used a different robotic platform and 8 ro...",
"Duckietown is an open, inexpensive and flexible platform for autonomy education and research. The platform comprises small autonomous vehicles (“Duckiebots”) built from off-the-shelf components, and cities (“Duckietowns”) complete with roads, signage, traffic lights, obstacles, and citizens (duckies) in need of transportation. The Duckietown platform offers a wide range of functionalities at a low cost. Duckiebots sense the world with only one monocular camera and perform all processing onboard with a Raspberry Pi 2, yet are able to: follow lanes while avoiding obstacles, pedestrians (duckies) and other Duckiebots, localize within a global map, navigate a city, and coordinate with other Duckiebots to avoid collisions. Duckietown is a useful tool since educators and researchers can save money and time by not having to develop all of the necessary supporting infrastructure and capabilities. All materials are available as open source, and the hope is that others in the community will adopt the platform for education and research.",
"In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96 grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.",
"This paper describes the Robotarium — a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-robot research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints, in turn, limit access for large groups of researchers and students, which is what the Robotarium is remedying by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and discusses the considerations one must take when making complex hardware remotely accessible. In particular, safety must be built into the system already at the design phase without overly constraining what coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.",
""
]
} |
1905.07358 | 2946124920 | Cross-lingual embeddings represent the meaning of words from different languages in the same vector space. Recent work has shown that it is possible to construct such representations by aligning independently learned monolingual embedding spaces, and that accurate alignments can be obtained even without external bilingual data. In this paper we explore a research direction which has been surprisingly neglected in the literature: leveraging noisy user-generated text to learn cross-lingual embeddings particularly tailored towards social media applications. While the noisiness and informal nature of the social media genre poses additional challenges to cross-lingual embedding methods, we find that it also provides key opportunities due to the abundance of code-switching and the existence of a shared vocabulary of emoji and named entities. Our contribution consists in a very simple post-processing step that exploits these phenomena to significantly improve the performance of state-of-the-art alignment methods. | Cross-lingual embeddings are becoming increasingly popular in NLP @cite_2 @cite_23 , especially since the recent introduction of models requiring almost no supervision @cite_20 @cite_30 @cite_3 @cite_44 @cite_56 @cite_7 . These models have shown to be highly competitive compared to fully supervised baselines (which are typically trained on parallel corpora). | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_3",
"@cite_56",
"@cite_44",
"@cite_23",
"@cite_2",
"@cite_20"
],
"mid": [
"342285082",
"2888389098",
"2294774419",
"2741602058",
"2594021297",
"2968867641",
"2962795068",
"2126725946"
],
"abstract": [
"The distributional hypothesis of Harris (1954), according to which the meaning of words is evidenced by the contexts they occur in, has motivated several effective techniques for obtaining vector space semantic representations of words using unannotated text corpora. This paper argues that lexico-semantic content should additionally be invariant across languages and proposes a simple technique based on canonical correlation analysis (CCA) for incorporating multilingual evidence into vectors generated monolingually. We evaluate the resulting word representations on standard lexical semantic evaluation tasks and show that our method produces substantially better semantic representations than monolingual techniques.",
"Cross-lingual word embeddings are becoming increasingly important in multilingual NLP. Recently, it has been shown that these embeddings can be effectively learned by aligning two disjoint monolingual vector spaces through linear transformations, using no more than a small bilingual dictionary as supervision. In this work, we propose to apply an additional transformation after the initial alignment step, which moves cross-lingual synonyms towards a middle point between them. By applying this transformation our aim is to obtain a better cross-lingual integration of the vector spaces. In addition, and perhaps surprisingly, the monolingual spaces also improve by this transformation. This is in contrast to the original alignment, which is typically learned such that the structure of the monolingual spaces is preserved. Our experiments confirm that the resulting cross-lingual embeddings outperform state-of-the-art models in both monolingual and cross-lingual evaluation tasks.",
"Word embedding has been found to be highly powerful to translate words from one language to another by a simple linear transform. However, we found some inconsistence among the objective functions of the embedding and the transform learning, as well as the distance measurement. This paper proposes a solution which normalizes the word vectors on a hypersphere and constrains the linear transform as an orthogonal transform. The experimental results confirmed that the proposed solution can offer better performance on a word similarity task and an English-toSpanish word translation task.",
"",
"Usually bilingual word vectors are trained \"online\". Mikolov et al. showed they can also be found \"offline\"; whereby two pre-trained embeddings are aligned with a linear transformation, using dictionaries compiled from expert knowledge. In this work, we prove that the linear transformation between two spaces should be orthogonal. This transformation can be obtained using the singular value decomposition. We introduce a novel \"inverted softmax\" for identifying translation pairs, with which we improve the precision @1 of Mikolov's original mapping from 34 to 43 , when translating a test set composed of both common and rare English words into Italian. Orthogonal transformations are more robust to noise, enabling us to learn the transformation without expert bilingual signal by constructing a \"pseudo-dictionary\" from the identical character strings which appear in both languages, achieving 40 precision on the same test set. Finally, we extend our method to retrieve the true translations of English sentences from a corpus of 200k Italian sentences with a precision @1 of 68 .",
"",
"",
"Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90 precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs."
]
} |
1905.07346 | 2945856475 | In recent years, advances in deep learning have resulted in unprecedented leaps in diverse tasks spanning from speech and object recognition to context awareness and health monitoring. As a result, an increasing number of AI-enabled applications are being developed targeting ubiquitous and mobile devices. While deep neural networks (DNNs) are getting bigger and more complex, they also impose a heavy computational and energy burden on the host devices, which has led to the integration of various specialized processors in commodity devices. Given the broad range of competing DNN architectures and the heterogeneity of the target hardware, there is an emerging need to understand the compatibility between DNN-platform pairs and the expected performance benefits on each platform. This work attempts to demystify this landscape by systematically evaluating a collection of state-of-the-art DNNs on a wide variety of commodity devices. In this respect, we identify potential bottlenecks in each architecture and provide important guidelines that can assist the community in the co-design of more efficient DNNs and accelerators. | So far, a few studies have focused on analyzing the system-level properties of DNNs on deployment platforms. Canziani @cite_15 presented a system-level analysis of 14 convolutional neural networks (CNNs) on the NVIDIA Jetson TX1 platform. Despite the fact that the analysis spanned multiple metrics, the study was conducted over a limited number of networks and targeted only a single platform. Bianco @cite_11 extended the covered space by evaluating a wider range of networks and targeting one embedded (Jetson TX1) and one high-end compute platform (NVIDIA Titan X GPU). Both studies conducted an analysis of the selected networks across multiple dimensions, including accuracy, compute speed, memory footprint and power consumption. 
Nevertheless, by including a total of two platforms --and given the heterogeneity of currently available devices-- the presented insights are not directly transferable to platforms with different characteristics. | {
"cite_N": [
"@cite_15",
"@cite_11"
],
"mid": [
"2759287047",
"2893813411"
],
"abstract": [
"Since the emergence of Deep Neural Networks (DNNs) as a prominent technique in the field of computer vision, the ImageNet classification challenge has played a major role in advancing the state-of-the-art. While accuracy figures have steadily increased, the resource utilization of winning models has not been properly taken into account. In this work, we present a comprehensive analysis of important metrics in practical applications: accuracy, memory footprint, parameters, operations count, inference time and power consumption. Key findings are: (1) power consumption is independent of batch size and architecture; (2) accuracy and inference time are in a hyperbolic relationship; (3) energy constraint is an upper bound on the maximum achievable accuracy and model complexity; (4) the number of operations is a reliable estimate of the inference time. We believe our analysis provides a compelling set of information that helps design and engineer efficient DNNs.",
"This paper presents an in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN, multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of such performance indices and some combinations of them are analyzed and discussed. To measure the indices, we experiment the use of DNNs on two different computer architectures, a workstation equipped with a NVIDIA Titan X Pascal, and an embedded system based on a NVIDIA Jetson TX1 board. This experimentation allows a direct comparison between DNNs running on machines with very different computational capacities. This paper is useful for researchers to have a complete view of what solutions have been explored so far and in which research directions are worth exploring in the future, and for practitioners to select the DNN architecture(s) that better fit the resource constraints of practical deployments and applications. To complete this work, all the DNNs, as well as the software used for the analysis, are available online."
]
} |
1905.07346 | 2945856475 | In recent years, advances in deep learning have resulted in unprecedented leaps in diverse tasks spanning from speech and object recognition to context awareness and health monitoring. As a result, an increasing number of AI-enabled applications are being developed targeting ubiquitous and mobile devices. While deep neural networks (DNNs) are getting bigger and more complex, they also impose a heavy computational and energy burden on the host devices, which has led to the integration of various specialized processors in commodity devices. Given the broad range of competing DNN architectures and the heterogeneity of the target hardware, there is an emerging need to understand the compatibility between DNN-platform pairs and the expected performance benefits on each platform. This work attempts to demystify this landscape by systematically evaluating a collection of state-of-the-art DNNs on a wide variety of commodity devices. In this respect, we identify potential bottlenecks in each architecture and provide important guidelines that can assist the community in the co-design of more efficient DNNs and accelerators. | In a slightly different setting, Huang @cite_21 concentrated on the task of object detection and evaluated a wide set of CNN-based object detectors in terms of processing performance and detection accuracy. With a focus on the mobile space, Ignatov @cite_24 assembled a benchmark suite of representative AI tasks to assess the processing capabilities of currently available smartphones. In this paper, we adopt a wider scope than @cite_21 by treating network architectures in a task-agnostic manner and targeting more diverse families of devices compared to @cite_24 . | {
"cite_N": [
"@cite_24",
"@cite_21"
],
"mid": [
"2895432151",
"2557728737"
],
"abstract": [
"Over the last years, the computational power of mobile devices such as smartphones and tablets has grown dramatically, reaching the level of desktop computers available not long ago. While standard smartphone apps are no longer a problem for them, there is still a group of tasks that can easily challenge even high-end devices, namely running artificial intelligence algorithms. In this paper, we present a study of the current state of deep learning in the Android ecosystem and describe available frameworks, programming models and the limitations of running AI on smartphones. We give an overview of the hardware acceleration resources available on four main mobile chipset platforms: Qualcomm, HiSilicon, MediaTek and Samsung. Additionally, we present the real-world performance results of different mobile SoCs collected with AI Benchmark (http: ai-benchmark.com) that are covering all main existing hardware configurations.",
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-toapples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [30], R-FCN [6] and SSD [25] systems, which we view as meta-architectures and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task."
]
} |
1905.07491 | 2966531495 | In the presented scenario, an autonomous surface vehicle (ASV) equipped with a laser scanner navigates on an inland pathway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver present on board experiences signal loss and multipath reflections in situations when the view of the sky is obscured by a bridge or tall buildings. In both cases, a potentially dangerous situation is provoked, as the robot has no or inaccurate positioning data. A sensor data processing scheme is proposed in which these gaps are smoothly filled in by positioning data generated from scan matching and registration of the laser data. This article shows preliminary results of positioning data improvement during trials in a harbor-river environment. | Several advanced, mostly experimental autonomous platforms are also equipped with complementary environment mapping algorithms, together constituting simultaneous localization and mapping (SLAM). The technique was also applied to underwater robots by Ribas @cite_9 , Mallios @cite_1 and others. While SLAM represents a complete localization solution, it creates a significant memory overhead, due to the need to maintain a map of the environment, and a computational overhead, due to the constant stream of new sensor data and map updates. In the case of survey vehicles, creating a product-grade map of the scene for immediate navigation purposes would be impractical. Thus, some authors propose localisation on a (partial) existing map, notably using techniques of scan matching @cite_4 . Matching a current scan to a global map yields a global position candidate. The approach taken in this work makes use of relative matching of consecutive scans without a global map. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_4"
],
"mid": [
"2101034852",
"2060977605",
""
],
"abstract": [
"This paper describes a navigation system for autonomous underwater vehicles (AUVs) in partially structured environments, such as dams, harbors, marinas, and marine platforms. A mechanically scanned imaging sonar is used to obtain information about the location of vertical planar structures present in such environments. A robust voting algorithm has been developed to extract line features, together with their uncertainty, from the continuous sonar data flow. The obtained information is incorporated into a feature-based simultaneous localization and mapping (SLAM) algorithm running an extended Kalman filter. Simultaneously, the AUV's position estimate is provided to the feature extraction algorithm to correct the distortions that the vehicle motion produces in the acoustic images. Moreover, a procedure to build and maintain a sequence of local maps and to posteriorly recover the full global map has been adapted for the application presented. Experiments carried out in a marina located in the Costa Brava (Spain) with the Ictineu AUV show the viability of the proposed approach. © 2008 Wiley Periodicals, Inc.",
"This paper proposes a pose-based algorithm to solve the full Simultaneous Localization And Mapping (SLAM) problem for an Autonomous Underwater Vehicle (AUV), navigating in an unknown and possibly unstructured environment. A probabilistic scan matching technique using range scans gathered from a Mechanical Scanning Imaging Sonar (MSIS) is used together with the robot dead-reckoning displacements. The proposed method utilizes two Extended Kalman Filters (EKFs). The first, estimates the local path traveled by the robot while forming the scan as well as its uncertainty, providing position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented state EKF that estimates and keeps the registered scans poses. The raw data from the sensors are processed and fused in-line. No priory structural information or initial pose are considered. Also, a method of estimating the uncertainty of the scan matching estimation is provided. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.",
""
]
} |
1905.07491 | 2966531495 | In the presented scenario, an autonomous surface vehicle (ASV) equipped with a laser scanner navigates on an inland pathway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver present on board experiences signal loss and multipath reflections in situations when the view of the sky is obscured by a bridge or tall buildings. In both cases, a potentially dangerous situation is provoked, as the robot has no or inaccurate positioning data. A sensor data processing scheme is proposed in which these gaps are smoothly filled in by positioning data generated from scan matching and registration of the laser data. This article shows preliminary results of positioning data improvement during trials in a harbor-river environment. | LiDAR, or 3-D laser scanners, provide a measurement of distance along an array of laser beams typically mounted on a rotating head, so that they can sweep a large volume of the environment. They are commonly used in autonomous driving @cite_15 , but they are also making their way underwater, where they enable such applications as autonomous or millimetric-precision surveys @cite_14 . 3-D scanners produce a considerable volume of data in the form of structured clouds of 3-D points, although some devices can also output depth images. In addition to the geometric information, the intensity of the laser reflection is often recorded for every point. | {
"cite_N": [
"@cite_15",
"@cite_14"
],
"mid": [
"1645808665",
"1577920937"
],
"abstract": [
"Road segmentation, obstacle detection, situation awareness constitute fundamental tasks for autonomous vehicles in urban environments. This paper describes an end-to-end system capable of generating high-quality 3D point clouds from one or two of the popular LMS200 laser on a continuously moving vehicle.",
"Advances in autonomous inspection of subsea facilities using a 3 Dimensional(3D) LIght Detection And Ranging (LiDAR) sensor are examined to illustrate the favorable enhancement of safety, reliability, reduction in risks, economic benefits and superior data products compared to conventional means. These benefits provide operators with significant improvements over general visual inspection by the addition of sensors that produce 3D models of the structure being inspected. Examples are provided illustrating test data from operations conducted from 2012-2013. Supported by funding from the Research Partnership to Secure Energy for America (RPSEA) Lockheed Martin MST teamed with the innovative company 3D at Depth to incorporate their new DP2TM 3D LiDAR onto the proven Marlin® Autonomous Underwater Vehicle. The objectives of the project include: Survey using combination of high resolution 3D Sonar and 3D LiDAR Generation of high fidelity 3D models Detection and localization of structural changes vs. reference model Lockheed Martin has successfully integrated the laser into the Marlin shore based laboratory addressing the challenges associated with coordinating a scanning laser from a moving platform and producing high resolution georegistered images. Testing will be conducted in early 2014 to validate the system in the Atlantic Ocean waters offshore Palm Beach, Florida. Three dimensional georegistered models of an entire scene can be rapidly collected providing a clear vision of the underwater scene along high resolution 3D models of the imaged item. Data collected at sea and in the laboratory will be presented demonstrating the performance of the system. Application to deepwater life of field inspection will be presented with evidence gained from offshore trials. 
This emergent technology supports Subsea Facility Inspection Repair and Maintenance, Integrity Management Inspections of Marine Risers, Moorings and anchors, Subsea Pipelines, Flowlines, Umbilicals, and supporting subsea infrastructure."
]
} |
1905.07491 | 2966531495 | In the presented scenario, an autonomous surface vehicle (ASV) equipped with a laser scanner navigates on an inland pathway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver present on board experiences signal loss and multipath reflections in situations when the view of the sky is obscured by a bridge or tall buildings. In both cases, a potentially dangerous situation is provoked, as the robot has no or inaccurate positioning data. A sensor data processing scheme is proposed in which these gaps are smoothly filled in by positioning data generated from scan matching and registration of the laser data. This article shows preliminary results of positioning data improvement during trials in a harbor-river environment. | In addition to sonar and LiDAR, vision-based ASV navigation has been explored, for example, by Dunbabin et al. @cite_13 , Wang et al. @cite_10 , and Heidarsson and Sukhatme @cite_8 . Despite producing information-rich data at a high frequency, standard imaging techniques do not provide exact geometric information about the environment. Thus, with decreasing sensor prices, LiDARs are becoming more commonplace in autonomous vehicles, including ASVs. | {
"cite_N": [
"@cite_10",
"@cite_13",
"@cite_8"
],
"mid": [
"2026923097",
"2138906051",
"2132169253"
],
"abstract": [
"Over the past decade, the trajectory tracking of underactuated water surface robots (or boats, surface vessels, etc.) has been an attractive topic in the control and automation community, and numerous controllers are proposed for this challenging problem. However, most, if not all, of the existing trajectory tracking controllers of the underactuated water surface robots assume the global positions of the robots can be accurately measured. In the practical applications, the global position measurement systems are sometimes unstable or even unavailable in the working environments of the robots. To avoid the direct position measurement, this brief presents a new controller for the trajectory tracking of underactuated water surface robots using monocular visual feedback. This controller works on the basis of a novel adaptive algorithm for estimating global position of the robot online using visual feature tracking from a monocular camera, and its orientation and velocity measured by the attitude and heading reference system sensor and a velocity observer. It is proved by Lyapunov theory that the proposed adaptive visual servo controller gives rise to asymptotic tracking of a desired trajectory and convergence of the position estimation to the actual position. Experiments are conducted to validate the effectiveness and robust performance of the proposed controller.",
"This paper describes the development of a novel vision-based autonomous surface vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an autonomous underwater vehicle, at the water's surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual force based docking strategy, with the second performing vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology allowing the vehicle to be observed by, and even integrated within an ad-hoc sensor network. Simulated and experimental results are presented demonstrating the autonomous vision- based docking strategy on a proof-of-concept vehicle.",
"We describe a technique for an Autonomous Surface Vehicle (ASV) to learn an obstacle map by classifying overhead imagery. Classification labels are supplied by a front-facing sonar, mounted under the water line on the ASV. We use aerial imagery from two online sources for each of two water bodies (a small lake and a harbor) and train classifiers using features generated from each image source separately, followed by combining their output. Data collected using a sonar mounted on the ASV were used to generate the labels in the experimental study. The results show that we are able to generate accurate obstacle maps well-suited for ASV navigation."
]
} |
1905.07491 | 2966531495 | In the presented scenario, an autonomous surface vehicle (ASV) equipped with a laser scanner navigates on an inland pathway surrounded and crossed by man-made structures such as bridges and locks. The GPS receiver present on board experiences signal loss and multipath reflections in situations when the view of the sky is obscured by a bridge or tall buildings. In both cases, a potentially dangerous situation is provoked, as the robot has no or inaccurate positioning data. A sensor data processing scheme is proposed in which these gaps are smoothly filled in by positioning data generated from scan matching and registration of the laser data. This article shows preliminary results of positioning data improvement during trials in a harbor-river environment. | The idea of calculating relative displacement from incremental LiDAR scan matching is not new. Tang et al. @cite_3 make use of such a technique for the indoor navigation of a terrestrial robot. However, data collected in a building environment has many easy-to-exploit characteristics, such as near-perfect planes created by the walls, short sensing distances and the existence of the ground plane. The techniques using bathymetric sonar readings and a known depth map, commonly referred to as ``terrain-based navigation'', belong to the same family, although with a slightly different geometry of the problem (as explored by Lucido et al. @cite_2 , Li et al. @cite_12 and others). | {
"cite_N": [
"@cite_12",
"@cite_3",
"@cite_2"
],
"mid": [
"2792681400",
"2162136131",
"2017444115"
],
"abstract": [
"Abstract Considering that the terrain-aided navigation (TAN) system based on iterated closest contour point (ICCP) algorithm diverges easily when the indicative track of strapdown inertial navigation system (SINS) is large, Kalman filter is adopted in the traditional ICCP algorithm, difference between matching result and SINS output is used as the measurement of Kalman filter, then the cumulative error of the SINS is corrected in time by filter feedback correction, and the indicative track used in ICCP is improved. The mathematic model of the autonomous underwater vehicle (AUV) integrated into the navigation system and the observation model of TAN is built. Proper matching point number is designated by comparing the simulation results of matching time and matching precision. Simulation experiments are carried out according to the ICCP algorithm and the mathematic model. It can be concluded from the simulation experiments that the navigation accuracy and stability are improved with the proposed combinational algorithm in case that proper matching point number is engaged. It will be shown that the integrated navigation system is effective in prohibiting the divergence of the indicative track and can meet the requirements of underwater, long-term and high precision of the navigation system for autonomous underwater vehicles.",
"A new scan that matches an aided Inertial Navigation System (INS) with a low-cost LiDAR is proposed as an alternative to GNSS-based navigation systems in GNSS-degraded or -denied environments such as indoor areas, dense forests, or urban canyons. In these areas, INS-based Dead Reckoning (DR) and Simultaneous Localization and Mapping (SLAM) technologies are normally used to estimate positions as separate tools. However, there are critical implementation problems with each standalone system. The drift errors of velocity, position, and heading angles in an INS will accumulate over time, and on-line calibration is a must for sustaining positioning accuracy. SLAM performance is poor in featureless environments where the matching errors can significantly increase. Each standalone positioning method cannot offer a sustainable navigation solution with acceptable accuracy. This paper integrates two complementary technologies—INS and LiDAR SLAM—into one navigation frame with a loosely coupled Extended Kalman Filter (EKF) to use the advantages and overcome the drawbacks of each system to establish a stable long-term navigation process. Static and dynamic field tests were carried out with a self-developed Unmanned Ground Vehicle (UGV) platform—NAVIS. The results prove that the proposed approach can provide positioning accuracy at the centimetre level for long-term operations, even in a featureless indoor environment.",
"A terrain-based underwater navigation using sonar bathymetric profiles is presented. It deals with matching high-resolution local depth maps against a large on-board reference map. The matching algorithm locates the local depth map within the a priori known larger map to determine the absolute position and heading of the vehicle. Two separate approaches for this problem are presented. The first uses a contour-based representation of depth maps. Contours are extracted from both local and reference maps. Invariant attributes under rigid plane transformation are associated with each contour point, so that the problem is reduced to a point-based matching algorithm: given two point sets, find correspondences and estimate transformation between the two sets. We shall particularly focus on the formalism of partial differential equations, which is used to smooth depth maps in a morphologically invariant way and to obtain anisotropic contours. The second approach is also based on a correspondence algorithm. Here, ..."
]
} |
1905.07173 | 2945551312 | Committees are an important scenario for reaching consensus. Beyond standard consensus-seeking issues, committee decisions are complicated by a deadline, e.g., the next start date for a budget, or the start of a semester. In committee hiring decisions, it may be that if no candidate is supported by a strong majority, the default is to hire no one---an option that may cost committee members dearly. As a result, committee members might prefer to agree on a reasonable, if not necessarily the best, candidate, to avoid unfilled positions. In this paper we propose a model for the above scenario---consensus under a deadline (CUD)---based on a time-bounded iterative voting process. We explore theoretical features of CUDs, particularly focusing on convergence guarantees and the quality of the final decision. An extensive experimental study demonstrates more subtle features of CUDs, e.g., the difference between two simple types of committee member behavior, lazy vs. proactive voters. Finally, a user study examines the differences between the behavior of rational voting bots and real voters, concluding that it may often be best to have bots play on the voters' behalf. | A CUD game is a type of iterative voting game, e.g., @cite_8 @cite_24 @cite_14 @cite_6 @cite_16 . However, CUDs have several unique features. First, although CUDs do utilise a known voting rule, they work directly with the set of possible winners (i.e., alternatives that might be chosen by the majority), and behave much like non-myopic games based on local-dominance (see, e.g., @cite_19 @cite_18 @cite_5 ). On the other hand, the distinction between lazy and proactive voter behaviour links CUDs with biased voting (see, e.g., @cite_10 ). | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"2507751917",
"",
"1485127598",
"2143304863",
"1970744965",
"101344701",
"",
"1485809414"
],
"abstract": [
"",
"We cast the various different models used for the analysis of iterative voting schemes into a general framework, consistent with the literature on acyclicity in games. More specifically, we classify convergence results based on the underlying assumptions on the agent scheduler (the order of players) and the action scheduler (the response played by the agent).",
"",
"We study convergence properties of iterative voting procedures. Such procedures are defined by a voting rule and a (restricted) iterative process, where at each step one agent can modify his vote towards a better outcome for himself. It is already known that if the iteration dynamics (the manner in which voters are allowed to modify their votes) are unrestricted, then the voting process may not converge. For most common voting rules this may be observed even under the best response dynamics limitation. It is therefore important to investigate whether and which natural restrictions on the dynamics of iterative voting procedures can guarantee convergence. To this end, we provide two general conditions on the dynamics based on iterative myopic improvements, each of which is sufficient for convergence. We then identify several classes of voting rules (including Positional Scoring Rules, Maximin, Copeland and Bucklin), along with their corresponding iterative processes, for which at least one of these conditions hold.",
"We develop a formal model of opinion polls in elections and study how they influence the voting behaviour of the participating agents, and thereby election outcomes. This approach is particularly relevant to the study of collective decision making by means of voting in multiagent systems, where it is reasonable to assume that we can precisely model the amount of information available to agents and where agents can be expected to follow relatively simple rules when adjusting their behaviour in response to polls. We analyse two settings, one where a single agent strategises in view of a single poll, and one where multiple agents repeatedly update their voting intentions in view of a sequence of polls. In the single-poll setting we vary the amount of information a poll provides and examine, for different voting rules, when an agent starts and stops having an incentive to manipulate the election. In the repeated-poll setting, using both analytical and experimental methods, we study how the properties of different voting rules are affected under different sets of assumptions on how agents will respond to poll information. Together, our results clarify under which circumstances sharing information via opinion polls can improve the quality of election outcomes and under which circumstances it may have negative effects, due to the increased opportunities for manipulation it provides.",
"We suggest a new model for strategic voting based on local dominance, where voters consider a set of possible outcomes without assigning probabilities to them. We prove that voting equilibria under the Plurality rule exist for a broad class of local dominance relations. Furthermore, we show that local dominance-based dynamics quickly converge to an equilibrium if voters start from the truthful state, and we provide weaker convergence guarantees in more general settings. Using extensive simulations of strategic voting on generated and real profiles, we show that emerging equilibria replicate widely known patterns of human voting behavior such as Duverger's law, and that they generally improve the quality of the winner compared to non-strategic voting.",
"Understanding the nature of strategic voting is the holy grail of social choice theory, where game-theory, social science and recently computational approaches are all applied in order to model the incentives and behavior of voters. In a recent paper, (2014) made another step in this direction, by suggesting a behavioral game-theoretic model for voters under uncertainty. For a specific variation of best-response heuristics, they proved initial existence and convergence results in the Plurality voting system. This paper extends the model in multiple directions, considering voters with different uncertainty levels, simultaneous strategic decisions, and a more permissive notion of best-response. It is proved that a voting equilibrium exists even in the most general case. Further, any society voting in an iterative setting is guaranteed to converge to an equilibrium. An alternative behavior is analyzed, where voters try to minimize their worst-case regret. As it turns out, the two behaviors coincide in the simple setting of (2014), but not in the general case.",
"",
"We present a systematic study of Plurality elections with strategic voters who, in addition to having preferences over election winners, also have secondary preferences, governing their behavior when their vote cannot affect the election outcome. Specifically, we study two models that have been recently considered in the literature: lazy voters, who prefer to abstain when they are not pivotal, and truth-biased voters, who prefer to vote truthfully when they are not pivotal. For both lazy and truth-biased voters, we are interested in their behavior under different tie-breaking rules (lexicographic rule, random voter rule, random candidate rule). Two of these six combinations of secondary preferences and tie-breaking rules have been studied in prior work; for the remaining four, we characterize pure Nash equilibria (PNE) of the resulting strategic games and study the complexity of related computational problems. We then use these results to analyze the impact of different secondary preferences and tie-breaking rules on the election outcomes. Our results extend to settings where some of the voters are non-strategic."
]
} |
1905.07173 | 2945551312 | Committees are an important scenario for reaching consensus. Beyond standard consensus-seeking issues, committee decisions are complicated by a deadline, e.g., the next start date for a budget, or the start of a semester. In committee hiring decisions, it may be that if no candidate is supported by a strong majority, the default is to hire no one---an option that may cost committee members dearly. As a result, committee members might prefer to agree on a reasonable, if not necessarily the best, candidate, to avoid unfilled positions. In this paper we propose a model for the above scenario------based on a time-bounded iterative voting process. We explore theoretical features of CUDs, particularly focusing on convergence guarantees and quality of the final decision. An extensive experimental study demonstrates more subtle features of CUDs, e.g., the difference between two simple types of committee member behavior, lazy vs. proactive voters. Finally, a user study examines the differences between the behavior of rational voting bots and real voters, concluding that it may often be best to have bots play on the voters' behalf. | @cite_11 performed a study of human behavior in online voting. They classified voters into three distinct types, two of which are not strategic and one that performs straightforward strategic moves. However, their setting does not contain a deadline. Moreover, while they provide valuable insights on human voting patterns, they do not compare their framework with a rational strategic model, so the question of whether humans and bots act alike remains unanswered. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2245193660"
],
"abstract": [
"Plurality voting is perhaps the most commonly used way to aggregate the preferences of multiple voters. The purpose of this paper is to provide a comprehensive study of people's voting behaviour in various online settings under the Plurality rule. Our empirical methodology consisted of a voting game in which participants vote for a single candidate out of a given set. We implemented voting games that replicate two common real-world voting scenarios: In the first, a single voter votes once after seeing a large pre-election poll. In the second game, several voters play simultaneously, and change their vote as the game progresses, as in small committees. The winning candidate in each game (and hence the subject's payment) is determined using the plurality rule. For each of these settings we generated hundreds of game instances, varying conditions such as the number of voters and their preferences. We show that people can be classified into at least three groups, two of which are not engaged in any strategic behavior. The third and largest group tends to select the natural \"default\" action when there is no clear strategic alternative. When an active strategic decision can be made that improves their immediate payoff, people usually choose that strategic alternative. Our study has insight for multi-agent system designers in uncovering patterns that provide reasonable predictions of voters' behaviors, which may facilitate the design of agents that support people or act autonomously in voting systems."
]
} |
1905.07072 | 2946830201 | Model compression is eminently suited for deploying deep learning on IoT-devices. However, existing model compression techniques rely on access to the original or some alternate dataset. In this paper, we address the model compression problem when no real data is available, e.g., when data is private. To this end, we propose Dream Distillation, a data-independent model compression framework. Our experiments show that Dream Distillation can achieve 88.5% accuracy on the CIFAR-10 test set without actually training on the original data! | KD refers to the teacher-student paradigm, where the teacher model is a large deep network we want to compress @cite_1 @cite_6. In KD, we train a significantly smaller student neural network to mimic this large teacher model (see Fig. (a)). KD has also been shown to work with unlabeled datasets @cite_3. Of note, since the term "model compression" usually refers to pruning and quantization, we assume KD to be a part of model compression, as it also leads to significantly compressed models. | {
"cite_N": [
"@cite_3",
"@cite_1",
"@cite_6"
],
"mid": [
"2604342492",
"1821462560",
""
],
"abstract": [
"Current approaches for Knowledge Distillation (KD) either directly use training data or sample from the training data distribution. In this paper, we demonstrate effectiveness of 'mismatched' unlabeled stimulus to perform KD for image classification networks. For illustration, we consider scenarios where this is a complete absence of training data, or mismatched stimulus has to be used for augmenting a small amount of training data. We demonstrate that stimulus complexity is a key factor for distillation's good performance. Our examples include use of various datasets for stimulating MNIST and CIFAR teachers.",
"A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.",
""
]
} |
1905.07072 | 2946830201 | Model compression is eminently suited for deploying deep learning on IoT-devices. However, existing model compression techniques rely on access to the original or some alternate dataset. In this paper, we address the model compression problem when no real data is available, e.g., when data is private. To this end, we propose Dream Distillation, a data-independent model compression framework. Our experiments show that Dream Distillation can achieve 88.5% accuracy on the CIFAR-10 test set without actually training on the original data! | Despite its significance, the literature on model compression in the absence of real data is very sparse. A relevant prior work is @cite_4, where the authors propose a Data-Free KD (DFKD) framework. However, there are major differences between DFKD and the present work: DFKD requires significantly more metadata than our approach. Specifically, @cite_4 argue that using metadata from only the final layer under-constrains the image generation problem, and results in very poor student accuracy. Consequently, DFKD assumes access to metadata at all layers. In contrast, Dream Distillation assumes that metadata is available only at one layer of the teacher network. Hence, in this paper, we precisely demonstrate that metadata from a single layer is sufficient to achieve high student accuracy, something that DFKD failed to accomplish. When using metadata from only one layer, DFKD achieves only @math -77% accuracy. DFKD also proposes a spectral method-based metadata for synthetic image generation. However, both spectral methods and all-layer metadata can be computationally very expensive and do not scale for larger networks. Compared to these, we follow a clustering-based approach, which helps generate diverse images while using significantly less computation. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2766966408"
],
"abstract": [
"Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose release poses privacy or safety concerns as may be the case for biometrics tasks. We present a method for data-free knowledge distillation, which is able to compress deep neural networks trained on large-scale datasets to a fraction of their size leveraging only some extra metadata to be provided with a pretrained model release. We also explore different kinds of metadata that can be used with our method, and discuss tradeoffs involved in using each of them."
]
} |
1905.07072 | 2946830201 | Model compression is eminently suited for deploying deep learning on IoT-devices. However, existing model compression techniques rely on access to the original or some alternate dataset. In this paper, we address the model compression problem when no real data is available, e.g., when data is private. To this end, we propose Dream Distillation, a data-independent model compression framework. Our experiments show that Dream Distillation can achieve 88.5% accuracy on the CIFAR-10 test set without actually training on the original data! | Finally, @cite_0 focus on data-free finetuning for pruning, and show the effectiveness of their approach for fully-connected layers. In comparison, our work is much more general, as we do not focus on just the finetuning of a compressed model, but rather on training a compressed student model from scratch. | {
"cite_N": [
"@cite_0"
],
"mid": [
"992687842"
],
"abstract": [
"Deep Neural nets (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant, and propose a systematic way to remove them. Our experiments in pruning the densely connected layers show that we can remove upto 85 of the total parameters in an MNIST-trained network, and about 35 for AlexNet without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network."
]
} |
1905.07152 | 2966686838 | Congestion control is an indispensable component of transport protocols to prevent congestion collapse. As such, it distributes the available bandwidth among all competing flows, ideally in a fair manner. However, there exists a constantly evolving set of congestion control algorithms, each addressing different performance needs and providing the potential for custom parametrizations. In particular, content providers such as CDNs are known to tune TCP stacks for performance gains. In this paper, we thus empirically investigate if current Internet traffic generated by content providers still adheres to the conventional understanding of fairness. Our study compares fairness properties of testbed hosts to actual traffic of six major content providers subject to different bandwidths, RTTs, queue sizes, and queueing disciplines in a home-user setting. We find that some employed congestion control algorithms lead to significantly asymmetric bandwidth shares; however, AQMs such as FQ_CoDel are able to alleviate such unfairness. | Intra-Protocol and RTT-Fairness. For Cubic, research has commonly found decent intra-protocol fairness and an inverse-proportional RTT-fairness, meaning that instances with smaller RTTs get a larger share of the overall bandwidth. These findings have been confirmed for a large set of different network characteristics, ranging from small (10 Mbps, @cite_24) to large bottleneck bandwidths (10 Gbps, @cite_12) and from short (16 ms, @cite_11 @cite_3) to long (324 ms, @cite_11 @cite_3) RTTs. | {
"cite_N": [
"@cite_24",
"@cite_3",
"@cite_12",
"@cite_11"
],
"mid": [
"600394635",
"2107575014",
"1986442642",
"2022844530"
],
"abstract": [
"In this paper we present an initial experimental evaluation of the recently proposed Cubic-TCP algorithm. Results are presented using a suite of benchmark tests that have been recently proposed in the literature [12], and a number of issues are of practical concern highlighted.",
"We present experimental results evaluating fairness of several proposals to change the TCP congestion control algorithm, in support of operation on high bandwidth-delay- product (BDP) network paths. We examine and compare the fairness of New Reno TCP BIC, cubic, hamilton-TCP, highspeed-TCP and Scalable-TCP. We focus on four different views of fairness: TCP-friendliness RTT-fairness, intra- and inter-protocol fairness.",
"Several high-speed TCP variants are adopted by end users and therefore, heterogeneous congestion control has become the characteristic of newly emerging high-speed optical networks. In contrast to homogeneous TCP flows, fairness among heterogeneous TCP flows now depends on router parameters such as queue management scheme and buffer size. To the best of our knowledge, this is the first study of fairness performance among heterogeneous TCP variants over a 10 Gbps high-speed optical network environment. Our evaluation scenarios for heterogeneous TCP flows consist of TCP variants with substantial presence in current Internet; therefore, TCP-SACK, CUBIC TCP (CUBIC) and HighSpeed TCP (HSTCP) compete for bottleneck bandwidth. Experimental results for fairness are presented for different queue management schemes, such as Drop-tail, Random Early Detection (RED), CHOose and Keep for responsive flows CHOose and Kill for unresponsive flows (CHOKe), and Approximate Fair Dropping (AFD), with varying degree of buffer sizes. We observe heterogeneous TCP flows induce more unfairness than homogeneous ones. Active queue management (AQM) schemes, RED, CHOKe, and AFD, improve fairness for large buffer sizes as compared to Drop-tail, whereas AQM schemes lose the fairness advantage for small buffer sizes. This study provides preliminary results for fairness challenges to deploy all-optical routers with limited buffer size.",
"CUBIC is a congestion control protocol for TCP (transmission control protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long distance networks. It also achieves more equitable bandwidth allocations among flows with different RTTs (round trip times) by making the window growth to be independent of RTT -- thus those flows grow their congestion window at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and the slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth and delay product of the network is large, and at the same time, be highly stable and also fair to standard TCP flows. The implementation of CUBIC in Linux has gone through several upgrades. This paper documents its design, implementation, performance and evolution as the default TCP algorithm of Linux."
]
} |
1905.07152 | 2966686838 | Congestion control is an indispensable component of transport protocols to prevent congestion collapse. As such, it distributes the available bandwidth among all competing flows, ideally in a fair manner. However, there exists a constantly evolving set of congestion control algorithms, each addressing different performance needs and providing the potential for custom parametrizations. In particular, content providers such as CDNs are known to tune TCP stacks for performance gains. In this paper, we thus empirically investigate if current Internet traffic generated by content providers still adheres to the conventional understanding of fairness. Our study compares fairness properties of testbed hosts to actual traffic of six major content providers subject to different bandwidths, RTTs, queue sizes, and queueing disciplines in a home-user setting. We find that some employed congestion control algorithms lead to significantly asymmetric bandwidth shares; however, AQMs such as FQ_CoDel are able to alleviate such unfairness. | For BBR, less research exists, and the available studies partly disagree on its properties. This is especially true for intra-protocol fairness, as Cardwell et al. @cite_7 claim a high degree of fairness across the board, while Hock et al. @cite_4 identify scenarios where fairness is significantly impaired. Regarding RTT-fairness, it is commonly found that BBR has a proportional RTT-fairness property, i.e., a flow with a larger RTT gets a larger share of the available bandwidth @cite_7 @cite_16 @cite_20. Hock et al. @cite_4 generally confirm these findings, but by investigating two different bottleneck queue sizes they find that in scenarios with a smaller queue size (0.8 @math bandwidth delay product (BDP)), flows with a smaller RTT have a slight advantage, while in large buffer scenarios (8 @math BDP) the inverse is true and flows with larger RTTs have a significant advantage. | {
"cite_N": [
"@cite_20",
"@cite_16",
"@cite_4",
"@cite_7"
],
"mid": [
"2940580470",
"2745739520",
"2768913804",
"2939779488"
],
"abstract": [
"In 2016, Google published the bottleneck bandwidth and round-trip time (BBR) congestion control algorithm. Unlike established loss- or delay-based algorithms like CUBIC or Vegas, BBR claims to operate without creating packet loss or filling buffers. Because of these prospects and promising initial performance results, BBR has gained wide-spread attention. As such it has been subject to behavior and performance analysis, which confirmed the results, but also revealed critical flaws.Because BBR is still work in progress, measurement results have limited validity for the future. In this paper we present our publicly available framework for reproducible TCP measurements based on network emulation. In a case study, we analyze the TCP BBR algorithm, reproduce and confirm weaknesses of the current BBR implementation, and provide further insights. We also contribute an analysis of BBR’s inter-flow synchronization behavior, showing that it reaches fairness equilibrium for long lived flows.",
"BBR is a new congestion-based congestion control algorithm proposed by Google. A BBR flow sequentially measures the bottleneck bandwidth and round-trip delay of the network pipe, and uses the measured results to govern its sending behavior, maximizing the delivery bandwidth while minimizing the delay. However, our deployment in geo-distributed cloud servers reveals a severe RTT fairness problem: a BBR flow with longer RTT dominates a competing flow with shorter RTT. Somewhat surprisingly, our deployment of BBR on the Internet and an in-house cluster unearthed a consistent bandwidth disparity among competing flows. Long BBR flows are bound to seize bandwidth from short ones. Intrigued by this unexpected behavior, we ask, is the phenomenon intrinsic to BBR? how's the severity? and what's the root cause? To this end, we conduct thorough measurements and develop a theoretical model on bandwidth dynamics. We find, as long as the competing flows are of different RTTs, bandwidth disparities will arise. With an RTT ratio of 10, even flow starvation can happen. We blame it on BBR's connivance at sending an excessive amount of data when probing bandwidth. Specifically, the amount of data is in proportion to RTT, making long RTT flows overwhelming short ones. Based on this observation, we design a derivative of BBR that achieves guaranteed flow fairness, at the meantime without losing any merits. We have implemented our proposed solution in Linux kernel and evaluated it through extensive experiments.",
"BBR is a recently proposed congestion control. Instead of using packet loss as congestion signal, like many currently used congestion controls, it uses an estimate of the available bottleneck link bandwidth to determine its sending rate. BBR tries to provide high link utilization while avoiding to create queues in bottleneck buffers. The original publication of BBR shows that it can deliver superior performance compared to Cubic TCP in some environments. This paper provides an independent and extensive experimental evaluation of BBR at higher speeds. The experimental setup uses BBR's Linux kernel 4.9 implementation and typical data rates of 10Gbit s and 1 Gbit s at the bottleneck link. The experiments vary the flows' round-trip times, the number of flows, and buffer sizes at the bottleneck. The evaluation considers throughput, queuing delay, packet loss, and fairness. On the one hand, the intended behavior of BBR could be observed with our experiments. On the other hand, some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss were also detected. The paper provides an in-depth discussion of BBR's behavior in different experiment setups.",
"This document specifies the BBR congestion control algorithm. BBR uses recent measurements of a transport connection's delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it allows in flight in the network at any time. Relative to loss-based congestion control algorithms such as Reno [RFC5681] or CUBIC [draft-ietf-tcpm-cubic], BBR offers substantially higher throughput for bottlenecks with shallow buffers or random losses, and substantially lower queueing delays for bottlenecks with deep buffers (avoiding \"bufferbloat\"). This algorithm can be implemented in any transport protocol that supports packet-delivery acknowledgment (thus far, open source implementations are available for TCP [RFC793] and QUIC [draft-ietf- quic-transport-00])."
]
} |
1905.07152 | 2966686838 | Congestion control is an indispensable component of transport protocols to prevent congestion collapse. As such, it distributes the available bandwidth among all competing flows, ideally in a fair manner. However, there exists a constantly evolving set of congestion control algorithms, each addressing different performance needs and providing the potential for custom parametrizations. In particular, content providers such as CDNs are known to tune TCP stacks for performance gains. In this paper, we thus empirically investigate if current Internet traffic generated by content providers still adheres to the conventional understanding of fairness. Our study compares fairness properties of testbed hosts to actual traffic of six major content providers subject to different bandwidths, RTTs, queue sizes, and queueing disciplines in a home-user setting. We find that some employed congestion control algorithms lead to significantly asymmetric bandwidth shares; however, AQMs such as FQ_CoDel are able to alleviate such unfairness. | Inter-Protocol Fairness. While the intra-protocol and RTT-fairness of congestion control (CC) algorithms is important for a large scale-out of the algorithms, the inter-protocol fairness property sheds light on the coexisting use of different CC algorithms in the Internet. Unfortunately, several groups of researchers have found that BBR and Cubic do not cooperate well, as Cubic flows dominate BBR flows in scenarios with larger buffers (generally above 1 @math BDP) while the opposite is true for small buffer scenarios @cite_7 @cite_4 @cite_20. | {
"cite_N": [
"@cite_20",
"@cite_4",
"@cite_7"
],
"mid": [
"2940580470",
"2768913804",
"2939779488"
],
"abstract": [
"In 2016, Google published the bottleneck bandwidth and round-trip time (BBR) congestion control algorithm. Unlike established loss- or delay-based algorithms like CUBIC or Vegas, BBR claims to operate without creating packet loss or filling buffers. Because of these prospects and promising initial performance results, BBR has gained wide-spread attention. As such it has been subject to behavior and performance analysis, which confirmed the results, but also revealed critical flaws.Because BBR is still work in progress, measurement results have limited validity for the future. In this paper we present our publicly available framework for reproducible TCP measurements based on network emulation. In a case study, we analyze the TCP BBR algorithm, reproduce and confirm weaknesses of the current BBR implementation, and provide further insights. We also contribute an analysis of BBR’s inter-flow synchronization behavior, showing that it reaches fairness equilibrium for long lived flows.",
"BBR is a recently proposed congestion control. Instead of using packet loss as congestion signal, like many currently used congestion controls, it uses an estimate of the available bottleneck link bandwidth to determine its sending rate. BBR tries to provide high link utilization while avoiding to create queues in bottleneck buffers. The original publication of BBR shows that it can deliver superior performance compared to Cubic TCP in some environments. This paper provides an independent and extensive experimental evaluation of BBR at higher speeds. The experimental setup uses BBR's Linux kernel 4.9 implementation and typical data rates of 10Gbit s and 1 Gbit s at the bottleneck link. The experiments vary the flows' round-trip times, the number of flows, and buffer sizes at the bottleneck. The evaluation considers throughput, queuing delay, packet loss, and fairness. On the one hand, the intended behavior of BBR could be observed with our experiments. On the other hand, some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss were also detected. The paper provides an in-depth discussion of BBR's behavior in different experiment setups.",
"This document specifies the BBR congestion control algorithm. BBR uses recent measurements of a transport connection's delivery rate and round-trip time to build an explicit model that includes both the maximum recent bandwidth available to that connection, and its minimum recent round-trip delay. BBR then uses this model to control both how fast it sends data and the maximum amount of data it allows in flight in the network at any time. Relative to loss-based congestion control algorithms such as Reno [RFC5681] or CUBIC [draft-ietf-tcpm-cubic], BBR offers substantially higher throughput for bottlenecks with shallow buffers or random losses, and substantially lower queueing delays for bottlenecks with deep buffers (avoiding \"bufferbloat\"). This algorithm can be implemented in any transport protocol that supports packet-delivery acknowledgment (thus far, open source implementations are available for TCP [RFC793] and QUIC [draft-ietf- quic-transport-00])."
]
} |
1905.07039 | 2949484661 | In recent years, the use of bio-sensing signals such as electroencephalogram (EEG), electrocardiogram (ECG), etc. has garnered interest towards applications in affective computing. The parallel trend of deep-learning has led to a huge leap in performance towards solving various vision-based research problems such as object detection. Yet, these advances in deep-learning have not adequately translated into bio-sensing research. This work applies novel deep-learning-based methods to various bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first individually evaluate the emotion-classification performance obtained by each modality. We then evaluate the performance obtained by fusing the features from these modalities. We show that our algorithms outperform the results reported by other studies for emotion (valence/arousal/liking) classification on DEAP and MAHNOB-HCI datasets and set up benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate the performance of our algorithms by combining the datasets and by using transfer learning to show that the proposed method overcomes the inconsistencies between the datasets. Hence, we do a thorough analysis of multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, utilizing a convolution-deconvolution network, we propose a new technique towards identifying salient brain regions corresponding to various affective states. | Table shows that in almost all the cases, EEG has been the preferred bio-sensing modality, while the vision modality, i.e., the use of frontal videos to analyze facial expressions, has not been commonly used on these datasets. The classification accuracies for all emotion classes as per the circumplex model, rather than only for arousal/valence, are rarely reported.
In other cases, such as @cite_61 @cite_6, where the analysis of emotions is reported, the goal seems to be clustering the complete dataset into four classes rather than having distinct training and testing partitions for evaluation. | {
"cite_N": [
"@cite_61",
"@cite_6"
],
"mid": [
"2536442339",
"2222581876"
],
"abstract": [
"Recognition of discriminative brain functional network pattern and regions corresponding to emotions are important in understanding the neuron functional network underlying the human emotion process. Emotion models mapping onto brain is possible with the help of emotion-specific network patterns and its corresponding brain regions. This paper presents a method to identify emotion related functional connectivity pattern and their distinctive associated regions using EEG phase synchrony (phase locking value (PLV)) connectivity analysis. The emotion-specific channel pairs, reactive band, and synchrony related locations are identified based on the network dissimilarities between emotion and rest tasks. With the most reactive pairs identified, the emotion-specific functional network is formed. The proposed method is validated on ‘database for emotion analysis using physiological signals (DEAP)’ that confirms the distinct nature of identified functional connectivity pattern and the regions corresponding to the emotion.",
"The brain functional network perspective forms the basis to relate mechanisms of brain functions. This work analyzes the network mechanisms related to human emotion based on synchronization measure - phase-locking value in EEG to formulate the emotion specific brain functional network. Based on network dissimilarities between emotion and rest tasks, most reactive channel pairs and the reactive band corresponding to emotions are identified. With the identified most reactive pairs, the subject-specific functional network is formed. The identified subject-specific and emotion-specific dynamic network patterns show significant synchrony variation in line with the experiment protocol. The same network patterns are then employed for classification of emotions. With the study conducted on the 4 subjects, an average classification accuracy of 62% was obtained with the proposed technique."
]
} |
1905.07039 | 2949484661 | In recent years, the use of bio-sensing signals such as electroencephalogram (EEG), electrocardiogram (ECG), etc., has garnered interest towards applications in affective computing. The parallel trend of deep-learning has led to a huge leap in performance towards solving various vision-based research problems such as object detection. Yet, these advances in deep-learning have not adequately translated into bio-sensing research. This work applies novel deep-learning-based methods to various bio-sensing and video data of four publicly available multi-modal emotion datasets. For each dataset, we first individually evaluate the emotion-classification performance obtained by each modality. We then evaluate the performance obtained by fusing the features from these modalities. We show that our algorithms outperform the results reported by other studies for emotion/valence/arousal/liking classification on DEAP and MAHNOB-HCI datasets and set up benchmarks for the newer AMIGOS and DREAMER datasets. We also evaluate the performance of our algorithms by combining the datasets and by using transfer learning to show that the proposed method overcomes the inconsistencies between the datasets. Hence, we do a thorough analysis of multi-modal affective data from more than 120 subjects and 2,800 trials. Finally, utilizing a convolution-deconvolution network, we propose a new technique towards identifying salient brain regions corresponding to various affective states. | In terms of accuracy, we see from Table that using multiple sensor modalities, the best performance on the DEAP dataset is by @cite_41 when utilizing data from multiple modalities. For the MAHNOB-HCI dataset, the best accuracy for valence and arousal is 73%. This study will utilize complete datasets and not a subset of them, as in some previous studies. We evaluate our methods with disjoint partitions between training, validation, and test subsets of the complete datasets. 
Our evaluation is first reported for all modalities separately (including using frontal videos that were ignored by other studies) and then combining them together. Since not all datasets and previous studies report results on Dominance, we chose to classify valence, arousal, liking and emotions as the affective measures. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2565944610"
],
"abstract": [
"An ensemble of deep classifiers is built for recognizing emotions using multimodal physiological signals. The higher-level abstractions of physiological features of each modality are separately extracted by deep hidden neurons in member stacked-autoencoders. The minimal structure of the deep model is identified according to a structural loss function for local geometrical information preservation. The physiological feature abstractions are merged via an adjacent-graph based fusion network with hierarchical layers. Background and Objective: Using deep-learning methodologies to analyze multimodal physiological signals becomes increasingly attractive for recognizing human emotions. However, the conventional deep emotion classifiers may suffer from the drawback of the lack of the expertise for determining model structure and the oversimplification of combining multimodal feature abstractions. Methods: In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoder (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers to filter the unwanted noise in the physiological features and derives the stable feature representations. An additional deep model is used to achieve the SAE ensembles. The physiological features are split into several subsets according to different feature extraction approaches with each subset separately encoded by a SAE. The derived SAE abstractions are combined according to the physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. Results: The DEAP multimodal database was employed to validate the performance of the MESAE. By comparing with the best existing emotion classifier, the mean of classification rate and F-score improves by 5.26%. 
Conclusions: The superiority of the MESAE against the state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances."
]
} |
1905.07058 | 2946012704 | Gait recognition using noninvasively acquired data has been attracting an increasing interest in the last decade. Among various modalities of data sources, it is experimentally found that the data involving skeletal representation are amenable for reliable feature compaction and fast processing. Model-based gait recognition methods that exploit features from a fitted model, like skeleton, are recognized for their view and scale-invariant properties. We propose a model-based gait recognition method, using sequences recorded by a single flash lidar. Existing state-of-the-art model-based approaches that exploit features from high quality skeletal data collected by Kinect and Mocap are limited to controlled laboratory environments. The performance of conventional research efforts is negatively affected by poor data quality. We address the problem of gait recognition under challenging scenarios, such as the lower quality and noisy imaging process of lidar, that degrades the performance of state-of-the-art skeleton-based systems. We present GlidarCo to attain high accuracy on gait recognition under the described conditions. A filtering mechanism corrects faulty skeleton joint measurements, and robust statistics are integrated with conventional feature moments to encode the dynamics of the motion. As a comparison, length-based and vector-based features extracted from the noisy skeletons are investigated for outlier removal. Experimental results illustrate the efficacy of the proposed methodology in improving gait recognition given noisy low resolution lidar data. | This paper is an extension of our previous work @cite_33 , with an improvement on joint coordinate filtering and per-frame identification. Furthermore, we propose a new method to integrate the dynamics of the motion. We also present an outlier removal method for vector-based feature vectors that can be employed in applications where data removal is not an issue. 
In addition, we evaluate the proposed methodologies on two different feature vectors and compare them with additional relevant state-of-the-art methods. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2943148947"
],
"abstract": [
"Gait recognition is a leading remote-based identification method, suitable for real-world surveillance and medical applications. Model-based gait recognition methods have been particularly recognized due to their scale and view-invariant properties. We present the first model-based gait recognition methodology, @math lidar3DJ using a skeleton model extracted from sequences generated by a single flash lidar camera. Existing successful model-based approaches take advantage of high quality skeleton data collected by Kinect and Mocap, for example, are not practicable for applications outside the laboratory. The low resolution and noisy imaging process of lidar negatively affects the performance of state-of-the-art skeleton-based systems, generating a significant number of outlier skeletons. We propose a rule-based filtering mechanism that adopts robust statistics to correct for skeleton joint measurements. Quantitative measurements validate the efficacy of the proposed method in improving gait recognition."
]
} |
1905.07107 | 2944868848 | This paper considers the real-time detection of anomalies in high-dimensional systems. The goal is to detect anomalies quickly and accurately so that the appropriate countermeasures could be taken in time, before the system possibly gets harmed. We propose a sequential and multivariate anomaly detection method that scales well to high-dimensional datasets. The proposed method follows a nonparametric, i.e., data-driven, and semi-supervised approach, i.e., trains only on nominal data. Thus, it is applicable to a wide range of applications and data types. Thanks to its multivariate nature, it can quickly and accurately detect challenging anomalies, such as changes in the correlation structure and stealth low-rate cyberattacks. Its asymptotic optimality and computational complexity are comprehensively analyzed. In conjunction with the detection method, an effective technique for localizing the anomalous data dimensions is also proposed. We further extend the proposed detection and localization methods to a supervised setup where an additional anomaly dataset is available, and combine the proposed semi-supervised and supervised algorithms to obtain an online learning algorithm under the semi-supervised framework. The practical use of the proposed algorithms is demonstrated in DDoS attack mitigation, and their performance is evaluated using a real IoT-botnet dataset and simulations. | The problem of anomaly detection has been an important subject of study in several research communities such as statistics, signal processing, machine learning, information theory, data mining, etc., either specifically for an application domain or as a generic method. To name a few, an SVM classification approach for anomaly detection was proposed in @cite_30 ; several information theoretic measures were proposed in @cite_35 for the intrusion detection problem; and two new information metrics for DDoS attack detection were introduced in @cite_7 . 
Due to the challenging nature of the problem, and considering the difficulties posed by today's technological advances such as big data, there is still a need to revisit the anomaly detection problem. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_7"
],
"mid": [
"2167717760",
"2096847629",
"2119227347"
],
"abstract": [
"One way to describe anomalies is by saying that anomalies are not concentrated. This leads to the problem of finding level sets for the data generating density. We interpret this learning problem as a binary classification problem and compare the corresponding classification risk with the standard performance measure for the density level problem. In particular it turns out that the empirical classification risk can serve as an empirical performance measure for the anomaly detection problem. This allows us to compare different anomaly detection algorithms empirically, i.e. with the help of a test set. Furthermore, by the above interpretation we can give a strong justification for the well-known heuristic of artificially sampling \"labeled\" samples, provided that the sampling plan is well chosen. In particular this enables us to propose a support vector machine (SVM) for anomaly detection for which we can easily establish universal consistency. Finally, we report some experiments which compare our SVM to other commonly used methods including the standard one-class SVM.",
"Anomaly detection is an essential component of protection mechanisms against novel attacks. We propose to use several information-theoretic measures, namely, entropy, conditional entropy, relative conditional entropy, information gain, and information cost for anomaly detection. These measures can be used to describe the characteristics of an audit data set, suggest the appropriate anomaly detection model(s) to be built, and explain the performance of the model(s). We use case studies on Unix system call data, BSM data, and network tcpdump data to illustrate the utilities of these measures.",
"A low-rate distributed denial of service (DDoS) attack has significant ability of concealing its traffic because it is very much like normal traffic. It has the capacity to elude the current anomaly-based detection schemes. An information metric can quantify the differences of network traffic with various probability distributions. In this paper, we innovatively propose using two new information metrics such as the generalized entropy metric and the information distance metric to detect low-rate DDoS attacks by measuring the difference between legitimate traffic and attack traffic. The proposed generalized entropy metric can detect attacks several hops earlier (three hops earlier while the order α = 10 ) than the traditional Shannon metric. The proposed information distance metric outperforms (six hops earlier while the order α = 10) the popular Kullback-Leibler divergence approach as it can clearly enlarge the adjudication distance and then obtain the optimal detection sensitivity. The experimental results show that the proposed information metrics can effectively detect low-rate DDoS attacks and clearly reduce the false positive rate. Furthermore, the proposed IP traceback algorithm can find all attacks as well as attackers from their own local area networks (LANs) and discard attack traffic."
]
} |
1905.06883 | 2945574655 | Process consistency checking (PCC), an interdiscipline of natural language processing (NLP) and business process management (BPM), aims to quantify the degree of (in)consistencies between graphical and textual descriptions of a process. However, previous studies heavily depend on a great deal of complex expert-defined knowledge such as alignment rules and assessment metrics, and thus suffer from the problems of low accuracy and poor adaptability when applied in open-domain scenarios. To address the above issues, this paper makes the first attempt to use deep learning to perform PCC. Specifically, we propose TraceWalk, which uses semantic information of process graphs to learn latent node representations, and integrate it into a convolutional neural network (CNN) based model called TraceNet to predict consistencies. The theoretical proof formally provides the PCC's lower limit, and experimental results demonstrate that our approach performs more accurately than state-of-the-art baselines. | Recently, some NLP techniques have been applied to address a variety of use cases in the context of BPM. This includes a variety of works that focus on process graph labels, for example by annotating process graph elements and correcting linguistic guideline violations @cite_19 , investigating the problem of mixing graphical and textual languages @cite_1 , or resolving lexical ambiguities in process graph labels @cite_25 @cite_11 . Other use cases involve process text generation @cite_6 or process graph extraction @cite_4 . However, these methods have been found to produce inaccurate results and to require extensive manual involvement. | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2287187499",
"2618442697",
"2794405265",
"2065269465",
"2515798589"
],
"abstract": [
"",
"Business process modeling has become an integral part of many organizations for documenting and redesigning complex organizational operations. However, the increasing size of process model repositories calls for automated quality assurance techniques. While many aspects such as formal and structural problems are well understood, there is only a limited understanding of semantic issues caused by natural language. One particularly severe problem arises when modelers employ natural language for expressing control-flow constructs such as gateways or loops. This may not only negatively affect the understandability of process models, but also the performance of analysis tools, which typically assume that process model elements do not encode control-flow related information in natural language. In this paper, we aim at increasing the current understanding of mixing natural and modeling language and therefore exploratively investigate three process model collections from practice. As a result, we identify a set of nine anti patterns for mixing natural and modeling language.",
"Business processes are normally managed by designing, operating and analysing corresponding process models. While delivering these process models, an understanding gap arises depending on the degree of different users’ familiarity with modeling languages, which may slow down or even stop the normal functioning of processes. Therefore, a method for automatically generating texts from process models was proposed. However, the current method just involves ordinary model patterns so that the coverage of the generated text is too low and information loss exists. In this paper, we propose an improved transformation algorithm named Goun to tackle this problem of describing the process models automatically. The experimental results demonstrate that the Goun algorithm not only supports more elements and complex structures, but also remarkably improves the coverage of generated text.",
"",
"System-related engineering tasks are often conducted using process models. In this context, it is essential that these models do not contain structural or terminological inconsistencies. To this end, several automatic analysis techniques have been proposed to support quality assurance. While formal properties of control flow can be checked in an automated fashion, there is a lack of techniques addressing textual quality. More specifically, there is currently no technique available for handling the issue of lexical ambiguity caused by homonyms and synonyms. In this paper, we address this research gap and propose a technique that detects and resolves lexical ambiguities in process models. We evaluate the technique using three process model collections from practice varying in size, domain, and degree of standardization. The evaluation demonstrates that the technique significantly reduces the level of lexical ambiguity and that meaningful candidates are proposed for resolving ambiguity.",
"Textual process descriptions are widely used in organizations since they can be created and understood by virtually everyone. The inherent ambiguity of natural language, however, impedes the automated analysis of textual process descriptions. While human readers can use their context knowledge to correctly understand statements with multiple possible interpretations, automated analysis techniques currently have to make assumptions about the correct meaning. As a result, automated analysis techniques are prone to draw incorrect conclusions about the correct execution of a process. To overcome this issue, we introduce the concept of a behavioral space as a means to deal with behavioral ambiguity in textual process descriptions. A behavioral space captures all possible interpretations of a textual process description in a systematic manner. Thus, it avoids the problem of focusing on a single interpretation. We use a compliance checking scenario and a quantitative evaluation with a set of 47 textual process descriptions to demonstrate the usefulness of a behavioral space for reasoning about a process described by a text. Our evaluation demonstrates that a behavioral space strikes a balance between ignoring ambiguous statements and imposing fixed interpretations on them."
]
} |
1905.06883 | 2945574655 | Process consistency checking (PCC), an interdiscipline of natural language processing (NLP) and business process management (BPM), aims to quantify the degree of (in)consistencies between graphical and textual descriptions of a process. However, previous studies heavily depend on a great deal of complex expert-defined knowledge such as alignment rules and assessment metrics, and thus suffer from the problems of low accuracy and poor adaptability when applied in open-domain scenarios. To address the above issues, this paper makes the first attempt to use deep learning to perform PCC. Specifically, we propose TraceWalk, which uses semantic information of process graphs to learn latent node representations, and integrate it into a convolutional neural network (CNN) based model called TraceNet to predict consistencies. The theoretical proof formally provides the PCC's lower limit, and experimental results demonstrate that our approach performs more accurately than state-of-the-art baselines. | Along this line, alignment-based PCC methods designed various alignment rules between process graphs and texts. We summarize and show the procedures they use in Table . @cite_2 (language-analysis based) set out to create an action-sentence correspondence relation between an action of a process graph and a sentence of a process text through linguistic analysis, similarity computation and best-first searching. @cite_14 (language-analysis based) extended previous work in order to detect missing actions and conflicting orders, and can thus detect inconsistencies in a much more fine-granular manner. @cite_17 (manual-feature based) extended the linguistic analysis and encoded the problem of computing an alignment as the resolution of an integer linear programming problem. 
@cite_3 (manual-feature based) extracted features that correspond to important process-related information and used so-called predictors to detect whether a provided model-text pair is likely to contain inconsistencies. These methods require language analysis, manual feature extraction, sentence similarity computation, etc. They depend heavily on existing language-analysis tools, which often results in weak generalization and adaptability. | {
"cite_N": [
"@cite_14",
"@cite_17",
"@cite_3",
"@cite_2"
],
"mid": [
"2484703597",
"2617792510",
"",
"1924512274"
],
"abstract": [
"Many organizations maintain textual process descriptions alongside graphical process models. The purpose is to make process information accessible to various stakeholders, including those who are not familiar with reading and interpreting the complex execution logic of process models. Despite this merit, there is a clear risk that model and text become misaligned when changes are not applied to both descriptions consistently. For organizations with hundreds of different processes, the effort required to identify and clear up such conflicts is considerable. To support organizations in keeping their process descriptions consistent, we present an approach to automatically identify inconsistencies between a process model and a corresponding textual description. Our approach detects cases where the two process representations describe activities in different orders and detects process model activities not contained in the textual description. A quantitative evaluation with 53 real-life model-text pairs demonstrates that our approach accurately identifies inconsistencies between model and text. Highlights: We propose an approach to detect conflicts between textual and model-based process descriptions. The approach is fully automatic based on tailored natural language processing techniques. Quantitative evaluation demonstrates the applicability of the approach on real-life data.",
"With the aim of having individuals from different backgrounds and expertise levels examine the operations in an organization, different representations of business processes are maintained. To have these different representations aligned is not only a desired feature, but also a real challenge due to the contrasting nature of each process representation. In this paper we present an efficient technique for aligning a textual description and a graphical model of a process. The technique is grounded on using natural language processing techniques to extract linguistic features of each representation, and encode the search as a mathematical optimization encoded using Integer Linear Programming (ILP) whose resolution ensures an optimal alignment between both descriptions. The technique has been implemented and the experiments witness the significance of the approach with respect to the state-of-the-art technique for the same task.",
"",
"Text-based and model-based process descriptions have their own particular strengths and, as such, appeal to different stakeholders. For this reason, it is not unusual to find within an organization descriptions of the same business processes in both modes. When considering that hundreds of such descriptions may be in use in a particular organization by dozens of people, using a variety of editors, there is a clear risk that such models become misaligned. To reduce the time and effort needed to repair such situations, this paper presents the first approach to automatically identify inconsistencies between a process model and a corresponding textual description. Our approach leverages natural language processing techniques to identify cases where the two process representations describe activities in different orders, as well as model activities that are missing from the textual description. A quantitative evaluation with 46 real-life model-text pairs demonstrates that our approach allows users to quickly and effectively identify those descriptions in a process repository that are inconsistent."
]
} |
1905.06533 | 2945743426 | Abstract The rapid population aging has stimulated the development of assistive devices that provide personalized medical support to the needy suffering from various etiologies. One prominent clinical application is a computer-assisted speech training system which enables personalized speech therapy to patients impaired by communicative disorders in the patient’s home environment. Such a system relies on the robust automatic speech recognition (ASR) technology to be able to provide accurate articulation feedback. With the long-term aim of developing off-the-shelf ASR systems that can be incorporated in a clinical context without prior speaker information, we compare the ASR performance of speaker-independent bottleneck and articulatory features on dysarthric speech used in conjunction with dedicated neural network-based acoustic models that have been shown to be robust against spectrotemporal deviations. We report ASR performance of these systems on two dysarthric speech datasets of different characteristics to quantify the achieved performance gains. Despite the remaining performance gap between the dysarthric and normal speech, significant improvements have been reported on both datasets using speaker-independent ASR architectures. | There have been numerous efforts to build ASR systems operating on pathological speech. @cite_6 has reported the ASR performance on Cantonese aphasic speech and disordered voice. A DNN-HMM system provided significant improvements on disordered voice and minor improvements on aphasic speech compared to a GMM-HMM system. @cite_24 proposed a feature extraction scheme using convolutional bottleneck networks for dysarthric speech recognition. They tested the proposed approach on a small test set consisting of 3 repetitions of 216 words by a single male speaker with an articulation disorder and reported some gains over a system using MFCC features. 
In a recent work, @cite_55 investigated convolutional long short-term memory (LSTM) networks on dysarthric speech from 9 speakers. They reported improved ASR accuracies compared to CNN- and LSTM-based acoustic models. | {
"cite_N": [
"@cite_24",
"@cite_55",
"@cite_6"
],
"mid": [
"2217954322",
"2889469831",
"2404317780"
],
"abstract": [
"In this paper, we investigate the recognition of speech uttered by a person with an articulation disorder resulting from athetoid cerebral palsy based on a robust feature extraction method using pre-trained convolutive bottleneck networks (CBN). Generally speaking, the amount of speech data obtained from a person with an articulation disorder is limited because their burden is large due to strain on the speech muscles. Therefore, a trained CBN tends toward overfitting for a small corpus of training data. In our previous work, the experimental results showed speech recognition using features extracted from CBNs outperformed conventional features. However, the recognition accuracy strongly depends on the initial values of the convolution kernels. To prevent overfitting in the networks, we introduce in this paper a pre-training technique using a convolutional restricted Boltzmann machine (CRBM). Through word-recognition experiments, we confirmed its superiority in comparison to convolutional networks without pre-training.",
"",
"This paper describes the application of state-of-the-art automatic speech recognition (ASR) systems to objective assessment of voice and speech disorders. Acoustical analysis of speech has long been considered a promising approach to non-invasive and objective assessment of people. In the past the types and amount of speech materials used for acoustical assessment were very limited. With the ASR technology, we are able to perform acoustical and linguistic analyses with a large amount of natural speech from impaired speakers. The present study is focused on Cantonese, which is a major Chinese dialect. Two representative disorders of speech production are investigated: dysphonia and aphasia. ASR experiments are carried out with continuous and spontaneous speech utterances from Cantonese-speaking patients. The results confirm the feasibility and potential of using natural speech for acoustical assessment of voice and speech disorders, and reveal the challenging issues in acoustic modeling and language modeling of pathological speech."
]
} |
1905.06533 | 2945743426 | Abstract The rapid population aging has stimulated the development of assistive devices that provide personalized medical support to the needy suffering from various etiologies. One prominent clinical application is a computer-assisted speech training system which enables personalized speech therapy to patients impaired by communicative disorders in the patient’s home environment. Such a system relies on the robust automatic speech recognition (ASR) technology to be able to provide accurate articulation feedback. With the long-term aim of developing off-the-shelf ASR systems that can be incorporated in a clinical context without prior speaker information, we compare the ASR performance of speaker-independent bottleneck and articulatory features on dysarthric speech used in conjunction with dedicated neural network-based acoustic models that have been shown to be robust against spectrotemporal deviations. We report ASR performance of these systems on two dysarthric speech datasets of different characteristics to quantify the achieved performance gains. Despite the remaining performance gap between the dysarthric and normal speech, significant improvements have been reported on both datasets using speaker-independent ASR architectures. | Shahamiri and Salim @cite_42 proposed an artificial neural network-based system trained on digit utterances from nine non-dysarthric and 13 dysarthric individuals affected by Cerebral Palsy (CP). @cite_23 trained their models solely on 18 hours of speech from 15 dysarthric speakers with CP, leaving one speaker out as the test set. Rudzicz @cite_40 compared the performance of a speaker-dependent and a speaker-adaptive GMM-HMM system on the Nemours database @cite_22 . Later, Rudzicz @cite_30 tried using AF together with conventional acoustic features for phone classification experiments on dysarthric speech. 
Mengistu and Rudzicz @cite_38 combined dysarthric data from eight dysarthric speakers with that of seven normal speakers, leaving one out as a test set, and obtained an average increase of 13.0%. In one of the earliest works on Dutch pathological speech, @cite_19 presented a pilot study on ASR of Dutch dysarthric speech data obtained from two speakers with a birth defect and a cerebrovascular accident. Both speakers were classified as mildly dysarthric by a speech pathologist. @cite_60 proposed a weighted finite state transducer (WFST)-based correction technique applied to the output of a trained ASR system. Similar work had previously been proposed by Caballero-Morales and Cox @cite_28 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_22",
"@cite_60",
"@cite_28",
"@cite_42",
"@cite_19",
"@cite_40",
"@cite_23"
],
"mid": [
"2151936436",
"2109848220",
"2140360678",
"28792051",
"2019657476",
"2026303233",
"1568802241",
"1998988061",
"2401277329"
],
"abstract": [
"Disabled speech is not compatible with modern generative and acoustic-only models of speech recognition (ASR). This work considers the use of theoretical and empirical knowledge of the vocal tract for atypical speech in labeling segmented and unsegmented sequences. These combined models are compared against discriminative models such as neural networks, support vector machines, and conditional random fields. Results show significant improvements in accuracy over the baseline through the use of production knowledge. Furthermore, although the statistics of vocal tract movement do not appear to be transferable between regular and disabled speakers, transforming the space of the former given knowledge of the latter before retraining gives high accuracy. This work may be applied within components of assistive software for speakers with dysarthria.",
"Dysarthria is a motor speech disorder resulting from neurological damage to the part of the brain that controls the physical production of speech. It is, in part, characterized by pronunciation errors that include deletions, substitutions, insertions, and distortions of phonemes. These errors follow consistent intra-speaker patterns that we exploit through acoustic and lexical model adaptation to improve automatic speech recognition (ASR) on dysarthric speech. We show that acoustic model adaptation yields an average relative word error rate (WER) reduction of 36.99 and that pronunciation lexicon adaptation (PLA) further reduces the relative WER by an average of 8.29 on a large vocabulary task of over 1500 words for six speakers with severe to moderate dysarthria. PLA also shows an average relative WER reduction of 7.11 on speaker-dependent models evaluated using 5-fold cross-validation.",
"The Nemours database is a collection of 814 short nonsense sentences; 74 sentences spoken by each of 11 male speakers with varying degrees of dysarthria. Additionally, the database contains two connected-speech paragraphs produced by each of the 11 speakers. The database was designed to test the intelligibility of dysarthric speech before and after enhancement by various signal processing methods, and is available on CD-ROM. It can also be used to investigate general characteristics of dysarthric speech such as production error patterns. The entire database has been marked at the word level and sentences for 10 of the 11 talkers have been marked at the phoneme level as well. The paper describes the database structure and techniques adopted to improve the performance of a Discrete Hidden Markov Model (DHMM) labeler used to assign initial phoneme labels to the elements of the database. These techniques may be useful in the design of automatic recognition systems for persons with speech disorders, especially when limited amounts of training data are available.",
"In this paper, we propose a dysarthric speech recognition error correction method based on weighted finite state transducers (WFSTs). First, the proposed method constructs a context---dependent (CD) confusion matrix by aligning a recognized word sequence with the corresponding reference sequence at a phoneme level. However, because the dysarthric speech database is too insufficient to reflect all combinations of context---dependent phonemes, the CD confusion matrix can be underestimated. To mitigate this underestimation problem, the CD confusion matrix is interpolated with a context---independent (CI) confusion matrix. Finally, WFSTs based on the interpolated CD confusion matrix are built and integrated with a dictionary and language model transducers in order to correct speech recognition errors. The effectiveness of the proposed method is demonstrated by performing speech recognition using the proposed error correction method incorporated with the CD confusion matrix. It is shown from the speech recognition experiment that the average word error rate (WER) of a speech recognition system employing the proposed error correction method with the CD confusion matrix is relatively reduced by 13.68 and 5.93 , compared to those of the baseline speech recognition system and the error correction method with the CI confusion matrix, respectively.",
"Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of \"metamodels\" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.",
"Dysarthria is a neurological impairment of controlling the motor speech articulators that compromises the speech signal. Automatic Speech Recognition (ASR) can be very helpful for speakers with dysarthria because the disabled persons are often physically incapacitated. Mel-Frequency Cepstral Coefficients (MFCCs) have been proven to be an appropriate representation of dysarthric speech, but the question of which MFCC-based feature set represents dysarthric acoustic features most effectively has not been answered. Moreover, most of the current dysarthric speech recognisers are either speaker-dependent (SD) or speaker-adaptive (SA), and they perform poorly in terms of generalisability as a speaker-independent (SI) model. First, by comparing the results of 28 dysarthric SD speech recognisers, this study identifies the best-performing set of MFCC parameters, which can represent dysarthric acoustic features to be used in Artificial Neural Network (ANN)-based ASR. Next, this paper studies the application of ANNs as a fixed-length isolated-word SI ASR for individuals who suffer from dysarthria. The results show that the speech recognisers trained by the conventional 12 coefficients MFCC features without the use of delta and acceleration features provided the best accuracy, and the proposed SI ASR recognised the speech of the unforeseen dysarthric evaluation subjects with word recognition rate of 68.38 .",
"This paper describes a feasibility study into automatic recognition of Dutch dysarthric speech. Recognition experiments with speaker independent and speaker dependent models are compared, for tasks with different perplexities. The results show that speaker dependent speech recognition for dysarthric speakers is very well possible, even for higher perplexity tasks.",
"Acoustic modeling of dysarthric speech is complicated by its increased intra- and inter-speaker variability. The accuracy of speaker-dependent and speaker-adaptive models are compared for this task, with the latter prevailing across varying levels of speaker intelligibility.",
"Speech-driven assistive technology can be an attractive alternative to conventional interfaces for people with physical disabilities. However, often the lack of motor-control of the speech articulators results in disordered speech, as condition known as dysarthria. Dysarthric speakers can generally not obtain satisfactory performances with off-the-shelf automatic speech recognition (ASR) products and disordered speech ASR is an increasingly active research area. Sparseness of suitable data is a big challenge. The experiments described here use UAspeech, one of the largest dysarthric databases available, which is still easily an order of magnitude smaller than typical speech databases. This study investigates how far fundamental training and adaptation techniques developed in the LVCSR community can take us. A variety of ASR systems using maximum likelihood and MAP adaptation strategies are established with all speakers obtaining significant improvements compared to the baseline system regardless of the severity of their condition. The best systems show on average 34 relative improvement on known published results. An analysis of the correlation between intelligibility of the speaker and the type of system which would represent an optimal operating point in terms of performance shows that for severely dysarthric speakers, the exact choice of system configuration is more critical than for speakers with less disordered speech."
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | @cite_7 further explored NVIDIA's architecture by adding navigational commands to incorporate the driver's intent into the system, and predicted both steering angle and acceleration. The authors proposed two network architectures: a branched architecture and a command input architecture. The branched architecture used the navigational input as a switch between a CNN and three fully connected networks, each specialized to a single intersection action, while the command input architecture concatenated the navigational command with the output of the CNN, connected to a single fully connected network. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2760878839"
],
"abstract": [
"Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1 5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at this https URL"
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | @cite_2 proposed using the car's turn signals as control commands to incorporate navigational intent into the network. Furthermore, they proposed a modified network architecture to improve driving accuracy. They used a CNN that receives an image and a turn indicator as input, such that the model could be controlled in real time. To handle sharp turns and obstacles along the road, the authors proposed using images recorded several meters back to obtain a spatial history of the environment.
Images captured 4 and 8 meters behind the current position were added as an input to make up for the limited vision from a single centered camera. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2790640584"
],
"abstract": [
"Navigation and obstacle avoidance are two problems that are not easily incorporated into direct control of autonomous vehicles solely based on visual input. However, they are required if lane following given proper lane markings is not enough to incorporate trained systems into larger architectures. We present a method to allow for obstacle avoidance while driving using a single, front-facing camera as well as navigation capabilities such as taking turns at junctions and lane changes by feeding turn indicator signals into a Convolutional Neural Network. Both situations share the difficulty intrinsic to single camera setups of limited field of views. This problem is handled by using a spatial history of input images to extend the field of view regarding static obstacles. The trained model, referred to as DriveNet, is evaluated in real world driving scenarios, using the same model for lateral vehicle control to both dynamically drive around obstacles as well as perform lane changing and turning in intersections."
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | Eraqi et al. @cite_9 tried to utilize the temporal dependencies by combining a CNN with a Long Short-Term Memory Neural Network. Their results showed that the C-LSTM improved the angle prediction accuracy by 35%. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2761595090"
],
"abstract": [
"Steering a car through traffic is a complex task that is difficult to cast into algorithms. Therefore, researchers turn to training artificial neural networks from front-facing camera data stream along with the associated steering angles. Nevertheless, most existing solutions consider only the visual camera frames as input, thus ignoring the temporal relationship between frames. In this work, we propose a Convolutional Long Short-Term Memory Recurrent Neural Network (C-LSTM), that is end-to-end trainable, to learn both visual and dynamic temporal dependencies of driving. Additionally, We introduce posing the steering angle regression problem as classification while imposing a spatial relationship between the output layer neurons. Such method is based on learning a sinusoidal function that encodes steering angles. To train and validate our proposed methods, we used the publicly available Comma.ai dataset. Our solution improved steering root mean square error by 35 over recent methods, and led to a more stable steering by 87 ."
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | The implemented models in this paper are based on the architecture in @cite_8 . The authors were able to use a CNN to drive on trafficked roads with and without lane markings, in parking lots, and on unpaved roads. This is consistent with this paper's results. Even though the implemented models were not tested on unmarked roads or in parking lots, they were able to drive on roads with lane markings, both with and without pavements. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2342840547"
],
"abstract": [
"We trained a convolutional neural network (CNN) to map raw pixels from a single front-facing camera directly to steering commands. This end-to-end approach proved surprisingly powerful. With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways. It also operates in areas with unclear visual guidance such as in parking lots and on unpaved roads. The system automatically learns internal representations of the necessary processing steps such as detecting useful road features with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. Compared to explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously. We argue that this will eventually lead to better performance and smaller systems. Better performance will result because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation which doesn't automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps. We used an NVIDIA DevBox and Torch 7 for training and an NVIDIA DRIVE(TM) PX self-driving car computer also running Torch 7 for determining where to drive. The system operates at 30 frames per second (FPS)."
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | Codevilla et al. @cite_7 claimed that their command input architecture performed inadequately when executing navigational commands. This does not agree with the results of this paper. The proposed architecture takes the navigational command as input after the CNN, in a similar manner to the command input architecture, but was able to execute the given navigational commands with a high degree of success. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2760878839"
],
"abstract": [
"Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1 5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at this https URL"
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | In @cite_2 the turn indicators of the car were used as the navigational commands, which were sent as input to the network. The authors did not use an RNN, but fed three subsequent images to three CNNs and concatenated the output. It was able to perform lane following, avoid obstacles, and change lanes. The navigational commands in this paper were introduced to the network in a similar way, and both approaches were able to execute the navigational commands.
The proposed system was not tested on lane changes, but the success of similar approaches indicates that this should be possible. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2790640584"
],
"abstract": [
"Navigation and obstacle avoidance are two problems that are not easily incorporated into direct control of autonomous vehicles solely based on visual input. However, they are required if lane following given proper lane markings is not enough to incorporate trained systems into larger architectures. We present a method to allow for obstacle avoidance while driving using a single, front-facing camera as well as navigation capabilities such as taking turns at junctions and lane changes by feeding turn indicator signals into a Convolutional Neural Network. Both situations share the difficulty intrinsic to single camera setups of limited field of views. This problem is handled by using a spatial history of input images to extend the field of view regarding static obstacles. The trained model, referred to as DriveNet, is evaluated in real world driving scenarios, using the same model for lateral vehicle control to both dynamically drive around obstacles as well as perform lane changing and turning in intersections."
]
} |
1905.06712 | 2944953725 | In recent years, considerable progress has been made towards a vehicle's ability to operate autonomously. An end-to-end approach attempts to achieve autonomous driving using a single, comprehensive software component. Recent breakthroughs in deep learning have significantly increased end-to-end systems' capabilities, and such systems are now considered a possible alternative to the current state-of-the-art solutions. This paper examines end-to-end learning for autonomous vehicles in simulated urban environments containing other vehicles, traffic lights, and speed limits. Furthermore, the paper explores end-to-end systems' ability to execute navigational commands and examines whether improved performance can be achieved by utilizing temporal dependencies between subsequent visual cues. Two end-to-end architectures are proposed: a traditional Convolutional Neural Network and an extended design combining a Convolutional Neural Network with a recurrent layer. The models are trained using expert driving data from a simulated urban setting, and are evaluated by their driving performance in an unseen simulated environment. The results of this paper indicate that end-to-end systems can operate autonomously in simple urban environments. Moreover, it is found that the exploitation of temporal information in subsequent images enhances a system's ability to judge movement and distance. | The CNN model in this paper was extended with an LSTM to utilize temporal dependencies. A similar approach was attempted in @cite_9 . They showed that adding temporal dependencies improved both the accuracy and stability of a model using a single CNN. Similar results can be seen in this paper. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2761595090"
],
"abstract": [
"Steering a car through traffic is a complex task that is difficult to cast into algorithms. Therefore, researchers turn to training artificial neural networks from front-facing camera data stream along with the associated steering angles. Nevertheless, most existing solutions consider only the visual camera frames as input, thus ignoring the temporal relationship between frames. In this work, we propose a Convolutional Long Short-Term Memory Recurrent Neural Network (C-LSTM), that is end-to-end trainable, to learn both visual and dynamic temporal dependencies of driving. Additionally, We introduce posing the steering angle regression problem as classification while imposing a spatial relationship between the output layer neurons. Such method is based on learning a sinusoidal function that encodes steering angles. To train and validate our proposed methods, we used the publicly available Comma.ai dataset. Our solution improved steering root mean square error by 35 over recent methods, and led to a more stable steering by 87 ."
]
} |
1905.06679 | 2961814621 | We consider an energy harvesting source equipped with a finite battery, which needs to send timely status updates to a remote destination. The timeliness of status updates is measured by a non-decreasing penalty function of the age of information (AoI). The problem is to find a policy for generating updates that achieves the lowest possible time-average expected age penalty among all online policies. We prove that one optimal solution of this problem is a monotone threshold policy, which satisfies (i) each new update is sent out only when the age is higher than a threshold and (ii) the threshold is a non-increasing function of the instantaneous battery level. Let τB denote the optimal threshold corresponding to the full battery level B, and p(·) denote the age-penalty function, then we can show that p(τB) is equal to the optimum objective value, i.e., the minimum achievable time-average expected age penalty. These structural properties are used to develop an algorithm to compute the optimal thresholds. Our numerical analysis indicates that the improvement in average age with added battery capacity is largest at small battery sizes; specifically, more than half the total possible reduction in age is attained when battery storage increases from one transmission's worth of energy to two. This encourages further study of status update policies for sensors with small battery storage. | The problem in @cite_11 was extended to a continuous-time formulation with Poisson energy arrivals, finite energy storage (battery) capacity, and random packet errors in the channel in @cite_21 . An age-optimal threshold policy was proposed for the unit battery case, and the achievable AoI for arbitrary battery size was bounded for a channel with a constant packet erasure probability. 
The concurrent study in @cite_1 , limited to the special cases of unit and infinite battery capacity, computed the same threshold-type policies under these assumptions. These special cases were also investigated for noisy channels with a constant packet erasure probability in @cite_37 @cite_16 . The case of a battery with a capacity of two units was studied in @cite_29 , and the optimal policies for this case were characterized as threshold-type policies similar to the optimal policy for unit battery capacity introduced in @cite_21 and @cite_1 . Optimal policies for arbitrary battery sizes were characterized via a Lagrangian approach in @cite_25 and using optimal stopping theory in @cite_20 . | {
"cite_N": [
"@cite_37",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_16",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2887762842",
"2898389279",
"2963053102",
"2649848418",
"",
"2808948305",
"2962999278",
"1932416753"
],
"abstract": [
"In this paper, we consider a status monitoring system where an energy harvesting sensor continuously sends time-stamped status updates to a destination. With a non-zero probability, each update will be corrupted by noise and result in an updating failure. The destination keeps track of the system status through the successfully delivered updates. We assume that there is a perfect feedback channel between the destination and the source, so that the source is aware of the updating failures once they occur. With the feedback information, our objective is to design the optimal online status updating policy to minimize the expected long-term average Age of Information (AoI) at the destination, subject to the energy causality constraint at the sensor. We propose a status updating policy called Best-effort Uniform updating with Retransmission (BUR), under which the source tries to equalize the delay between two successful updates as much as possible, and retransmits an update immediately if the previous transmission fails. We show that the BUR policy achieves the minimum expected long-term average AoI among a broad class of online policies.",
"A sensor node that is sending measurement updates regarding some physical phenomenon to a destination is considered. The sensor relies on energy harvested from nature to transmit its updates, and is equipped with a finite @math -sized battery to save its harvested energy. Energy recharges the battery incrementally in units, according to a Poisson process, and one update consumes one energy unit to reach the destination. The setting is online, where the energy arrival times are revealed causally after the energy is harvested. The goal is to update the destination in a timely manner, namely, such that the long term average age of information is minimized, subject to energy causality constraints. The age of information at a given time is defined as the time spent since the latest update has reached the destination. It is shown that the optimal update policy follows a renewal structure, where the inter-update times are independent, and the time durations between any two consecutive events of submitting an update and having @math units of energy remaining in the battery are independent and identically distributed for a given @math . The optimal renewal policy for the case of @math energy units is explicitly characterized, and it is shown that it has an energy-dependent threshold structure, where the sensor updates only if the age grows above a certain threshold that is a function of the amount of energy in its battery.",
"Age of Information is a measure of the freshness of status updates in monitoring applications and update-based systems. We study a real-time sensing scenario with a sensor which is restricted by time-varying energy constraints and battery limitations. The sensor sends updates over a packet erasure channel with no feedback. The problem of finding an age-optimal threshold policy, with the transmission threshold being a function of the energy state and the estimated current age, is formulated. The average age is analyzed for the unit battery scenario under a memoryless energy arrival process. Somewhat surprisingly, for any finite arrival rate of energy, there is a positive age threshold for transmission, which corresponding to transmitting at a rate lower than that dictated by the rate of energy arrivals. A lower bound on the average age is obtained for general battery size.",
"In this paper, we consider a scenario where an energy harvesting sensor continuously monitors a system and sends time-stamped status updates to a destination. The destination keeps track of the system status through the received updates. We use the metric Age of Information (AoI), the time that has elapsed since the last received update was generated, to measure the “freshness” of the status information available at the destination. Our objective is to design optimal online status update policies to minimize the long-term average AoI, subject to the energy causality constraint at the sensor. We consider three scenarios, i.e., the battery size is infinite, finite, and one unit only, respectively. For the infinite battery scenario, we adopt a best-effort uniform status update policy and show that it minimizes the long-term average AoI. For the finite battery scenario, we adopt an energy-aware adaptive status update policy, and prove that it is asymptotically optimal when the battery size goes to infinity. For the last scenario where the battery size is one, we first show that within a broadly defined class of online policies, the optimal policy should have a renewal structure. We then focus on a renewal interval, and prove that the optimal policy should have a threshold structure, i.e., if the AoI in the system is below a threshold when an energy arrival enters an empty battery, the sensor should store the energy first and then update when the AoI reaches the threshold; otherwise, it updates the status immediately. Simulation results corroborate the theoretical bounds.",
"",
"An energy-harvesting sensor node that is sending status updates to a destination is considered. The sensor is equipped with a battery of finite size to save its incoming energy, and consumes one unit of energy per status update transmission, which is delivered to the destination instantly over an error-free channel. The setting is online in which the harvested energy is revealed to the sensor causally over time, and the goal is to design status update transmission policy such that the long term average age of information (AoI) is minimized. AoI is defined as the time elapsed since the latest update has reached at the destination. Two energy arrival models are considered: a random battery recharge (RBR) model, and an incremental battery recharge (IBR) model. In both models, energy arrives according to a Poisson process with unit rate, with values that completely fill up the battery in the RBR model, and with values that fill up the battery incrementally, unit-by-unit, in the IBR model. The key approach to characterizing the optimal status update policy for both models is showing the optimality of renewal policies, in which the inter-update times follow a specific renewal process that depends on the energy arrival model and the battery size. It is then shown that the optimal renewal policy has an energy-dependent threshold structure, in which the sensor sends a status update only if the AoI grows above a certain threshold that depends on the energy available. For both the RBR and the IBR models, the optimal energy-dependent thresholds are characterized explicitly, i.e., in closed-form, in terms of the optimal long term average AoI. It is also shown that the optimal thresholds are monotonically decreasing in the energy available in the battery, and that the smallest threshold, which comes in effect when the battery is full, is equal to the optimal long term average AoI.",
"We study the problem of minimizing the time-average expected Age of Information for status updates sent by an energy-harvesting source with a finite-capacity battery. In prior literature, optimal policies were observed to have a threshold structure under Poisson energy arrivals, for the special case of a unit-capacity battery. In this paper, we generalize this result to any (integer) battery capacity, and explicitly characterize the threshold structure. We provide the expressions relating the threshold values on the age to the average age. One of these results, that we derive from these expressions, is the unexpected equivalence of the minimum average AoI and the optimal threshold for the highest energy state.",
"We consider managing the freshness of status updates sent from a source (such as a sensor) to a monitoring node. The time-varying availability of energy at the sender limits the rate of update packet transmissions. At any time, the age of information is defined as the amount of time since the most recent update was successfully received. An offline solution that minimizes not only the time average age, but also the peak age for an arbitrary energy replenishment profile is derived. The related decision problem under stochastic energy arrivals at the sender is studied through a discrete time dynamic programming formulation, and the structure of the optimal policy that minimizes the expected age is shown. It is found that tracking the expected value of the current age (which is a linear operation), together with the knowledge of the current energy level at the sender side is sufficient for generating an optimal threshold policy. An effective online heuristic, Balance Updating (BU), that achieves performance close to an omniscient (offline) policy is proposed. Simulations of the policies indicate that they can significantly improve the age over greedy approaches. An extension of the formulation to stochastically formed updates is considered."
]
} |
1905.06821 | 2951089710 | We consider the problem of adaptively placing sensors along an interval to detect stochastically-generated events. We present a new formulation of the problem as a continuum-armed bandit problem with feedback in the form of partial observations of realisations of an inhomogeneous Poisson process. We design a solution method by combining Thompson sampling with nonparametric inference via increasingly granular Bayesian histograms and derive an @math bound on the Bayesian regret in @math rounds. This is coupled with the design of an efficient optimisation approach to select actions in polynomial time. In simulations we demonstrate our approach to have substantially lower and less variable regret than competitor algorithms. | The continuum-armed bandit (CAB) model @cite_13 is an infinitely-many-armed extension of the classic multi-armed bandit (MAB) problem. There are two main classes of algorithms for CAB problems: discretisation-based approaches, which select from a discrete subset of the continuous action space at each iteration, and approaches which make decisions directly on the whole action space. Our proposed method belongs to the former class. Early discretisation-based approaches focused on fixed discretisation @cite_8 @cite_0 , with more recent approaches typically using adaptive discretisations such as a ``zooming'' approach @cite_17 or a tree-based structure @cite_1 @cite_5 @cite_15 to manage the exploration. Authors who handle the full continuous action space typically use Gaussian process models to capture uncertainty in the unknown continuous function and balance exploration-exploitation in light of this @cite_2 @cite_11 @cite_4 . As mentioned in Section , our problem can map into a CAB, but since our information structure is more complex, our action space has dimension greater than 1, and the stochastic components have heavier tails than usual, standard algorithms and results do not apply. | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_17"
],
"mid": [
"2963271096",
"2620530181",
"2097487180",
"",
"",
"2951665052",
"",
"2183438559",
"2010189695",
"2115519224"
],
"abstract": [
"",
"We consider the global optimization of a function over a continuous domain. At every evaluation attempt, we can observe the function at a chosen point in the domain and we reap the reward of the value observed. We assume that drawing these observations are expensive and noisy. We frame it as a continuum-armed bandit problem with a Gaussian Process prior on the function. In this regime, most algorithms have been developed to minimize some form of regret. Contrary to this popular norm, in this paper, we study the convergence of the sequential point @math to the global optimizer @math for the Thompson Sampling approach. Under some assumptions and regularity conditions, we show an exponential rate of convergence to the true optimal.",
"In the multi-armed bandit problem, an online algorithm must choose from a set of strategies in a sequence of n trials so as to minimize the total cost of the chosen strategies. While nearly tight upper and lower bounds are known in the case when the strategy set is finite, much less is known when there is an infinite strategy set. Here we consider the case when the set of strategies is a subset of ℝd, and the cost functions are continuous. In the d = 1 case, we improve on the best-known upper and lower bounds, closing the gap to a sublogarithmic factor. We also consider the case where d > 1 and the cost functions are convex, adapting a recent online convex optimization algorithm of Zinkevich to the sparser feedback model of the multi-armed bandit problem.",
"",
"",
"Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.",
"",
"We study the problem of black-box optimization of a function f of any dimension, given function evaluations perturbed by noise. The function is assumed to be locally smooth around one of its global optima, but this smoothness is unknown. Our contribution is an adaptive optimization algorithm, POO or parallel optimistic optimization, that is able to deal with this setting. POO performs almost as well as the best known algorithms requiring the knowledge of the smoothness. Furthermore, POO works for a larger class of functions than what was previously considered, especially for functions that are difficult to optimize, in a very precise sense. We provide a finite-time analysis of POO's performance, which shows that its error after n evaluations is at most a factor of √ln n away from the error of the best known optimization algorithms using the knowledge of the smoothness.",
"In this paper we consider the multiarmed bandit problem where the arms are chosen from a subset of the real line and the mean rewards are assumed to be a continuous function of the arms. The problem with an infinite number of arms is much more difficult than the usual one with a finite number of arms because the built-in learning task is now infinite dimensional. We devise a kernel estimator-based learning scheme for the mean reward as a function of the arms. Using this learning scheme, we construct a class of certainty equivalence control with forcing schemes and derive asymptotic upper bounds on their learning loss. To the best of our knowledge, these bounds are the strongest rates yet available. Moreover, they are stronger than the @math required for optimality with respect to the average-cost-per-unit-time criterion.",
"In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of @math trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the \"Lipschitz MAB problem\". We present a complete solution for the multi-armed problem in this setting. That is, for every metric space (L,X) we define an isometry invariant Max Min COV(X) which bounds from below the performance of Lipschitz MAB algorithms for @math , and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions."
]
} |
1905.06906 | 2946008934 | Domain Adaptation explores the idea of how to maximize performance on a target domain, distinct from the source domain upon which the classifier was trained. This idea has been explored extensively for the task of sentiment analysis. Training on reviews pertaining to one domain and evaluating on another domain is widely studied for modeling a domain-independent algorithm. This further helps in understanding the correlation between domains. In this paper, we show that Gated Convolutional Neural Networks (GCN) perform effectively at learning sentiment analysis in a manner where domain-dependent knowledge is filtered out using their gates. We perform our experiments on multiple gate architectures: Gated Tanh ReLU Unit (GTRU), Gated Tanh Unit (GTU) and Gated Linear Unit (GLU). Extensive experimentation on two standard datasets relevant to the task reveals that training with Gated Convolutional Neural Networks gives significantly better performance on target domains than regular convolution and recurrent based architectures. While complex architectures like attention filter domain-specific knowledge as well, their complexity is remarkably higher compared to gated architectures. GCNs rely on convolution, hence gaining an upper hand through parallelization. | Traditionally, methods for tackling Domain Adaptation are lexicon based. Blitzer @cite_9 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. The SFA method @cite_0 argues that pivot features selected from the source domain cannot guarantee a representation of the target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain-independent words by simultaneously co-clustering them in a common latent space.
SDA @cite_28 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu @cite_11 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN). | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_28",
"@cite_11"
],
"mid": [
"2153353890",
"2163302275",
"22861983",
"2567698949"
],
"abstract": [
"Sentiment classification aims to automatically predict sentiment polarity (e.g., positive or negative) of users publishing sentiment data (e.g., reviews, blogs). Although traditional classification algorithms can be used to train sentiment classifiers from manually labeled text data, the labeling work can be time-consuming and expensive. Meanwhile, users often use some different words when they express sentiment in different domains. If we directly apply a classifier trained in one domain to other domains, the performance will be very low due to the differences between these domains. In this work, we develop a general solution to sentiment classification when we do not have any labels in a target domain but have some labeled data in a different domain, regarded as source domain. In this cross-domain sentiment classification setting, to bridge the gap between the domains, we propose a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, with the help of domain-independent words as a bridge. In this way, the clusters can be used to reduce the gap between domain-specific words of the two domains, which can be used to train sentiment classifiers in the target domain accurately. Compared to previous approaches, SFA can discover a robust representation for cross-domain data by fully exploiting the relationship between the domain-specific and domain-independent words via simultaneously co-clustering them in a common latent space. We perform extensive experiments on two real world datasets, and demonstrate that SFA significantly outperforms previous approaches to cross-domain sentiment classification.",
"Automatic sentiment classification has been extensively studied and applied in recent years. However, sentiment is expressed differently in different domains, and annotating corpora for every possible domain of interest is impractical. We investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. First, we extend to sentiment classification the recently-proposed structural correspondence learning (SCL) algorithm, reducing the relative error due to adaptation between domains by an average of 30 over the original SCL algorithm and 46 over a supervised baseline. Second, we identify a measure of domain similarity that correlates well with the potential for adaptation of a classifier from one domain to another. This measure could for instance be used to select a small set of domains to annotate whose trained classifiers would transfer well to many other domains.",
"The exponential increase in the availability of online reviews and recommendations makes sentiment classification an interesting topic in academic and industrial research. Reviews can span so many different domains that it is difficult to gather annotated training data for all of them. Hence, this paper studies the problem of domain adaptation for sentiment classifiers, hereby a system is trained on labeled reviews from one source domain but is meant to be deployed on another. We propose a deep learning approach which learns to extract a meaningful representation for each review in an unsupervised fashion. Sentiment classifiers trained with this high-level feature representation clearly outperform state-of-the-art methods on a benchmark composed of reviews of 4 types of Amazon products. Furthermore, this method scales well and allowed us to successfully perform domain adaptation on a larger industrial-strength dataset of 22 domains.",
""
]
} |
1905.06906 | 2946008934 | Domain Adaptation explores the idea of how to maximize performance on a target domain, distinct from the source domain upon which the classifier was trained. This idea has been explored extensively for the task of sentiment analysis. Training on reviews pertaining to one domain and evaluating on another domain is widely studied for modeling a domain-independent algorithm. This further helps in understanding the correlation between domains. In this paper, we show that Gated Convolutional Neural Networks (GCN) perform effectively at learning sentiment analysis in a manner where domain-dependent knowledge is filtered out using their gates. We perform our experiments on multiple gate architectures: Gated Tanh ReLU Unit (GTRU), Gated Tanh Unit (GTU) and Gated Linear Unit (GLU). Extensive experimentation on two standard datasets relevant to the task reveals that training with Gated Convolutional Neural Networks gives significantly better performance on target domains than regular convolution and recurrent based architectures. While complex architectures like attention filter domain-specific knowledge as well, their complexity is remarkably higher compared to gated architectures. GCNs rely on convolution, hence gaining an upper hand through parallelization. | Gated convolutional neural networks have achieved state-of-the-art results in language modelling @cite_22 . Since then, they have been used in different areas of natural language processing (NLP) such as sentence similarity @cite_29 and aspect-based sentiment analysis @cite_2 . | {
"cite_N": [
"@cite_29",
"@cite_22",
"@cite_2"
],
"mid": [
"2888868988",
"2963970792",
"2798590591"
],
"abstract": [
"",
"The pre-dominant approach to language modeling to date is based on recurrent neural networks. Their success on this task is often linked to their ability to capture unbounded context. In this paper we develop a finite context approach through stacked convolutions, which can be more efficient since they allow parallelization over sequential tokens. We propose a novel simplified gating mechanism that outperforms (2016b) and investigate the impact of key architectural decisions. The proposed approach achieves state-of-the-art on the WikiText-103 benchmark, even though it features long-term dependencies, as well as competitive results on the Google Billion Words benchmark. Our model reduces the latency to score a sentence by an order of magnitude compared to a recurrent baseline. To our knowledge, this is the first time a non-recurrent approach is competitive with strong recurrent models on these large scale language tasks.",
"Aspect based sentiment analysis (ABSA) can provide more detailed information than general sentiment analysis, because it aims to predict the sentiment polarities of the given aspects or entities in text. We summarize previous approaches into two subtasks: aspect-category sentiment analysis (ACSA) and aspect-term sentiment analysis (ATSA). Most previous approaches employ long short-term memory and attention mechanisms to predict the sentiment polarity of the concerned targets, which are often complicated and need more training time. We propose a model based on convolutional neural networks and gating mechanisms, which is more accurate and efficient. First, the novel Gated Tanh-ReLU Units can selectively output the sentiment features according to the given aspect or entity. The architecture is much simpler than attention layer used in the existing models. Second, the computations of our model could be easily parallelized during training, because convolutional layers do not have time dependency as in LSTM layers, and gating units also work independently. The experiments on SemEval datasets demonstrate the efficiency and effectiveness of our models."
]
} |
1905.06650 | 2944875204 | Video caching has been a basic network functionality in today's network architectures. Although an abundance of caching replacement algorithms has been proposed recently, these methods all suffer from a key limitation: due to their immature rules, inaccurate feature engineering or unresponsive model updates, they cannot strike a balance between the long-term history and short-term sudden events. To address this concern, we propose LA-E2, a long-short-term fusion caching replacement approach, which is based on a learning-aided exploration-exploitation process. Specifically, by effectively combining the deep neural network (DNN) based prediction with the online exploitation-exploration process through a method, LA-E2 can both make use of the historical information and adapt to the constantly changing popularity responsively. Through extensive experiments on two real-world datasets, we show that LA-E2 can achieve state-of-the-art performance and generalize well. Especially when the cache size is small, our approach can outperform the baselines by 17.5%-68.7% in total hit rate. | Content caching has been a basic network functionality in today's network architectures such as content delivery networks (CDN) @cite_9 and 5G networking @cite_2 . Due to the simplicity of implementation, most currently used caching algorithms are still based on FIFO, LRU, and LFU @cite_10 . However, the effectiveness of these algorithms relies on assumptions about the request patterns (e.g., Poisson arrivals), which are often violated in the real world. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_2"
],
"mid": [
"2093375622",
"2051818334",
"2039090559"
],
"abstract": [
"Striking a balance between the costs for Web content providers and the quality of service for Web customers.",
"Content Delivery Networks (CDNs) differ from other caching systems in terms of both workload characteristics and performance metrics. However, there has been little prior work on large-scale measurement and characterization of content requests and caching performance in CDNs. For workload characteristics, CDNs deal with extremely large content volume, high content diversity, and strong temporal dynamics. For performance metrics, other than hit ratio, CDNs also need to minimize the disk operations and the volume of traffic from origin servers. In this paper, we conduct a large-scale measurement study to characterize the content request patterns using real-world data from a commercial CDN provider.",
"Small cells constitute a promising solution for managing the mobile data growth that has overwhelmed network operators. Local caching of popular content items at the small cell base stations (SBSs) has been proposed to decrease the costly transmissions from the macrocell base stations without requiring high capacity backhaul links for connecting the SBSs with the core network. However, the caching policy design is a challenging problem especially if one considers realistic parameters such as the bandwidth capacity constraints of the SBSs that can be reached in congested urban areas. We consider such a scenario and formulate the joint routing and caching problem aiming to maximize the fraction of content requests served locally by the deployed SBSs. This is an NP-hard problem and, hence, we cannot obtain an optimal solution. Thus, we present a novel reduction to a variant of the facility location problem, which allows us to exploit the rich literature of it, to establish algorithms with approximation guarantees for our problem. Although the reduction does not ensure tight enough bounds in general, extensive numerical results reveal a near-optimal performance that is even up to 38 better compared to conventional caching schemes using realistic system settings."
]
} |
1905.06650 | 2944875204 | Video caching has been a basic network functionality in today's network architectures. Although an abundance of caching replacement algorithms has been proposed recently, these methods all suffer from a key limitation: due to their immature rules, inaccurate feature engineering or unresponsive model updates, they cannot strike a balance between the long-term history and short-term sudden events. To address this concern, we propose LA-E2, a long-short-term fusion caching replacement approach, which is based on a learning-aided exploration-exploitation process. Specifically, by effectively combining the deep neural network (DNN) based prediction with the online exploitation-exploration process through a method, LA-E2 can both make use of the historical information and adapt to the constantly changing popularity responsively. Through extensive experiments on two real-world datasets, we show that LA-E2 can achieve state-of-the-art performance and generalize well. Especially when the cache size is small, our approach can outperform the baselines by 17.5%-68.7% in total hit rate. | Inspired by the success of deep learning methods, recent work has paid attention to using DNNs to solve the caching problem. In @cite_0 , the authors propose an approach named ``DeepCache'', which directly uses a long short-term memory (LSTM) network to predict popularity. In @cite_7 , researchers use a sequence-to-sequence model, also based on LSTM, to predict the future characteristics of each content item. Nevertheless, these algorithms may suffer from prediction bias due to unresponsive model updates, which keeps them away from the optimal solution. | {
"cite_N": [
"@cite_0",
"@cite_7"
],
"mid": [
"2912078769",
"2885965959"
],
"abstract": [
"The emerging 5G mobile networking promises ultrahigh network bandwidth and ultra-low communication latency ( 100ms), due to its store-and-forward design and the physical barrier from signal propagation speed, not to mention congestion that frequently happens. Caching is known to be effective to bridge the speed gap, which has become a critical component in the 5G deployment as well. Besides storage, 5G base stations (BSs) will also be powered with strong computing modules, offering mobile edge computing (MEC) capability. This paper explores the potentials of edge computing towards improving the cache performance, and we envision a learning-based framework that facilitates smart caching beyond simple frequency- and time-based replace strategies and cooperation among base stations. Within this framework, we develop DeepCache, a deep-learning-based solution to understand the request patterns in individual base stations and accordingly make intelligent cache decisions. Using mobile video, one of the most popular applications with high traffic demand, as a case, we further develop a cooperation strategy for nearby base stations to collectively serve user requests. Experimental results on real-world dataset show that using the collaborative DeepCache algorithm, the overall transmission delay is reduced by 14%∼22%, with a backhaul data traffic saving of 15%∼23%.",
"In this paper, we present DEEPCACHE a novel Framework for content caching, which can significantly boost cache performance. Our Framework is based on powerful deep recurrent neural network models. It comprises of two main components: i) Object Characteristics Predictor, which builds upon deep LSTM Encoder-Decoder model to predict the future characteristics of an object (such as object popularity) -- to the best of our knowledge, we are the first to propose LSTM Encoder-Decoder model for content caching; ii) a caching policy component, which accounts for predicted information of objects to make smart caching decisions. In our thorough experiments, we show that applying DEEPCACHE Framework to existing cache policies, such as LRU and k-LRU, significantly boosts the number of cache hits."
]
} |
1905.06642 | 2946649969 | We consider the problem of recovering a common latent source with independent components from multiple views. This applies to settings in which a variable is measured with multiple experimental modalities, and where the goal is to synthesize the disparate measurements into a single unified representation. We consider the case that the observed views are a nonlinear mixing of component-wise corruptions of the sources. When the views are considered separately, this reduces to nonlinear Independent Component Analysis (ICA) for which it is provably impossible to undo the mixing. We present novel identifiability proofs that this is possible when the multiple views are considered jointly, showing that the mixing can theoretically be undone using function approximators such as deep neural networks. In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available. | Given two (or more) random variables, the goal of Canonical Correlation Analysis (CCA) @cite_28 is to find a corresponding pair of linear subspaces that have high cross-correlation, so that each component within one of the subspaces is correlated with a single component from the other subspace @cite_14 . In dealing with correlation instead of independence, CCA is more closely related to Principal Component Analysis (PCA) than to ICA. | {
"cite_N": [
"@cite_28",
"@cite_14"
],
"mid": [
"2025341678",
"1663973292"
],
"abstract": [
"Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each",
"Christopher M. Bishop, Information Science and Statistics, Springer 2006, 738 pages. As the author writes in the preface of the book, pattern recognition has its origin in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and they have undergone substantial development over the past years. Bayesian methods are widely used, while graphical models have emerged as a general framework for describing and applying probabilistic models. Similarly, new models based on kernels have had significant impact on both algorithms and applications. This textbook reflects these recent developments while providing a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduate or first year PhD students, as well as researchers and practitioners. It can be considered as an introductory course to the subject. The first four chapters are devoted to the concepts of Probability and Statistics that are needed for reading the rest of the book, so we can imagine that the speed is high in order to get from zero to infinity. I believe that it is better to study the book after a previous course on Probability and Statistics. On the other hand, a basic knowledge of linear algebra and multivariate calculus is assumed. The other chapters give to a classic probabilist or statistician a point of view on some applications that are very interesting but far from his usual world. In all the text the mathematical aspects are at the second level in relation with the ideas and intuitions that the author wants to communicate. The book is supported by a great deal of additional material, including lecture slides as well as the complete set of figures used in it, and the reader is encouraged to visit the book web site for the latest information. So it can be very useful for a course or a talk about the subject."
]
} |
1905.06642 | 2946649969 | We consider the problem of recovering a common latent source with independent components from multiple views. This applies to settings in which a variable is measured with multiple experimental modalities, and where the goal is to synthesize the disparate measurements into a single unified representation. We consider the case that the observed views are a nonlinear mixing of component-wise corruptions of the sources. When the views are considered separately, this reduces to nonlinear Independent Component Analysis (ICA) for which it is provably impossible to undo the mixing. We present novel identifiability proofs that this is possible when the multiple views are considered jointly, showing that the mixing can theoretically be undone using function approximators such as deep neural networks. In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available. | CCA can be interpreted probabilistically @cite_33 and is equivalent to maximum likelihood estimation in a graphical model which is a special case of that depicted in Figure . The differences compared to our setting are (i) the latent components retrieved in CCA are forced to be uncorrelated, whereas our method retrieves independent components; (ii) in CCA, mappings between the sources @math and @math are linear, whereas our method allows for nonlinear mappings. | {
"cite_N": [
"@cite_33"
],
"mid": [
"2125290066"
],
"abstract": [
"We give a probabilistic interpretation of canonical correlation (CCA) analysis as a latent variable model for two Gaussian random vectors. Our interpretation is similar to the probabilistic interpretation of principal component analysis (Tipping and Bishop, 1999, Roweis, 1998). In addition, we cast Fisher linear discriminant analysis (LDA) within the CCA framework."
]
} |
1905.06642 | 2946649969 | We consider the problem of recovering a common latent source with independent components from multiple views. This applies to settings in which a variable is measured with multiple experimental modalities, and where the goal is to synthesize the disparate measurements into a single unified representation. We consider the case that the observed views are a nonlinear mixing of component-wise corruptions of the sources. When the views are considered separately, this reduces to nonlinear Independent Component Analysis (ICA) for which it is provably impossible to undo the mixing. We present novel identifiability proofs that this is possible when the multiple views are considered jointly, showing that the mixing can theoretically be undone using function approximators such as deep neural networks. In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available. | Bearing a strong resemblance to our considered setting, @cite_15 proposes a sequence of diffusion maps to find the common source of variability captured by multiple sensors, discarding irrelevant sensor-specific effects. It computes the distance among the samples measured by different sensors to form a similarity matrix for the measurements of each sensor; each similarity matrix is then associated to a diffusion operator, which is a Markov matrix by construction. A Markov chain is then run by alternately applying these Markov matrices on the initial state. During these Markovian dynamics, sensor specific information will eventually vanish, and the final state will only contain information on the common source. While the method focuses on recovering the common information in the form of a parametrization of the common variable, our method both inverts the mixing mechanisms of each view and recovers the common latent variables. | {
"cite_N": [
"@cite_15"
],
"mid": [
"2179941082"
],
"abstract": [
"Abstract One of the challenges in data analysis is to distinguish between different sources of variability manifested in data. In this paper, we consider the case of multiple sensors measuring the same physical phenomenon, such that the properties of the physical phenomenon are manifested as a hidden common source of variability (which we would like to extract), while each sensor has its own sensor-specific effects (hidden variables which we would like to suppress); the relations between the measurements and the hidden variables are unknown. We present a data-driven method based on alternating products of diffusion operators and show that it extracts the common source of variability. Moreover, we show that it extracts the common source of variability in a multi-sensor experiment as if it were a standard manifold learning algorithm used to analyze a simple single-sensor experiment, in which the common source of variability is the only source of variability."
]
} |
1905.06642 | 2946649969 | We consider the problem of recovering a common latent source with independent components from multiple views. This applies to settings in which a variable is measured with multiple experimental modalities, and where the goal is to synthesize the disparate measurements into a single unified representation. We consider the case that the observed views are a nonlinear mixing of component-wise corruptions of the sources. When the views are considered separately, this reduces to nonlinear Independent Component Analysis (ICA) for which it is provably impossible to undo the mixing. We present novel identifiability proofs that this is possible when the multiple views are considered jointly, showing that the mixing can theoretically be undone using function approximators such as deep neural networks. In contrast to known identifiability results for nonlinear ICA, we prove that independent latent sources with arbitrary mixing can be recovered as long as multiple, sufficiently different noisy views are available. | Half-sibling regression @cite_42 is a method to reconstruct a source from noisy observations by exploiting other sources that are affected by the same noise process but otherwise independent from it. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2470207857"
],
"abstract": [
"Abstract We describe a method for removing the effect of confounders to reconstruct a latent quantity of interest. The method, referred to as “half-sibling regression,” is inspired by recent work in causal inference using additive noise models. We provide a theoretical justification, discussing both independent and identically distributed as well as time series data, respectively, and illustrate the potential of the method in a challenging astronomy application."
]
} |
1905.06860 | 2945998549 | Speech-driven visual speech synthesis involves mapping features extracted from acoustic speech to the corresponding lip animation controls for a face model. This mapping can take many forms, but a powerful approach is to use deep neural networks (DNNs). However, a limitation is the lack of synchronized audio, video, and depth data required to reliably train the DNNs, especially for speaker-independent models. In this paper, we investigate adapting an automatic speech recognition (ASR) acoustic model (AM) for the visual speech synthesis problem. We train the AM on ten thousand hours of audio-only data. The AM is then adapted to the visual speech synthesis domain using ninety hours of synchronized audio-visual speech. Using a subjective assessment test, we compared the performance of the AM-initialized DNN to one with a random initialization. The results show that viewers significantly prefer animations generated from the AM-initialized DNN than the ones generated using the randomly initialized model. We conclude that visual speech synthesis can significantly benefit from the powerful representation of speech in the ASR acoustic models. | To model the temporal effects of coarticulation, many variants of hidden Markov models (HMMs) have been proposed. Inspired by the task dynamics model of articulatory phonology, one approach adopted by some text-based systems is to concatenate context-dependent phone models and sample the maximum likelihood parameters, and then use these parameters to guide the selection of samples from real data @cite_35 @cite_36 . Alternatively, longer phone units can be used (e.g. quinphones) to better capture longer-term speech (and other visual prosodic) effects @cite_29 ; but, these models require increasingly large training sets. | {
"cite_N": [
"@cite_36",
"@cite_35",
"@cite_29"
],
"mid": [
"2104480821",
"2395252436",
"2129360799"
],
"abstract": [
"In this paper, we propose an HMM trajectory-guided, real image sample concatenation approach to photo-realistic talking head synthesis. An audio-visual database of a person is recorded first for training a statistical Hidden Markov Model (HMM) of Lips movement. The HMM is then used to generate the dynamic trajectory of lips movement for given speech signals in the maximum probability sense. The generated trajectory is then used as a guide to select, from the original training database, an optimal sequence of lips images which are then stitched back to a background head video. We also propose a minimum generation error (MGE) training method to refine the audio-visual HMM to improve visual speech trajectory synthesis. Compared with the traditional maximum likelihood (ML) estimation, the proposed MGE training explicitly optimizes the quality of generated visual speech trajectory, where the audio-visual HMM modeling is jointly refined by using a heuristic method to find the optimal state alignment and a probabilistic descent algorithm to optimize the model parameters under the MGE criterion. In objective evaluation, compared with the ML-based method, the proposed MGE-based method achieves consistent improvement in the mean square error reduction, correlation increase, and recovery of global variance. For as short as 20 min recording of audio video footage, the proposed system can synthesize a highly photo-realistic talking head in sync with the given speech signals (natural or TTS synthesized). This system won the first place in the A/V consistency contest in LIPS Challenge, perceptually evaluated by recruited human subjects.",
"",
"This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems."
]
} |
1905.06860 | 2945998549 | Speech-driven visual speech synthesis involves mapping features extracted from acoustic speech to the corresponding lip animation controls for a face model. This mapping can take many forms, but a powerful approach is to use deep neural networks (DNNs). However, a limitation is the lack of synchronized audio, video, and depth data required to reliably train the DNNs, especially for speaker-independent models. In this paper, we investigate adapting an automatic speech recognition (ASR) acoustic model (AM) for the visual speech synthesis problem. We train the AM on ten thousand hours of audio-only data. The AM is then adapted to the visual speech synthesis domain using ninety hours of synchronized audio-visual speech. Using a subjective assessment test, we compared the performance of the AM-initialized DNN to one with a random initialization. The results show that viewers significantly prefer animations generated from the AM-initialized DNN than the ones generated using the randomly initialized model. We conclude that visual speech synthesis can significantly benefit from the powerful representation of speech in the ASR acoustic models. | Using HMMs to model complex multimodal signals has limitations because only a single hidden state is allowed in each time frame. This restriction means that many more states are required than would otherwise be necessary to capture the complexities of the cross-modal dynamics. To overcome this problem, dynamic Bayesian networks (DBNs) with Baum-Welch DBN inversion can be used to model the cross-modal dependencies and perform the audio-to-visual conversion @cite_8 . | {
"cite_N": [
"@cite_8"
],
"mid": [
"2114336453"
],
"abstract": [
"This paper presents an articulatory modelling approach to convert acoustic speech into realistic mouth animation. We directly model the movements of articulators, such as lips, tongue, and teeth, using a dynamic Bayesian network (DBN)-based audio-visual articulatory model (AVAM). A multiple-stream structure with a shared articulator layer is adopted in the model to synchronously associate the two building blocks of speech, i.e., audio and video. This model not only describes the synchronization between visual articulatory movements and audio speech, but also reflects the linguistic fact that different articulators evolve asynchronously. We also present a Baum-Welch DBN inversion (DBNI) algorithm to generate optimal facial parameters from audio given the trained AVAM under maximum likelihood (ML) criterion. Extensive objective and subjective evaluations on the JEWEL audio-visual dataset demonstrate that compared with phonemic HMM approaches, facial parameters estimated by our approach follow the true parameters more accurately, and the synthesized facial animation sequences are so lively that 38% of them are undistinguishable"
]
} |
1905.06860 | 2945998549 | Speech-driven visual speech synthesis involves mapping features extracted from acoustic speech to the corresponding lip animation controls for a face model. This mapping can take many forms, but a powerful approach is to use deep neural networks (DNNs). However, a limitation is the lack of synchronized audio, video, and depth data required to reliably train the DNNs, especially for speaker-independent models. In this paper, we investigate adapting an automatic speech recognition (ASR) acoustic model (AM) for the visual speech synthesis problem. We train the AM on ten thousand hours of audio-only data. The AM is then adapted to the visual speech synthesis domain using ninety hours of synchronized audio-visual speech. Using a subjective assessment test, we compared the performance of the AM-initialized DNN to one with a random initialization. The results show that viewers significantly prefer animations generated from the AM-initialized DNN than the ones generated using the randomly initialized model. We conclude that visual speech synthesis can significantly benefit from the powerful representation of speech in the ASR acoustic models. | Increasingly, deep neural network (DNN) based models are used for audio-to-visual inversion. Architectures used include fully-connected feedforward networks @cite_28 , recurrent neural networks (RNNs) and/or long short-term memory (LSTM) models @cite_18 @cite_4 @cite_15 @cite_26 , and generative adversarial networks @cite_25 @cite_10 . Many approaches are trained end-to-end to map directly from speech to video, but the approach by @cite_28 achieves speaker-independence using a phonemic transcription as input. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_28",
"@cite_15",
"@cite_10",
"@cite_25"
],
"mid": [
"1569907127",
"2738406145",
"2762899171",
"2737658251",
"2796931171",
"2804600264",
"2790649793"
],
"abstract": [
"Long short-term memory (LSTM) is a specific recurrent neural network (RNN) architecture that is designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose to use deep bidirectional LSTM (BLSTM) for audio visual modeling in our photo-real talking head system. An audio visual database of a subject's talking is firstly recorded as our training data. The audio visual stereo data are converted into two parallel temporal sequences, i.e., contextual label sequences obtained by forced aligning audio against text, and visual feature sequences by applying active-appearance-model (AAM) on the lower face region among all the training image samples. The deep BLSTM is then trained to learn the regression model by minimizing the sum of square error (SSE) of predicting visual sequence from label sequence. After testing different network topologies, we interestingly found the best network is two BLSTM layers sitting on top of one feed-forward layer on our datasets. Compared with our previous HMM-based system, the newly proposed deep BLSTM-based one is better on both objective measurement and subjective A/B test.",
"Given audio of President Barack Obama, we synthesize a high quality video of him speaking with accurate lip sync, composited into a target video clip. Trained on many hours of his weekly address footage, a recurrent neural network learns the mapping from raw audio features to mouth shapes. Given the mouth shape at each time instant, we synthesize high quality mouth texture, and composite it with proper 3D pose matching to change what he appears to be saying in a target video to match the input audio track. Our approach produces photorealistic results.",
"We present a deep learning framework for real-time speech-driven 3D facial animation from just raw waveforms. Our deep neural network directly maps an input sequence of speech audio to a series of micro facial action unit activations and head rotations to drive a 3D blendshape face model. In particular, our deep model is able to learn the latent representations of time-varying contextual information and affective states within the speech. Hence, our model not only activates appropriate facial action units at inference to depict different utterance generating actions, in the form of lip movements, but also, without any assumption, automatically estimates emotional intensity of the speaker and reproduces her ever-changing affective states by adjusting strength of facial unit activations. For example, in a happy speech, the mouth opens wider than normal, while other facial units are relaxed; or in a surprised state, both eyebrows raise higher. Experiments on a diverse audiovisual corpus of different actors across a wide range of emotional states show interesting and promising results of our approach. Being speaker-independent, our generalized model is readily applicable to various tasks in human-machine interaction and animation.",
"We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. Our approach uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. Our deep learning approach enjoys several attractive properties: it runs in real-time, requires minimal parameter tuning, generalizes well to novel input speech sequences, is easily edited to create stylized and emotional speech, and is compatible with existing animation retargeting approaches. One important focus of our work is to develop an effective approach for speech animation that can be easily integrated into existing production pipelines. We provide a detailed description of our end-to-end approach, including machine learning design decisions. Generalized speech animation results are demonstrated over a wide range of animation clips on a variety of characters and voices, including singing and foreign language input. Our approach can also generate on-demand speech animation in real-time from user speech input.",
"Given an arbitrary face image and an arbitrary speech clip, the proposed work attempts to generating the talking face video with accurate lip synchronization while maintaining smooth transition of both lip and facial movement over the entire video clip. Existing works either do not consider temporal dependency on face images across different video frames thus easily yielding noticeable abrupt facial and lip movement or are only limited to the generation of talking face video for a specific person thus lacking generalization capacity. We propose a novel conditional video generation network where the audio input is treated as a condition for the recurrent adversarial network such that temporal dependency is incorporated to realize smooth transition for the lip and facial movement. In addition, we deploy a multi-task adversarial training scheme in the context of video generation to improve both photo-realism and the accuracy for lip synchronization. Finally, based on the phoneme distribution information extracted from the audio clip, we develop a sample selection method that effectively reduces the size of the training dataset without sacrificing the quality of the generated video. Extensive experiments on both controlled and uncontrolled datasets demonstrate the superiority of the proposed approach in terms of visual quality, lip sync accuracy, and smooth transition of lip and facial movement, as compared to the state-of-the-art.",
"Speech-driven facial animation is the process which uses speech signals to automatically synthesize a talking character. The majority of work in this domain creates a mapping from audio features to visual features. This often requires post-processing using computer graphics techniques to produce realistic albeit subject dependent results. We present a system for generating videos of a talking head, using a still image of a person and an audio clip containing speech, that doesn't rely on any handcrafted intermediate features. To the best of our knowledge, this is the first method capable of generating subject independent realistic videos directly from raw audio. Our method can generate videos which have (a) lip movements that are in sync with the audio and (b) natural facial expressions such as blinks and eyebrow movements. We achieve this by using a temporal GAN with 2 discriminators, which are capable of capturing different aspects of the video. The effect of each component in our system is quantified through an ablation study. The generated videos are evaluated based on their sharpness, reconstruction quality, and lip-reading accuracy. Finally, a user study is conducted, confirming that temporal GANs lead to more natural sequences than a static GAN-based approach.",
"We present a novel approach to generating photo-realistic images of a face with accurate lip sync, given an audio input. By using a recurrent neural network, we achieved mouth landmarks based on audio features. We exploited the power of conditional generative adversarial networks to produce highly-realistic face conditioned on a set of landmarks. These two networks together are capable of producing a sequence of natural faces in sync with an input audio track."
]
} |
1905.06625 | 2966820671 | In recent decades, it has become a significant tendency for industrial manufacturers to adopt decentralization as a new manufacturing paradigm. This enables more efficient operations and facilitates the shift from mass to customized production. At the same time, advances in data analytics give more insights into the production lines, thus improving its overall productivity. The primary objective of this paper is to apply a decentralized architecture to address new challenges in industrial analytics. The main contributions of this work are therefore two-fold: (1) an assessment of the microservices’ feasibility in industrial environments, and (2) a microservices-based architecture for industrial data analytics. Also, a prototype has been developed, analyzed, and evaluated, to provide further practical insights. Initial evaluation results of this prototype underpin the adoption of microservices in industrial analytics with less than 20ms end-to-end processing latency for predicting movement paths for 100 autonomous robots on a commodity hardware server. However, it also identifies several drawbacks of the approach, which is, among others, the complexity in structure, leading to higher resource consumption. | @cite_19 propose a simulation-based architecture for Cyber-Physical Systems (CPSs, i.e., systems with seamless, real-time interaction between computing elements and physical assets using intelligent data management, analytics and computational capability @cite_7) at shop-floor level, providing an environment for Digital Twins (representations of real-world assets, including assets in the design and building stage, created with the ability to collect and synthesize data from various sources including physical data, manufacturing data, operational data and insights from analytics software @cite_2) along the whole plant life-cycle.
The proposed platform implements a microservice IoT-Big Data architecture supporting the publication of multidisciplinary simulation models and managing streams of data coming from the shop-floor for real-digital synchronization. The microservices architecture is applied in their support infrastructure in order to manage the DTs. In our proposal, we extend this work by employing microservices also for simulating physical assets, to decentralize the whole system. Rather than storing all digital copies of physical assets in one place, building them as microservices allows more flexibility in deployment strategies. For each service, the best physical location for deployment can be determined based on various criteria. This is an important foundation for enabling automated and QoS-aware deployment strategies. | {
"cite_N": [
"@cite_19",
"@cite_7",
"@cite_2"
],
"mid": [
"2738102248",
"2029608738",
""
],
"abstract": [
"Abstract In recent years a considerable effort has been spent by research and industrial communities in the digitalization of production environments with the main objective of achieving a new automation paradigm, more flexible, responsive to changes, and safe. This paper presents the architecture, and discusses the benefits, of a distributed middleware prototype supporting a new generation of smart-factory-enabled applications with special attention paid to simulation tools. Devised within the scope of MAYA EU project, the proposed platform aims at being the first solution capable of empowering shop-floor Cyber-Physical-Systems (CPSs), providing an environment for their Digital Twin along the whole plant life-cycle. The platform implements a microservice IoT-Big Data architecture supporting the distributed publication of multidisciplinary simulation models, managing in an optimized way streams of data coming from the shop-floor for real-digital synchronization, ensuring security and confidentiality of sensible data.",
"Abstract Recent advances in manufacturing industry has paved way for a systematical deployment of Cyber-Physical Systems (CPS), within which information from all related perspectives is closely monitored and synchronized between the physical factory floor and the cyber computational space. Moreover, by utilizing advanced information analytics, networked machines will be able to perform more efficiently, collaboratively and resiliently. Such trend is transforming manufacturing industry to the next generation, namely Industry 4.0. At this early development phase, there is an urgent need for a clear definition of CPS. In this paper, a unified 5-level architecture is proposed as a guideline for implementation of CPS.",
""
]
} |
1905.06625 | 2966820671 | In recent decades, it has become a significant tendency for industrial manufacturers to adopt decentralization as a new manufacturing paradigm. This enables more efficient operations and facilitates the shift from mass to customized production. At the same time, advances in data analytics give more insights into the production lines, thus improving its overall productivity. The primary objective of this paper is to apply a decentralized architecture to address new challenges in industrial analytics. The main contributions of this work are therefore two-fold: (1) an assessment of the microservices’ feasibility in industrial environments, and (2) a microservices-based architecture for industrial data analytics. Also, a prototype has been developed, analyzed, and evaluated, to provide further practical insights. Initial evaluation results of this prototype underpin the adoption of microservices in industrial analytics with less than 20ms end-to-end processing latency for predicting movement paths for 100 autonomous robots on a commodity hardware server. However, it also identifies several drawbacks of the approach, which is, among others, the complexity in structure, leading to higher resource consumption. | @cite_20 describe a 5-layer framework based on microservices for manufacturing systems. Each physical unit of the plant is transformed to a smart entity, named a (CPMS). The authors define two main types of CPMS: , which encapsulates a physical artifact and transform it to a smart entity, and , which utilizes at least one . In this work, the authors focus on the architecture of each individual CPMS and evaluate the overhead of microservice orchestration. Our paper is complementary to this work, as it proposes a complete architecture for utilizing CPMSs. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2964247905"
],
"abstract": [
"Recent advances in ICT enable the evolution of the manufacturing industry to meet the new requirements of the society. Cyber-physical systems, Internet-of-Things (IoT), and Cloud computing, play a key role in the fourth industrial revolution known as Industry 4.0. The microservice architecture has evolved as an alternative to SOA and promises to address many of the challenges in software development. In this paper, we adopt the concept of microservice and describe a framework for manufacturing systems that has the cyber-physical microservice as the key construct. The manufacturing plant processes are defined as compositions of primitive cyber-physical microservices adopting either the orchestration or the choreography pattern. IoT technologies are used for system integration and model-driven engineering is utilized to semi-automate the development process for the industrial engineer, who is not familiar with microservices and IoT. Two case studies demonstrate the feasibility of the proposed approach."
]
} |
1905.06625 | 2966820671 | In recent decades, it has become a significant tendency for industrial manufacturers to adopt decentralization as a new manufacturing paradigm. This enables more efficient operations and facilitates the shift from mass to customized production. At the same time, advances in data analytics give more insights into the production lines, thus improving its overall productivity. The primary objective of this paper is to apply a decentralized architecture to address new challenges in industrial analytics. The main contributions of this work are therefore two-fold: (1) an assessment of the microservices’ feasibility in industrial environments, and (2) a microservices-based architecture for industrial data analytics. Also, a prototype has been developed, analyzed, and evaluated, to provide further practical insights. Initial evaluation results of this prototype underpin the adoption of microservices in industrial analytics with less than 20ms end-to-end processing latency for predicting movement paths for 100 autonomous robots on a commodity hardware server. However, it also identifies several drawbacks of the approach, which is, among others, the complexity in structure, leading to higher resource consumption. | The NIMBLE Collaborative Platform @cite_9 adopts a microservices architecture to build a collaborative Industry 4.0 platform that enables IoT-based real-time monitoring, optimization and negotiation in manufacturing supply chains. Beyond the core business functionality, the microservices in the architecture provide essential supporting services such as Gateway Proxy, Service Logging, Service Monitoring, Service Discovery, Service Configuration and Identity Management. The implementation uses either REST (i.e. HTTP) or messaging as the means of service communication. The authors describe from an architectural viewpoint how they use microservices to build a platform processing incoming data, but they do not address the scalability of the platform.
In our proposed architecture, we incorporate a self-management capability into the system by introducing a new component named . Its main responsibility is to monitor the load level of each microservice, and to scale a specific microservice up or down based on various criteria such as memory consumption, incoming message queue length, etc. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2750017056"
],
"abstract": [
"This paper presents our architectural approach to building a collaborative Industry 4.0 platform that enables IoT-based real-time monitoring, optimization and negotiation in manufacturing supply chains. The platform is called NIMBLE and is currently being developed in a European research project. The presented approach utilizes microservice technology, and implements the core business functionality of the platform through a composition of decentralized, scalable services. The communication among services, and with platform users, manufacturers, suppliers, sensors and Web resources, is supported through simple protocols and lightweight mechanisms. Core business services of the implemented architecture are released as open source software, enabling multiple prospective platform-providers to establish B2B marketplaces for collaboration within their own industrial sector or region. To demonstrate microservices in practice, we present two scenarios, both related to manufacturing of wooden home buildings: one is IoT-based data sharing in a supply chain, and the other deals with product driven logistics planning. The further development of the platform will be driven by the requirements of at least four different use cases throughout Europe, and by incorporating advanced business models to support the growth of powerful network effects of the platform."
]
} |
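The scaling component described in the row above monitors each microservice and scales it up or down based on criteria such as memory consumption and incoming message queue length. A minimal sketch of such a decision rule is given below; the function name, thresholds and single-replica scaling step are illustrative assumptions, not details taken from the paper.

```python
def desired_replicas(current, queue_len, mem_frac,
                     max_queue=100, max_mem=0.8):
    """Return the new replica count for one microservice.

    Scale up when either the incoming message queue length or the
    memory fraction exceeds its threshold; scale down (never below
    one replica) when both are well under their thresholds.
    """
    if queue_len > max_queue or mem_frac > max_mem:
        return current + 1
    if current > 1 and queue_len < max_queue // 4 and mem_frac < max_mem / 2:
        return current - 1
    return current
```

In practice such a rule would run inside the monitoring loop of the self-management component, with thresholds tuned per service.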
1905.06625 | 2966820671 | In recent decades, it has become a significant tendency for industrial manufacturers to adopt decentralization as a new manufacturing paradigm. This enables more efficient operations and facilitates the shift from mass to customized production. At the same time, advances in data analytics give more insights into the production lines, thus improving its overall productivity. The primary objective of this paper is to apply a decentralized architecture to address new challenges in industrial analytics. The main contributions of this work are therefore two-fold: (1) an assessment of the microservices’ feasibility in industrial environments, and (2) a microservices-based architecture for industrial data analytics. Also, a prototype has been developed, analyzed, and evaluated, to provide further practical insights. Initial evaluation results of this prototype underpin the adoption of microservices in industrial analytics with less than 20ms end-to-end processing latency for predicting movement paths for 100 autonomous robots on a commodity hardware server. However, it also identifies several drawbacks of the approach, which is, among others, the complexity in structure, leading to higher resource consumption. | In @cite_6 , the authors propose a microservice-based reference architecture for the Enterprise Measurement Infrastructure (EMI). This architecture has six main components: Visualization (dashboard tools to interact with users), Calculation and Storage (provides aggregated information for the Visualization layer), Data Transport and Integration (builds a common communication infrastructure for all services), Data Provider (feeds data into the system), Data Adapter (converts received data into readable formats) and Operation (contains services that ease operating and monitoring the application).
This architecture is a basic design, as it is built to exploit the most prominent advantages of microservices: a high level of isolation between services, robustness against complete system failure, and support for the integration of heterogeneous systems. However, this architecture again does not address the scalability of a microservice-based application. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2035192272"
],
"abstract": [
"In our former work we proposed a micro service-based reference architecture for Enterprise Measurement Infrastructures (EMI) which received encouraging feedback. The reference architecture supports the systematic development of measurement systems. This paper provides deeper insight into the application of the reference architecture by presenting the results of two field studies after an examination of the most important requirements that drove the development of the reference architecture. The two selected field studies were conducted with large cooperation partners from industry and research and addressed real problems. Using our reference architecture, development process, and requirements gathering techniques we were able to successfully build the EMIs presented in this paper. These results further ease the application of micro service inside our reference architecture and support practitioners with specific examples."
]
} |
1905.06537 | 2946537100 | There are many factors affecting visual face recognition, such as low resolution images, aging, illumination and pose variance, etc. One of the most important problem is low resolution face images which can result in bad performance on face recognition. Most of the general face recognition algorithms usually assume a sufficient resolution for the face images. However, in practice many applications often do not have sufficient image resolutions. The modern face hallucination models demonstrate reasonable performance to reconstruct high-resolution images from its corresponding low resolution images. However, they do not consider identity level information during hallucination which directly affects results of the recognition of low resolution faces. To address this issue, we propose a Face Hallucination Generative Adversarial Network (FH-GAN) which improves the quality of low resolution face images and accurately recognize those low quality images. Concretely, we make the following contributions: 1) we propose FH-GAN network, an end-to-end system, that improves both face hallucination and face recognition simultaneously. The novelty of this proposed network depends on incorporating identity information in a GAN-based face hallucination algorithm via combining a face recognition network for identity preserving. 2) We also propose a new face hallucination network, namely Dense Sparse Network (DSNet), which improves upon the state-of-art in face hallucination. 3) We demonstrate benefits of training the face recognition and GAN-based DSNet jointly by reporting good result on face hallucination and recognition. | . Image SR methods can be applied to all kinds of images, as they do not incorporate face-specific information. Generally, face hallucination is a type of class-specific image SR. @cite_38 introduced bichannel convolutional networks to hallucinate face images in the wild.
@cite_6 introduced a two-step auto-encoder architecture to hallucinate unaligned, noisy low resolution face images. @cite_19 also introduced identity information recovery in their proposed method. @cite_8 proposed a GAN-based method to super-resolve very low resolution images without using perceptual loss. Except for @cite_19 , which does not use a GAN-based generator, the above-mentioned methods do not consider identity information in the hallucination process, which is vital for recognition and visual quality. In our method, we use a perceptual loss to achieve more realistic results and an identity loss that incorporates the face recognition model to constrain the identity space, building on an advanced GAN method. Our experiments demonstrate images of near-indistinguishable visual quality and improved performance on low resolution face recognition. | {
"cite_N": [
"@cite_19",
"@cite_38",
"@cite_6",
"@cite_8"
],
"mid": [
"2964167901",
"2201706299",
"2741976748",
"2520930090"
],
"abstract": [
"Face hallucination is a generative task to super-resolve the facial image with low resolution while human perception of face heavily relies on identity information. However, previous face hallucination approaches largely ignore facial identity recovery. This paper proposes Super-Identity Convolutional Neural Network (SICNN) to recover identity information for generating faces closed to the real identity. Specifically, we define a super-identity loss to measure the identity difference between a hallucinated face and its corresponding high-resolution face within the hypersphere identity metric space. However, directly using this loss will lead to a Dynamic Domain Divergence problem, which is caused by the large margin between the high-resolution domain and the hallucination domain. To overcome this challenge, we present a domain-integrated training approach by constructing a robust identity metric for faces from these two domains. Extensive experimental evaluations demonstrate that the proposed SICNN achieves superior visual quality over the state-of-the-art methods on a challenging task to super-resolve 12 ( ) 14 faces with an 8 ( ) upscaling factor. In addition, SICNN significantly improves the recognizability of ultra-low-resolution faces.",
"Face hallucination method is proposed to generate high-resolution images from low-resolution ones for better visualization. However, conventional hallucination methods are often designed for controlled settings and cannot handle varying conditions of pose, resolution degree, and blur. In this paper, we present a new method of face hallucination, which can consistently improve the resolution of face images even with large appearance variations. Our method is based on a novel network architecture called Bi-channel Convolutional Neural Network (Bi-channel CNN). It extracts robust face representations from raw input by using deep convolu-tional network, then adaptively integrates two channels of information (the raw input image and face representations) to predict the high-resolution image. Experimental results show our system outperforms the prior state-of-the-art methods.",
"Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8X super-resolve unaligned noisy and tiny (16X16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82dB PSNR.",
"Conventional face super-resolution methods, also known as face hallucination, are limited up to (2 ! ! 4 ) scaling factors where (4 16 ) additional pixels are estimated for each given pixel. Besides, they become very fragile when the input low-resolution image size is too small that only little information is available in the input image. To address these shortcomings, we present a discriminative generative network that can ultra-resolve a very low resolution face image of size (16 16 ) pixels to its (8 ) larger version by reconstructing 64 pixels from a single pixel. We introduce a pixel-wise ( _2 ) regularization term to the generative model and exploit the feedback of the discriminative network to make the upsampled face images more similar to real ones. In our framework, the discriminative network learns the essential constituent parts of the faces and the generative network blends these parts in the most accurate fashion to the input image. Since only frontal and ordinary aligned images are used in training, our method can ultra-resolve a wide range of very low-resolution images directly regardless of pose and facial expression variations. Our extensive experimental evaluations demonstrate that the presented ultra-resolution by discriminative generative networks (UR-DGN) achieves more appealing results than the state-of-the-art."
]
} |
1905.06537 | 2946537100 | There are many factors affecting visual face recognition, such as low resolution images, aging, illumination and pose variance, etc. One of the most important problem is low resolution face images which can result in bad performance on face recognition. Most of the general face recognition algorithms usually assume a sufficient resolution for the face images. However, in practice many applications often do not have sufficient image resolutions. The modern face hallucination models demonstrate reasonable performance to reconstruct high-resolution images from its corresponding low resolution images. However, they do not consider identity level information during hallucination which directly affects results of the recognition of low resolution faces. To address this issue, we propose a Face Hallucination Generative Adversarial Network (FH-GAN) which improves the quality of low resolution face images and accurately recognize those low quality images. Concretely, we make the following contributions: 1) we propose FH-GAN network, an end-to-end system, that improves both face hallucination and face recognition simultaneously. The novelty of this proposed network depends on incorporating identity information in a GAN-based face hallucination algorithm via combining a face recognition network for identity preserving. 2) We also propose a new face hallucination network, namely Dense Sparse Network (DSNet), which improves upon the state-of-art in face hallucination. 3) We demonstrate benefits of training the face recognition and GAN-based DSNet jointly by reporting good result on face hallucination and recognition. | This is one of the main motivations in our work. We employed the face recognition model of @cite_12 . ArcFace model provides excellent performance on face verification on high resolution images as shown in @cite_12 . 
In our paper, ArcFace is trained specifically to preserve the identity of low-resolution face images as well as to enhance face image quality during hallucination. As a result, one of our contributions is to demonstrate that a face recognition model, when incorporated and trained end-to-end with a super-resolution network, can still achieve high accuracy on low-resolution face images. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2784874046"
],
"abstract": [
"One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that enhance discriminative power. Centre loss penalises the distance between the deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in an angular space and penalises the angles between the deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to the exact correspondence to the geodesic distance on the hypersphere. We present arguably the most extensive experimental evaluation of all the recent state-of-the-art face recognition methods on over 10 face recognition benchmarks including a new large-scale image database with trillion level of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state-of-the-art and can be easily implemented with negligible computational overhead. We release all refined training data, training codes, pre-trained models and training logs, which will help reproduce the results in this paper."
]
} |
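ArcFace, cited in the row above, obtains its discriminative power from an additive angular margin applied to the target-class logit before the softmax. The sketch below shows that per-class logit under the standard formulation (scale s, margin m in radians); the helper name and default values are assumptions for illustration, not the authors' implementation.

```python
import math

def arcface_logit(cos_theta, margin=0.5, scale=64.0, target=True):
    """Scaled logit for one class under the additive angular margin:
    the target class gets s * cos(theta + m), every other class the
    plain s * cos(theta)."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))  # clamp for safety
    if target:
        theta += margin
    return scale * math.cos(theta)
```

Because cosine is decreasing on [0, pi], adding the margin always lowers the target-class logit, which pushes the learned features to cluster more tightly around their class centre.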
1905.06566 | 2945918281 | Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders devlin:2018:arxiv , we propose Hibert (as shorthand for HI erachical B idirectional E ncoder R epresentations from T ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained Hibert to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets. | In this section, we introduce work on extractive summarization, abstractive summarization and pre-trained natural language processing models. For a more comprehensive review of summarization, we refer the interested readers to and . Extractive summarization aims to select important sentences (sometimes other textual units such as elementary discourse units (EDUs)) from a document as its summary. It is usually modeled as a sentence ranking problem by using the scores from classifiers @cite_25 , sequential labeling models @cite_7 , as well as integer linear programming @cite_11 . Early work with these models mostly leverages human-engineered features such as sentence position and length @cite_0 , word frequency @cite_37 and event features @cite_2 .
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_0",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2140440594",
"2054211469",
"1602831581",
"1620608722",
"2101390659",
"2112077341"
],
"abstract": [
"The usual approach for automatic summarization is sentence extraction, where key sentences from the input documents are selected based on a suite of features. While word frequency often is used as a feature in summarization, its impact on system performance has not been isolated. In this paper, we study the contribution to summarization of three factors related to frequency: content word frequency, composition functions for estimating sentence importance from word frequency, and adjustment of frequency weights based on context. We carry out our analysis using datasets from the Document Understanding Conferences, studying not only the impact of these features on automatic summarizers, but also their role in human summarization. Our research shows that a frequency based summarizer can achieve performance comparable to that of state-of-the-art systems, but only with a good composition function; context sensitivity improves performance and significantly reduces repetition.",
"A sentence extract summary of a document is a subset of the document's sentences that contains the main ideas in the document. We present an approach to generating such summaries, a hidden Markov model that judges the likelihood that each sentence should be contained in the summary. We compare the results of this method with summaries generated by humans, showing that we obtain significantly higher agreement than do earlier methods.",
"Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"",
"●● ● ● ● To summarize is to reduce in complexity, and hence in length, while retaining some of the essential qualities of the original. This paper focusses on document extracts, a particular kind of computed document summary. Document extracts consisting of roughly 20 of the original cart be as informative as the full text of a document, which suggests that even shorter extracts may be useful indicative summmies. The trends in our results are in agreement with those of Edmundson who used a subjectively weighted combination of features as opposed to training the feature weights using a corpus.",
"In this paper we present a joint content selection and compression model for single-document summarization. The model operates over a phrase-based representation of the source document which we obtain by merging information from PCFG parse trees and dependency graphs. Using an integer linear programming formulation, the model learns to select and combine phrases subject to length, coverage and grammar constraints. We evaluate the approach on the task of generating \"story highlights\"---a small number of brief, self-contained sentences that allow readers to quickly gather information on news stories. Experimental results show that the model's output is comparable to human-written highlights in terms of both grammaticality and content."
]
} |
1905.06566 | 2945918281 | Neural extractive summarization models usually employ a hierarchical encoder for document encoding and they are trained using sentence-level labels, which are created heuristically using rule-based methods. Training the hierarchical encoder with these labels is challenging. Inspired by the recent work on pre-training transformer sentence encoders devlin:2018:arxiv , we propose Hibert (as shorthand for HI erachical B idirectional E ncoder R epresentations from T ransformers) for document encoding and a method to pre-train it using unlabeled data. We apply the pre-trained Hibert to our summarization model and it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN Dailymail dataset and by 2.0 ROUGE on a version of New York Times dataset. We also achieve the state-of-the-art performance on these two datasets. | Most model pre-training methods in NLP leverage the natural ordering of text. For example, word2vec uses the surrounding words within a fixed size window to predict the word in the middle with a log bilinear model. The resulting word embedding table can be used in other downstream tasks. There are other word embedding pre-training methods using similar techniques @cite_4 @cite_39 . and find even a sentence encoder (not just word embeddings) can also be pre-trained with language model objectives (i.e., predicting the next or previous word). Language model objective is unidirectional, while many tasks can leverage the context in both directions. Therefore, propose the naturally bidirectional masked language model objective (i.e., masking several words with a special token in a sentence and then predicting them). All the methods above aim to pre-train word embeddings or sentence encoders, while our method aims to pre-train the hierarchical document encoders (i.e., hierarchical transformers), which is important in summarization. | {
"cite_N": [
"@cite_4",
"@cite_39"
],
"mid": [
"2250539671",
"2493916176"
],
"abstract": [
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models to learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks."
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | Over the last decade, an intensive research effort has been devoted to developing theoretical quantitative results on stochastic homogenization. The multiscale representation of the homogenized coefficients forming the basis of the method is inspired by the renormalization'' approach to quantitative stochastic homogenization, as developed in @cite_47 @cite_38 @cite_60 @cite_15 ; see also @cite_62 for a gentle introduction to this line of research and @cite_36 for a monograph. A related approach based on the parabolic flow was put forward in @cite_37 , see also [Chapter 9] AKMbook , and will give us the most convenient statement for us to build upon here. A different approach based on concentration inequalities was put forward in @cite_42 @cite_7 @cite_27 @cite_0 @cite_55 @cite_56 , inspired by earlier insights from statistical mechanics @cite_33 @cite_10 . | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_62",
"@cite_33",
"@cite_7",
"@cite_15",
"@cite_60",
"@cite_36",
"@cite_55",
"@cite_42",
"@cite_56",
"@cite_0",
"@cite_27",
"@cite_47",
"@cite_10"
],
"mid": [
"1900856771",
"1889314772",
"2909269298",
"2093668041",
"2602284421",
"",
"",
"",
"2963707073",
"2040550846",
"",
"2085186189",
"2107101459",
"2964228251",
"1598405519"
],
"abstract": [
"We develop a higher regularity theory for general quasilinear elliptic equations and systems in divergence form with random coefficients. The main result is a large-scale L∞-type estimate for the gradient of a solution. The estimate is proved with optimal stochastic integrability under a one-parameter family of mixing assumptions, allowing for very weak mixing with non-integrable correlations to very strong mixing (for example finite range of dependence). We also prove a quenched L2 estimate for the error in homogenization of Dirichlet problems. The approach is based on subadditive arguments which rely on a variational formulation of general quasilinear divergence-form equations.",
"We consider uniformly elliptic coefficient fields that are randomly distributed according to a stationary ensemble of finite range of dependence. We show that the gradient ∇φ of the corrector φ, when spatially averaged over a scale R ≫ 1, decays like R^{-α} for any α < d/2. We establish these rates on the level of Gaussian bounds in terms of the stochastic integrability.",
"Divergence-form operators with random coefficients homogenize over large scales. Over the last decade, an intensive research effort focused on turning this asymptotic statement into quantitative estimates. The goal of this note is to review one approach for doing so based on the idea of renormalization. The discussion is highly informal, with pointers to mathematically precise statements.",
"We study the continuum scaling limit of some statistical mechanical models defined by convex Hamiltonians which are gradient perturbations of a massless free field. By proving a central limit theorem for these models, we show that their long distance behavior is identical to a new (homogenized) continuum massless free field. We shall also obtain some new bounds on the 2-point correlation functions of these models.",
"This is the second article of a series of papers on stochastic homogenization of discrete elliptic equations. We consider a discrete elliptic equation on the @math -dimensional lattice @math with random coefficients @math of the simplest type: They are identically distributed and independent from edge to edge. On scales large w. r. t. the lattice spacing (i. e. unity), the solution operator is known to behave like the solution operator of a (continuous) elliptic equation with constant deterministic coefficients. This symmetric ''homogenized'' matrix @math is characterized by @math for any direction @math , where the random field @math (the ''corrector'') is the unique solution of @math in @math such that @math , @math is stationary and @math , @math denoting the ensemble average (or expectation). In order to approximate the homogenized coefficients @math , the corrector problem is usually solved in a box @math of size @math with periodic boundary conditions, and the space averaged energy on @math defines an approximation @math of @math . Although the statistics is modified (independence is replaced by periodic correlations) and the ensemble average is replaced by a space average, the approximation @math converges almost surely to @math as @math . In this paper, we give estimates on both errors. To be more precise, we do not consider periodic boundary conditions on a box of size @math , but replace the elliptic operator by @math with (typically) @math , as standard in the homogenization literature. We then replace the ensemble average by a space average on @math , and estimate the overall error on the homogenized coefficients in terms of @math and @math .",
"",
"",
"",
"We derive optimal estimates in stochastic homogenization of linear elliptic equations in divergence form in dimensions @math . In previous works we studied the model problem of a discrete elliptic equation on @math . Under the assumption that a spectral gap estimate holds in probability, we proved that there exists a stationary corrector field in dimensions @math and that the energy density of that corrector behaves as if it had finite range of correlation in terms of the variance of spatial averages - the latter decays at the rate of the central limit theorem. In this article we extend these results, and several other estimates, to the case of a continuum linear elliptic equation whose (not necessarily symmetric) coefficient field satisfies a continuum version of the spectral gap estimate. In particular, our results cover the example of Poisson random inclusions.",
"We consider a discrete elliptic equation with random coefficients @math , which (to fix ideas) are identically distributed and independent from grid point to grid point @math . On scales large w. r. t. the grid size (i. e. unity), the solution operator is known to behave like the solution operator of a (continuous) elliptic equation with constant deterministic coefficients. These symmetric ''homogenized'' coefficients @math are characterized by @math where the random field @math is the unique stationary solution of the ''corrector problem'' @math and @math denotes the ensemble average. It is known (''by ergodicity'') that the above ensemble average of the energy density @math , which is a stationary random field, can be recovered by a system average. We quantify this by proving that the variance of a spatial average of @math on length scales @math is estimated as follows: @math where the averaging function (i. e. @math , @math ) has to be smooth in the sense that @math . In two space dimensions (i. e. @math ), there is a logarithmic correction. In other words, smooth averages of the energy density @math behave like as if @math would be independent from grid point to grid point (which it is not for @math ). This result is of practical significance, since it allows to estimate the error when numerically computing @math .",
"",
"We consider a random, uniformly elliptic coefficient field a(x) on the d-dimensional integer lattice Z^d. We are interested in the spatial decay of the quenched elliptic Green function G(a;x,y). Next to stationarity, we assume that the spatial correlation of the coefficient field decays sufficiently fast to the effect that a logarithmic Sobolev inequality holds for the ensemble. We prove that all stochastic moments of the first and second mixed derivatives of the Green function, that is, |∇_x G(x,y)|^p and |∇_x ∇_y G(x,y)|^p, have the same decay rates in |x-y| ≫ 1 as for the constant coefficient Green function, respectively. This result relies on and substantially extends the one by Delmotte and Deuschel (Probab Theory Relat Fields 133:358–390, 2005), which optimally controls second moments for the first derivatives and first moments of the second mixed derivatives of G, that is, |∇_x G(x,y)|^2 and |∇_x ∇_y G(x,y)|. As an application, we are able to obtain optimal estimates on the random part of the homogenization error even for large ellipticity contrast.",
"We study quantitatively the effective large-scale behavior of discrete elliptic equations on the lattice Z^d with random coefficients. The theory of stochastic homogenization relates the random, stationary, and ergodic field of coefficients with a deterministic matrix of effective coefficients. This is done via the corrector problem, which can be viewed as a highly degenerate elliptic equation on the infinite-dimensional space of admissible coefficient fields. In this contribution we develop new quantitative methods for the corrector problem based on the assumption that ergodicity holds in the quantitative form of a Spectral Gap Estimate w.r.t. a Glauber dynamics on coefficient fields—as it is the case for independent and identically distributed coefficients. As a main result we prove an optimal decay in time of the semigroup associated with the corrector problem (i.e. of the generator of the process called “random environment as seen from the particle”). As a corollary we recover existence of stationary correctors (in dimensions d > 2) and prove new optimal estimates for regularized versions of the corrector (in dimensions d ≤ 2). We also give a self-contained proof of a new estimate on the gradient of the parabolic, variable-coefficient Green’s function, which is a crucial analytic ingredient in our approach. As an application of these results, we prove the first (and optimal) estimates for the approximation of the homogenized coefficients by the popular periodization method in case of independent and identically distributed coefficients.",
"We present quantitative results for the homogenization of uniformly convex integral functionals with random coefficients under independence assumptions. The main result is an error estimate for the Dirichlet problem that is algebraic (but suboptimal) in the size of the error, yet optimal in stochastic integrability. As an application, we obtain C^{0,1} estimates for local minimizers of such energy functionals.",
""
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | It has been observed long ago that inappropriate boundary conditions for approximate cell problems'' can cause important resonant errors'', and initial attempts at bypassing the problem involved the notion of oversampling @cite_5 @cite_11 @cite_45 @cite_57 . A powerful approach has been studied in @cite_32 @cite_42 @cite_7 @cite_26 @cite_40 @cite_31 @cite_28 @cite_16 , based on the introduction of a small zero-order term in the equation. The method we propose here, by combining this idea with a multiscale decomposition, enables to take fuller advantage of this idea. We refer to @cite_39 for a detailed comparison between the single-scale and the multiscale approaches. As is shown in @cite_53 , the benefits of the multiscale approach can be seen even in the setting of periodic coefficient fields, if we operate under the constraint that the lattice of periods is unknown. | {
"cite_N": [
"@cite_26",
"@cite_7",
"@cite_28",
"@cite_53",
"@cite_42",
"@cite_32",
"@cite_39",
"@cite_57",
"@cite_40",
"@cite_45",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_11"
],
"mid": [
"2010942428",
"2602284421",
"2121999212",
"2951618924",
"2040550846",
"2008588630",
"2964167860",
"",
"",
"2124939526",
"2029918260",
"",
"2132741004",
""
],
"abstract": [
"We introduce and analyze a numerical strategy to approximate effective coefficients in stochastic homogenization of discrete elliptic equations. In particular, we consider the simplest case possible: An elliptic equation on the @math -dimensional lattice @math with independent and identically distributed conductivities on the associated edges. Recent results by Otto and the author quantify the error made by approximating the homogenized coefficient by the averaged energy of a regularized corrector (with parameter @math ) on some box of finite size @math . In this article, we replace the regularized corrector (which is the solution of a problem posed on @math ) by some practically computable proxy on some box of size @math , and quantify the associated additional error. In order to improve the convergence, one may also consider @math independent realizations of the computable proxy, and take the arithmetic mean of the associated approximate homogenized coefficients. A natural optimization problem consists in properly choosing @math and @math in order to reduce the error at given computational complexity. Our analysis is sharp and allows us to give a clear answer to this question. In particular, we propose and analyze a numerical algorithm to approximate the homogenized coefficients, taking advantage of the (nearly) optimal scalings of the errors we derive. The efficiency of the approach is illustrated by a numerical study in dimension 2.",
"This is the second article of a series of papers on stochastic homogenization of discrete elliptic equations. We consider a discrete elliptic equation on the @math -dimensional lattice @math with random coefficients @math of the simplest type: They are identically distributed and independent from edge to edge. On scales large w. r. t. the lattice spacing (i. e. unity), the solution operator is known to behave like the solution operator of a (continuous) elliptic equation with constant deterministic coefficients. This symmetric ''homogenized'' matrix @math is characterized by @math for any direction @math , where the random field @math (the ''corrector'') is the unique solution of @math in @math such that @math , @math is stationary and @math , @math denoting the ensemble average (or expectation). In order to approximate the homogenized coefficients @math , the corrector problem is usually solved in a box @math of size @math with periodic boundary conditions, and the space averaged energy on @math defines an approximation @math of @math . Although the statistics is modified (independence is replaced by periodic correlations) and the ensemble average is replaced by a space average, the approximation @math converges almost surely to @math as @math . In this paper, we give estimates on both errors. To be more precise, we do not consider periodic boundary conditions on a box of size @math , but replace the elliptic operator by @math with (typically) @math , as standard in the homogenization literature. We then replace the ensemble average by a space average on @math , and estimate the overall error on the homogenized coefficients in terms of @math and @math .",
"This article deals with the numerical approximation of effective coefficients in stochastic homogenization of discrete linear elliptic equations. The originality of this work is the use of a well-known abstract spectral representation formula to design and analyze effective and computable approximations of the homogenized coefficients. In particular, we show that information on the edge of the spectrum of the generator of the environment viewed by the particle projected on the local drift yields bounds on the approximation error, and conversely. Combined with results by Otto and the first author in low dimension, and results by the second author in high dimension, this allows us to prove that for any dimension d ≥ 2, there exists an explicit numerical strategy to approximate homogenized coefficients which converges at the rate of the central limit theorem.",
"Abstract This paper presents two new approaches for finding the homogenized coefficients of multiscale elliptic PDEs. Standard approaches for computing the homogenized coefficients suffer from the so-called resonance error, originating from a mismatch between the true and the computational boundary conditions. Our new methods, based on solutions of parabolic and elliptic cell problems, result in an exponential decay of the resonance error.",
"We consider a discrete elliptic equation with random coefficients @math , which (to fix ideas) are identically distributed and independent from grid point to grid point @math . On scales large w. r. t. the grid size (i. e. unity), the solution operator is known to behave like the solution operator of a (continuous) elliptic equation with constant deterministic coefficients. These symmetric ''homogenized'' coefficients @math are characterized by @math where the random field @math is the unique stationary solution of the ''corrector problem'' @math and @math denotes the ensemble average. It is known (''by ergodicity'') that the above ensemble average of the energy density @math , which is a stationary random field, can be recovered by a system average. We quantify this by proving that the variance of a spatial average of @math on length scales @math is estimated as follows: @math where the averaging function (i. e. @math , @math ) has to be smooth in the sense that @math . In two space dimensions (i. e. @math ), there is a logarithmic correction. In other words, smooth averages of the energy density @math behave like as if @math would be independent from grid point to grid point (which it is not for @math ). This result is of practical significance, since it allows to estimate the error when numerically computing @math .",
"In quasi-periodic or nonlinear periodic homogenization, the corrector problem must be in general set on the whole space. Numerically computing the homogenization coefficient therefore implies a truncation error, due to the fact that the problem is approximated on a bounded, large domain. We present here an approach that improves the rate of convergence of this approximation.",
"The main goal of this paper is to define and study new methods for the computation of effective coefficients in the homogenization of divergence-form operators with random coefficients. The methods introduced here are proved to have optimal computational complexity and are shown numerically to display small constant prefactors. In the spirit of multiscale methods, the main idea is to rely on a progressive coarsening of the problem, which we implement via a generalization of the Green–Kubo formula. The technique can be applied more generally to compute the effective diffusivity of any additive functional of a Markov process. In this broader context, we also discuss the alternative possibility of using Monte Carlo sampling and show how a simple one-step extrapolation can considerably improve the performance of this alternative method.",
"",
"",
"Many multiscale methods are based on the idea of extracting macroscopic behavior of solutions by solving an array of microscale models over small domains. A key ingredient in such multiscale methods is the boundary condition and the size of the computational domain over which the microscale problems are solved. This problem is systematically investigated in the present paper in the context of modeling strongly heterogeneous media. Three different boundary conditions are considered: the periodic boundary condition, Dirichlet boundary condition, and the Neumann boundary condition. Each is applied to several benchmark problems: the random checker-board problem, periodic problem with isotropic macroscale behavior, periodic problem with anisotropic macroscale behavior and periodic laminated media. In each case, convergence studies are conducted as the domain size for the microscale problem is changed. Convergence rates as well as the size of fluctuations in the computed effective coefficients are compared for the different formulations. In addition, we will discuss a mixed Dirichlet-Neumann boundary condition that is often used in porous medium modeling. We explain why that leads to unsatisfactory results and how it can be corrected. Also discussed are the different averaging methods used in extracting the effective coefficients.",
"In this paper, we study a multiscale finite element method for solving a class of elliptic problems arising from composite materials and flows in porous media, which contain many spatial scales. The method is designed to efficiently capture the large scale behavior of the solution without resolving all the small scale features. This is accomplished by constructing the multiscale finite element base functions that are adaptive to the local property of the differential operator. Our method is applicable to general multiple-scale problems without restrictive assumptions. The construction of the base functions is fully decoupled from element to element; thus, the method is perfectly parallel and is naturally adapted to massively parallel computers. For the same reason, the method has the ability to handle extremely large degrees of freedom due to highly heterogeneous media, which are intractable by conventional finite element (difference) methods. In contrast to some empirical numerical upscaling methods, the multiscale method is systematic and self- consistent, which makes it easier to analyze. We give a brief analysis of the method, with emphasis on the “resonant sampling” effect. Then, we propose an oversampling technique to remove the resonance effect. We demonstrate the accuracy and efficiency of our method through extensive numerical experiments, which include problems with random coefficients and problems with continuous scales. Parallel implementation and performance of the method are also addressed.",
"",
"This article is concerned with numerical methods to approximate effective coefficients in stochastic homogenization of discrete linear elliptic equations, and their numerical analysis --- which has been made possible by recent contributions on quantitative stochastic homogenization theory by two of us and by Otto. This article makes the connection between our theoretical results and computations. We give a complete picture of the numerical methods found in the literature, compare them in terms of known (or expected) convergence rates, and study them numerically. Two types of methods are presented: methods based on the corrector equation, and methods based on random walks in random environments. The numerical study confirms the sharpness of the analysis (which it completes by making precise the prefactors, next to the convergence rates), supports some of our conjectures, and calls for new theoretical developments.",
""
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | One alternative method for computing homogenized coefficients, based on the idea of an embedded corrector problem'', is proposed in @cite_12 @cite_3 . Well-separated spherical inclusions are considered in the numerical examples. This allows for fairly different approaches to practical calculations than what is pursued in the present paper (and also produces solutions that are more regular than in our examples with corner discontinuities). | {
"cite_N": [
"@cite_3",
"@cite_12"
],
"mid": [
"2896408894",
"2884768316"
],
"abstract": [
"This contribution is the numerically oriented companion article of the work [E. Cances, V. Ehrlacher, F. Legoll, B. Stamm and S. Xiang, arxiv preprint 1807.05131]. We focus here on the numerical resolution of the embedded corrector problem introduced in [E. Cances, V. Ehrlacher, F. Legoll and B. Stamm, CRAS 2015; E. Cances, V. Ehrlacher, F. Legoll, B. Stamm and S. Xiang, arxiv preprint 1807.05131] in the context of homogenization of diffusion equations. Our approach consists in considering a corrector-type problem, posed on the whole space, but with a diffusion matrix which is constant outside some bounded domain. In [E. Cances, V. Ehrlacher, F. Legoll, B. Stamm and S. Xiang, arxiv preprint 1807.05131], we have shown how to define three approximate homogenized diffusion coefficients on the basis of the embedded corrector problems. We have also proved that these approximations all converge to the exact homogenized coefficients when the size of the bounded domain increases. We show here that, under the assumption that the diffusion matrix is piecewise constant, the corrector problem to solve can be recast as an integral equation. In case of spherical inclusions with isotropic materials, we explain how to efficiently discretize this integral equation using spherical harmonics, and how to use the fast multipole method (FMM) to compute the resulting matrix-vector products at a cost which scales only linearly with respect to the number of inclusions. Numerical tests illustrate the performance of our approach in various settings.",
"This article is the first part of a two-fold study, the objective of which is the theoretical analysis and numerical investigation of new approximate corrector problems in the context of stochastic homogenization. We present here three new alternatives for the approximation of the homogenized matrix for diffusion problems with highly-oscillatory coefficients. These different approximations all rely on the use of an embedded corrector problem (that we previously introduced in [Cances, Ehrlacher, Legoll and Stamm, C. R. Acad. Sci. Paris, 2015]), where a finite-size domain made of the highly oscillatory material is embedded in a homogeneous infinite medium whose diffusion coefficients have to be appropriately determined. The motivation for considering such embedded corrector problems is made clear in the companion article [Cances, Ehrlacher, Legoll, Stamm and Xiang, in preparation], where a very efficient algorithm is presented for the resolution of such problems for particular heterogeneous materials. In the present article, we prove that the three different approximations we introduce converge to the homogenized matrix of the medium when the size of the embedded domain goes to infinity."
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | Several techniques have been explored to reduce the size of the fluctuations of estimators for the homogenized matrix. In particular, control variate techniques and the selection of special realizations of the coefficient field, called quasi-random structures'', have been explored, see @cite_34 @cite_9 for surveys. The latter approach, inspired by @cite_17 @cite_54 and, in the context of the homogenization of elliptic operators, advocated for in @cite_19 , has recently received a spectacular theoretical foundation in @cite_25 . We would find it very interesting to investigate how these techniques can be combined with those discussed in the present paper. | {
"cite_N": [
"@cite_54",
"@cite_9",
"@cite_19",
"@cite_34",
"@cite_25",
"@cite_17"
],
"mid": [
"1980370048",
"2535698354",
"2963487200",
"2132500562",
"2811077605",
"2082675789"
],
"abstract": [
"Structural models used in calculations of properties of substitutionally random A_{1-x}B_x alloys are usually constructed by randomly occupying each of the N sites of a periodic cell by A or B. We show that it is possible to design ''special quasirandom structures'' (SQS's) that mimic for small N (even N = 8) the first few, physically most relevant radial correlation functions of a perfectly random structure far better than the standard technique does. We demonstrate the usefulness of these SQS's by calculating optical and thermodynamic properties of a number of semiconductor alloys in the local-density formalism.",
"We overview a series of recent works addressing numerical simulations of partial differential equations in the presence of some elements of randomness. The specific equations manipulated are linear elliptic, and arise in the context of multiscale problems, but the purpose is more general. On a set of prototypical situations, we investigate two critical issues present in many settings: variance reduction techniques to obtain sufficiently accurate results at a limited computational cost when solving PDEs with random coefficients, and finite element techniques that are sufficiently flexible to carry over to geometries with random fluctuations. Some elements of theoretical analysis and numerical analysis are briefly mentioned. Numerical experiments, although simple, provide convincing evidence of the efficiency of the approaches.",
"We adapt and study a variance reduction approach for the homogenization of elliptic equations in divergence form. The approach, borrowed from atomistic simulations and solid-state science [23], [24], [25], consists in selecting random realizations that best satisfy some statistical properties (such as the volume fraction of each phase in a composite material) usually only obtained asymptotically. We study the approach theoretically in some simplified settings (one-dimensional setting, perturbative setting in higher dimensions), and numerically demonstrate its efficiency in more general cases.",
"We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here.",
"The effective large-scale properties of materials with random heterogeneities on a small scale are typically determined by the method of representative volumes: A sample of the random material is chosen - the representative volume - and its effective properties are computed by the cell formula. Intuitively, for a fixed sample size it should be possible to increase the accuracy of the method by choosing a material sample which captures the statistical properties of the material particularly well: For example, for a composite material consisting of two constituents, one would select a representative volume in which the volume fraction of the constituents matches closely with their volume fraction in the overall material. Inspired by similar attempts in material science, Le Bris, Legoll, and Minvielle have designed a selection approach for representative volumes which performs remarkably well in numerical examples of linear materials with moderate contrast. In the present work, we provide a rigorous analysis of this selection approach for representative volumes in the context of stochastic homogenization of linear elliptic equations. In particular, we prove that the method essentially never performs worse than a random selection of the material sample and may perform much better if the selection criterion for the material samples is chosen suitably.",
"Structural models needed in calculations of properties of substitutionally random A ] B alloys are usually constructed by randomly occupying each of the X sites of a periodic cell by 3 or B. We show that it is possible to design \"special quasirandom structures\" (SQS's) that mimic for small N (even =8) the first few, physically most relevant radial correlation functions of an infinite, perfectly random structure far better than the standard technique does. These SQS's are shown to be short-period superlattices of 4-16 atoms ce11 whose layers are stacked in rather nonstandard orientations (e.g. , [113],[331],and [115]). Since these SQS's mimic well the local atomic structure of the random alloy, their electronic properties, calculable via first-principles techniques, provide a representation of the electronic structure of the alloy. We demonstrate the usefulness of these SQS's by applying them to semiconductor alloys. We calculate their electronic structure, total energy, and equilibrium geometry, and compare the results to experimental data."
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows one to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | In a different direction, several works have considered the question of designing and effectively computing certain expansions of the homogenized matrix, in situations where the random medium can be seen as a small perturbation of a reference medium. The most typical scenario is that of a homogeneous medium with a small density of inclusions @cite_8 @cite_20 . We refer to @cite_22 @cite_6 @cite_23 @cite_29 @cite_49 @cite_14 @cite_50 @cite_51 @cite_46 @cite_52 for works in this area. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_8",
"@cite_29",
"@cite_52",
"@cite_6",
"@cite_23",
"@cite_49",
"@cite_50",
"@cite_46",
"@cite_51",
"@cite_20"
],
"mid": [
"2088424242",
"2003713930",
"",
"",
"2963583757",
"",
"",
"2963794691",
"2025646382",
"",
"",
"2152318224"
],
"abstract": [
"We consider a large number of randomly dispersed spherical, identical, inclusions in a bounded domain, with conductivity different than that of the host medium. In the dilute limit, with some mild assumptions on the first few marginal probability densities (no periodicity or stationarity is assumed), we prove convergence in the H 1 norm of the expectation of the solution of the steady state heat equation to the solution of an effective medium problem, where the conductivity is given by the Clausius–Mossotti formula. Error estimates are provided as well.",
"CONTENTS Introduction ??1. Asymptotic expansion of Laplace's variational integrals ??2. Computation of dispersive media ??3. Extremal property of the hexagonal distribution of discs ??4. Random chess structure ??5. Asymptotic expansion of the effective conductivity for a small concentration of the non-conducting cellsConcluding remarks References",
"",
"",
"We consider a large number of randomly dispersed spherical, identical, perfectly conducting inclusions (of infinite conductivity) in a bounded domain. The host medium's conductivity is finite and can be inhomogeneous. In the dilute limit, with some boundedness assumption on a large number (proportional to the global volume fraction raised to the power of @math ) of marginal probability densities, we prove convergence in @math norm of the expectation of the solution of the steady state heat equation, to the solution of an effective medium problem, where the conductivity is given by the Clausius--Mossotti formula. Error estimates are provided as well.",
"",
"",
"This work is a follow-up to our previous work \"A numerical approach related to defect-type theories for some weakly random problems in homogenization\" (preprint available on this archive). It extends and complements, both theoretically and experimentally, the results presented there. Under consideration is the homogenization of a model of a weakly random heterogeneous material. The material consists of a reference periodic material randomly perturbed by another periodic material, so that its homogenized behavior is close to that of the reference material. We consider laws for the random perturbations more general than in our previous work cited above. We prove the validity of an asymptotic expansion in a certain class of settings. We also extend the formal approach introduced in our former work. Our perturbative approach shares common features with a defect-type theory of solid state physics. The computational efficiency of the approach is demonstrated.",
"We consider a medium composed of randomly dispersed spherical, identical inclusions in a bounded domain, with conductivity different than that of the host medium. We study the limit where the number inclusions tends to infinity but their volume fraction remains fixed. For small volume fractions, we prove convergence, in the @math norm ( @math ), of the expectation of the solution of the steady state heat equation to the solution of an effective medium problem, where the conductivity is given by the Clausius--Mossotti formula. This improves a previous result which required that the volume fraction tend to zero as the inclusion number goes to infinity.",
"",
"",
""
]
} |
1905.06751 | 2945610077 | We present an efficient method for the computation of homogenized coefficients of divergence-form operators with random coefficients. The approach is based on a multiscale representation of the homogenized coefficients. We then implement the method numerically using a finite-element method with hierarchical hybrid grids, which is a semi-implicit method allowing for significant gains in memory usage and execution time. Finally, we demonstrate the efficiency of our approach on two- and three-dimensional examples, for piecewise-constant coefficients with corner discontinuities. For moderate ellipticity contrast and for a precision of a few percentage points, our method allows one to compute the homogenized coefficients on a laptop computer in a few seconds, in two dimensions, or in a few minutes, in three dimensions. | To conclude this introduction, we mention that the homogenized matrix can also be of use as part of a modified scheme of multigrid type for computing solutions of elliptic equations with rapidly oscillating coefficients. In short, the idea is to use the homogenized operator when operating on the coarser grids @cite_39 . | {
"cite_N": [
"@cite_39"
],
"mid": [
"2964167860"
],
"abstract": [
"The main goal of this paper is to define and study new methods for the computation of effective coefficients in the homogenization of divergence-form operators with random coefficients. The methods introduced here are proved to have optimal computational complexity and are shown numerically to display small constant prefactors. In the spirit of multiscale methods, the main idea is to rely on a progressive coarsening of the problem, which we implement via a generalization of the Green–Kubo formula. The technique can be applied more generally to compute the effective diffusivity of any additive functional of a Markov process. In this broader context, we also discuss the alternative possibility of using Monte Carlo sampling and show how a simple one-step extrapolation can considerably improve the performance of this alternative method."
]
} |