| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1811.07497
|
2901051291
|
The geolocation of online information is an essential component in any geospatial application. While most of the previous work on geolocation has focused on Twitter, in this paper we quantify and compare the performance of text-based geolocation methods on social media data drawn from both Blogger and Twitter. We introduce a novel set of location-specific features that are both highly informative and easily interpretable, and show that we can achieve error rate reductions of up to 12.5% with respect to the best previously proposed geolocation features. We also show that despite posting longer texts, Blogger users are significantly harder to geolocate than Twitter users. Additionally, we investigate the effect of training and testing on different media (cross-media predictions), and of combining multiple social media sources (multi-media predictions). Finally, we explore the geolocability of social media in relation to three user dimensions: state, gender, and industry.
|
While one of the earliest content-based geolocation studies sought to determine geographical focus based on the toponyms mentioned in blogs @cite_5 , most of the subsequent work has focused on Twitter datasets @cite_24 @cite_20 .
|
{
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_20"
],
"mid": [
"",
"2111185764",
"2142889507"
],
"abstract": [
"",
"Mashups showing the geographic location of the authors of social media content are popular. They generally depend on the authors reporting their own location. For blogs, automated geolocation strategies using IP address and domain name are not adequate for determining an author’s location. Instead, we detail textual geolocation techniques suitable for tagging social media data, facilitating development of geographic mashups and spatial reasoning tools.",
"The rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation. In this paper, we present a multi-level generative model that reasons jointly about latent topics and geographical regions. High-level topics such as \"sports\" or \"entertainment\" are rendered differently in each geographic region, revealing topic-specific regional distinctions. Applied to a new dataset of geotagged microblogs, our model recovers coherent topics and their regional variants, while identifying geographic areas of linguistic consistency. The model also enables prediction of an author's geographic location from raw text, outperforming both text regression and supervised topic models."
]
}
|
1811.07497
|
2901051291
|
The geolocation of online information is an essential component in any geospatial application. While most of the previous work on geolocation has focused on Twitter, in this paper we quantify and compare the performance of text-based geolocation methods on social media data drawn from both Blogger and Twitter. We introduce a novel set of location-specific features that are both highly informative and easily interpretable, and show that we can achieve error rate reductions of up to 12.5% with respect to the best previously proposed geolocation features. We also show that despite posting longer texts, Blogger users are significantly harder to geolocate than Twitter users. Additionally, we investigate the effect of training and testing on different media (cross-media predictions), and of combining multiple social media sources (multi-media predictions). Finally, we explore the geolocability of social media in relation to three user dimensions: state, gender, and industry.
|
On other types of social media, Popescu and Grefenstette @cite_39 analyze the tags on Flickr photos to infer users' location and gender, and Wing and Baldridge @cite_31 use data from Twitter, Wikipedia, and Flickr to create a model based on logistic regression and geotag text at grid granularity, similarly to @cite_40 . Finally, @cite_33 combine the network- and text-based methods into a hybrid approach that uses logistic regression and label propagation; they measure the 100-mile accuracy, mean, and median error on three different Twitter datasets. Similar hybrid approaches leverage Graph Convolutional Networks @cite_18 and Gaussian mixture models @cite_37 to further increase geolocation performance.
|
{
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_33",
"@cite_39",
"@cite_40",
"@cite_31"
],
"mid": [
"2798819286",
"2793483901",
"2253640982",
"",
"2137435333",
""
],
"abstract": [
"Social media user geolocation is vital to many applications such as event detection. In this paper, we propose GCN, a multiview geolocation model based on Graph Convolutional Networks, that uses both text and network context. We compare GCN to the state-of-the-art, and to two baselines we propose, and show that our model achieves or is competitive with the state-of-the-art over three benchmark geolocation datasets when sufficient supervision is available. We also evaluate GCN under a minimal supervision scenario, and show it outperforms baselines. We find that highway network gates are essential for controlling the amount of useful neighbourhood expansion in GCN.",
"Geotagging Twitter messages is an important tool for event detection and enrichment. Despite the availability of both social media content and user network information, these two features are generally utilized separately in the methodology. In this article, we create a hybrid method that uses Twitter content and network information jointly as model features. We use Gaussian mixture models to map the raw spatial distribution of the model features to a predicted field. This approach is scalable to large datasets and provides a natural representation of model confidence. Our method is tested against other approaches and we achieve greater prediction accuracy. The model also improves both precision and coverage.",
"Research on automatically geolocating social media users has conventionally been based on the text content of posts from a given user or the social network of the user, with very little crossover between the two, and no benchmarking of the two approaches over comparable datasets. We bring the two threads of research together in first proposing a text-based method based on adaptive grids, followed by a hybrid network- and text-based method. Evaluating over three Twitter datasets, we show that the empirical difference between text- and network-based methods is not great, and that hybridisation of the two is superior to the component methods, especially in contexts where the user graph is not well connected. We achieve state-of-the-art results on all three datasets.",
"",
"The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. The two grid constructions can also be combined to produce consistently strong results across all training sets.",
""
]
}
|
1811.07497
|
2901051291
|
The geolocation of online information is an essential component in any geospatial application. While most of the previous work on geolocation has focused on Twitter, in this paper we quantify and compare the performance of text-based geolocation methods on social media data drawn from both Blogger and Twitter. We introduce a novel set of location-specific features that are both highly informative and easily interpretable, and show that we can achieve error rate reductions of up to 12.5% with respect to the best previously proposed geolocation features. We also show that despite posting longer texts, Blogger users are significantly harder to geolocate than Twitter users. Additionally, we investigate the effect of training and testing on different media (cross-media predictions), and of combining multiple social media sources (multi-media predictions). Finally, we explore the geolocability of social media in relation to three user dimensions: state, gender, and industry.
|
Previous work has verified that simple generative models with appropriate feature engineering can indeed outperform more sophisticated methods @cite_35 @cite_10 , including deep learning @cite_19 . Some of the most recent deep learning attempts yield promising performance @cite_0 @cite_44 , but their results are inherently harder to analyze and interpret, more expensive to obtain (computationally, in time, and in tuning the architecture), and neural networks are extremely data hungry (customarily requiring millions of examples). This makes them less attractive in qualitative studies, especially when targeting social media platforms that are less prevalent than Twitter and where data is not as abundant. In this paper, we adopt this guideline and introduce new feature weighting and selection methods that improve both the accuracy and effectiveness of geolocation algorithms. Furthermore, unlike most previous research, we target both a blogging and a microblogging platform, and examine individual, mixed, and cross-media geolocation performance.
|
{
"cite_N": [
"@cite_35",
"@cite_44",
"@cite_19",
"@cite_0",
"@cite_10"
],
"mid": [
"",
"2782843968",
"2251535681",
"2748160576",
"2142191319"
],
"abstract": [
"",
"Inferring the location of a user has been a valuable step for many applications that leverage social media, such as marketing, security monitoring and recommendation systems. Motivated by the recent success of Deep Learning techniques for many other tasks such as computer vision, speech recognition, and natural language processing, we study the application of neural networks to the problem of geolocation prediction and experiment with multiple techniques to improve neural networks for geolocation inference based solely on text. Experimental results on three Twitter datasets suggest that choosing appropriate network architecture, activation function, and performing Batch Normalization, can all increase performance on this task.",
"Only very few users disclose their physical locations, which may be valuable and useful in applications such as marketing and security monitoring; in order to automatically detect their locations, many approaches have been proposed using various types of information, including the tweets posted by the users. It is not easy to infer the original locations from textual data, because text tends to be noisy, particularly in social media. Recently, deep learning techniques have been shown to reduce the error rate of many machine learning tasks, due to their ability to learn meaningful representations of input data. We investigate the potential of building a deep-learning architecture to infer the location of Twitter users based merely on their tweets. We find that stacked denoising auto-encoders are well suited for this task, with results comparable to state-of-the-art models.",
"We propose a method for embedding two-dimensional locations in a continuous vector space using a neural network-based model incorporating mixtures of Gaussian distributions, presenting two model variants for text-based geolocation and lexical dialectology. Evaluated over Twitter data, the proposed model outperforms conventional regression-based geolocation and provides a better estimate of uncertainty. We also show the effectiveness of the representation for predicting words from location in lexical dialectology, and evaluate it using the DARE dataset.",
"Geographical location is vital to geospatial applications like local search and event detection. In this paper, we investigate and improve on the task of text-based geolocation prediction of Twitter users. Previous studies on this topic have typically assumed that geographical references (e.g., gazetteer terms, dialectal words) in a text are indicative of its author's location. However, these references are often buried in informal, ungrammatical, and multilingual data, and are therefore non-trivial to identify and exploit. We present an integrated geolocation prediction framework and investigate what factors impact on prediction accuracy. First, we evaluate a range of feature selection methods to obtain \"location indicative words\". We then evaluate the impact of nongeotagged tweets, language, and user-declared metadata on geolocation prediction. In addition, we evaluate the impact of temporal variance on model generalisation, and discuss how users differ in terms of their geolocatability. We achieve state-of-the-art results for the text-based Twitter user geolocation task, and also provide the most extensive exploration of the task to date. Our findings provide valuable insights into the design of robust, practical text-based geolocation prediction systems."
]
}
|
1811.07489
|
2901362140
|
Generalizing manipulation skills to new situations requires extracting invariant patterns from demonstrations. For example, the robot needs to understand the demonstrations at a higher level while being invariant to the appearance of the objects, geometric aspects of the objects such as their position, size, and orientation, and the viewpoint of the observer in the demonstrations. In this paper, we propose an algorithm that learns a joint probability density function of the demonstrations with invariant formulations of hidden semi-Markov models to extract invariant segments (also termed sub-goals or options), and smoothly follows the generated sequence of states with a linear quadratic tracking controller. The algorithm takes as input the demonstrations with respect to different coordinate systems describing virtual landmarks or objects of interest with a task-parameterized formulation, and adapts the segments according to environmental changes in a systematic manner. We present variants of this algorithm in latent space with low-rank covariance decompositions, semi-tied covariances, and non-parametric online estimation of model parameters under small variance asymptotics, yielding considerably lower sample and model complexity for acquiring new manipulation skills. The algorithm allows a Baxter robot to learn a pick-and-place task while avoiding a movable obstacle based on only 4 kinesthetic demonstrations.
|
A number of variants of HMMs have been proposed to address some of their shortcomings, including: 1) how to bias learning towards models with longer self-dwelling states, 2) how to robustly estimate the parameters with high-dimensional noisy data, 3) how to adapt the model with newly observed data, and 4) how to estimate the number of states that the model should possess. For example, @cite_31 used HMMs to incrementally group whole-body motions based on their relative distance in HMM space. @cite_8 presented an iterative motion primitive refinement approach with HMMs. @cite_24 used the Beta Process Autoregressive HMM for learning from unstructured demonstrations. @cite_12 used a transformation-invariant covariance matrix for encoding tasks with a Bayesian non-parametric HMM.
|
{
"cite_N": [
"@cite_24",
"@cite_31",
"@cite_12",
"@cite_8"
],
"mid": [
"2022760091",
"2043152589",
"2765784210",
"2045080324"
],
"abstract": [
"We present a novel method for segmenting demonstrations, recognizing repeated skills, and generalizing complex tasks from unstructured demonstrations. This method combines many of the advantages of recent automatic segmentation methods for learning from demonstration into a single principled, integrated framework. Specifically, we use the Beta Process Autoregressive Hidden Markov Model and Dynamic Movement Primitives to learn and generalize a multi-step task on the PR2 mobile manipulator and to demonstrate the potential of our framework to learn a large library of skills over time.",
"This paper describes a novel approach for autonomous and incremental learning of motion pattern primitives by observation of human motion. Human motion patterns are abstracted into a dynamic stochastic model, which can be used for both subsequent motion recognition and generation, analogous to the mirror neuron hypothesis in primates. The model size is adaptable based on the discrimination requirements in the associated region of the current knowledge base. A new algorithm for sequentially training the Markov chains is developed, to reduce the computation cost during model adaptation. As new motion patterns are observed, they are incrementally grouped together using hierarchical agglomerative clustering based on their relative distance in the model space. The clustering algorithm forms a tree structure, with specialized motions at the tree leaves, and generalized motions closer to the root. The generated tree structure will depend on the type of training data provided, so that the most specialized motions will be those for which the most training has been received. Tests with motion capture data for a variety of motion primitives demonstrate the efficacy of the algorithm.",
"In this work, we tackle the problem of transform-invariant unsupervised learning in the space of Covariance matrices and applications thereof. We begin by introducing the Spectral Polytope Covariance Matrix (SPCM) Similarity function; a similarity function for Covariance matrices, invariant to any type of transformation. We then derive the SPCM-CRP mixture model, a transform-invariant non-parametric clustering approach for Covariance matrices that leverages the proposed similarity function, spectral embedding and the distance-dependent Chinese Restaurant Process (dd-CRP) (Blei and Frazier, 2011). The scalability and applicability of these two contributions is extensively validated on real-world Covariance matrix datasets from diverse research fields. Finally, we couple the SPCM-CRP mixture model with the Bayesian non-parametric Indian Buffet Process (IBP) - Hidden Markov Model (HMM) (, 2009), to jointly segment and discover transform-invariant action primitives from complex sequential data. Resulting in a topic-modeling inspired hierarchical model for unsupervised time-series data analysis which we call ICSC-HMM (IBP Coupled SPCM-CRP Hidden Markov Model). The ICSC-HMM is validated on kinesthetic demonstrations of uni-manual and bi-manual cooking tasks; achieving unsupervised human-level decomposition of complex sequential tasks.",
"We present an approach for kinesthetic teaching of motion primitives for a humanoid robot. The proposed teaching method allows for iterative execution and motion refinement using a forgetting factor. During the iterative motion refinement, a confidence value specifies an area of allowed refinement around the nominal trajectory. A novel method for continuous generation of motions from a hidden Markov model (HMM) representation of motion primitives is proposed, which incorporates relative time information for each state. On the real-time control level, the kinesthetic teaching is handled by a customized impedance controller, which combines tracking performance with soft physical interaction and allows to implement soft boundaries for the motion refinement. The proposed methods were implemented and tested using DLR's humanoid upper-body robot Justin."
]
}
|
1811.07223
|
2900592995
|
Anonymity forms an integral and important part of our digital life. It enables us to express our true selves without the fear of judgment. In this paper, we investigate the different aspects of anonymity in the social Q&A site Quora. The choice of Quora is motivated by the fact that this is one of the rare social Q&A sites that allow users to explicitly post anonymous questions, and such activity in this forum has become normative rather than a taboo. Through an analysis of 5.1 million questions, we observe that at a global scale almost no difference manifests between the linguistic structure of the anonymous and the non-anonymous questions. We find topical mixing at the global scale to be the primary reason for this absence. However, the differences start to surface once we "deep dive" and (topically) cluster the questions and compare the clusters that have high volumes of anonymous questions with those that have low volumes of anonymous questions. In particular, we observe that the choice to post a question anonymously depends on the user's perception of anonymity, and users often choose to speak about depression, anxiety, social ties, and personal issues under the guise of anonymity. We further perform personality trait analysis and observe that the anonymous group of users correlates positively with extraversion and agreeableness, and negatively with openness. Subsequently, to gain further insights, we build an anonymity grid to identify the differences in the perception of anonymity between the user posting the question and the community of users answering it. We also look into the first response time of the questions and observe that it is lowest for topics that talk about personal and sensitive issues, which hints toward a higher degree of community support and user engagement.
|
There are several studies focusing on the negative aspects of anonymity, such as cyberbullying @cite_9 , aggressive behaviour @cite_8 , encouraging suicidal individuals to follow through with their threats @cite_14 , hate sites @cite_18 , and many more.
|
{
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_18",
"@cite_8"
],
"mid": [
"2003927191",
"1993581568",
"2033615083",
"2082747743"
],
"abstract": [
"Cyberbullying, as a serious kind of repeated, intentional, and harmful aggressive behavior, cannot be ignored. In light of the limited studies and inconsistent findings on the matter, this study explores cyberbullying's frequency and other factors (gender, academic achievement, types of technologies used, and anonymity) relevant to both the issue itself and the East Asian context. The interrelationship of different roles (bullies, victims, and bystanders) in cyberbullying is also examined. A survey was conducted with 545 Taiwan junior high school students. The results indicate that male students were more likely to bully others in cyberspace and that cyberbullying was not affected by one's level of academic achievement. Regarding the various technologies and various country-specific cyberbullying forms pertinent to technology users, instant messenger (IM) users experienced significantly more cyberbullying than users of other technologies. The survey results also indicate that the anonymity of cyberbullying was not a pertinent factor. The study found that the dominant attitude toward cyberbullying was indifference, raising alarms about the lack of cyberbullying prevention. Peers, who were the people most teenagers would likely turn to when experiencing cyberbullying, usually took no action because of their tendency to avoid conflicts and to maintain group harmony. In its interpretation of the findings, this study emphasizes Taiwan's context, including Confucian philosophy.",
"This study examines 21 cases in which crowds were present when a disturbed person threatened to jump off a building, bridge, or tower. Baiting or jeering occurred in 10 of the cases. Analysis of newspaper accounts of the episodes suggests several deindividuation factors that might contribute to the baiting phenomenon: membership in a large crowd, the cover of nighttime, and physical distance between crowd and victim (all factors associated with anonymity). The baiting phenomenon was also associated with warm temperatures and long duration of episode. These factors suggest leads for more systematic investigation.",
"Blogs, often treated as the equivalence of online personal diaries, have become one of the fastest growing types of Web-based media. Everyone is free to express their opinions and emotions very easily through blogs. In the blogosphere, many communities have emerged, which include hate groups and racists that are trying to share their ideology, express their views, or recruit new group members. It is important to analyze these virtual communities, defined based on membership and subscription linkages, in order to monitor for activities that are potentially harmful to society. While many Web mining and network analysis techniques have been used to analyze the content and structure of the Web sites of hate groups on the Internet, these techniques have not been applied to the study of hate groups in blogs. To address this issue, we have proposed a semi-automated approach in this research. The proposed approach consists of four modules, namely blog spider, information extraction, network analysis, and visualization. We applied this approach to identify and analyze a selected set of 28 anti-Blacks hate groups (820 bloggers) on Xanga, one of the most popular blog hosting sites. Our analysis results revealed some interesting demographical and topological characteristics in these groups, and identified at least two large communities on top of the smaller ones. The study also demonstrated the feasibility in applying the proposed approach in the study of hate groups and other related communities in blogs.",
"In this research we set out to discover why and how people seek anonymity in their online interactions. Our goal is to inform policy and the design of future Internet architecture and applications. We interviewed 44 people from America, Asia, Europe, and Africa who had sought anonymity and asked them about their experiences. A key finding of our research is the very large variation in interviewees' past experiences and life situations leading them to seek anonymity, and how they tried to achieve it. Our results suggest implications for the design of online communities, challenges for policy, and ways to improve anonymity tools and educate users about the different routes and threats to anonymity on the Internet."
]
}
|
1811.07223
|
2900592995
|
Anonymity forms an integral and important part of our digital life. It enables us to express our true selves without the fear of judgment. In this paper, we investigate the different aspects of anonymity in the social Q&A site Quora. The choice of Quora is motivated by the fact that this is one of the rare social Q&A sites that allow users to explicitly post anonymous questions, and such activity in this forum has become normative rather than a taboo. Through an analysis of 5.1 million questions, we observe that at a global scale almost no difference manifests between the linguistic structure of the anonymous and the non-anonymous questions. We find topical mixing at the global scale to be the primary reason for this absence. However, the differences start to surface once we "deep dive" and (topically) cluster the questions and compare the clusters that have high volumes of anonymous questions with those that have low volumes of anonymous questions. In particular, we observe that the choice to post a question anonymously depends on the user's perception of anonymity, and users often choose to speak about depression, anxiety, social ties, and personal issues under the guise of anonymity. We further perform personality trait analysis and observe that the anonymous group of users correlates positively with extraversion and agreeableness, and negatively with openness. Subsequently, to gain further insights, we build an anonymity grid to identify the differences in the perception of anonymity between the user posting the question and the community of users answering it. We also look into the first response time of the questions and observe that it is lowest for topics that talk about personal and sensitive issues, which hints toward a higher degree of community support and user engagement.
|
Another set of studies details the positive aspects of anonymity, mainly involving self-disclosure and degree of intimacy. In some research, the absence of an interviewer has been found to increase the duration of self-disclosure for participants who were presented with intimate questions @cite_7 . Privacy has been shown to have a positive effect on human well-being @cite_17 . Other work finds that anonymity helps users discuss topics that are considered stigmas in the real world. People regularly use the protection of anonymity to reduce social risks and to create different personas online than they exhibit offline @cite_2 @cite_1 . Other authors find that users engage in identity exploration and recreation through online dating services. Anonymity also helps protect informants (e.g., whistle-blowers or news sources).
|
{
"cite_N": [
"@cite_1",
"@cite_2",
"@cite_7",
"@cite_17"
],
"mid": [
"2105205336",
"2047691574",
"2587999266",
"2119591092"
],
"abstract": [
"Those who feel better able to express their “true selves” in Internet rather than face-to-face interaction settings are more likely to form close relationships with people met on the Internet (McKenna, Green, & Gleason, this issue). Building on these correlational findings from survey data, we conducted three laboratory experiments to directly test the hypothesized causal role of differential self-expression in Internet relationship formation. Experiments 1 and 2, using a reaction time task, found that for university undergraduates, the true-self concept is more accessible in memory during Internet interactions, and the actual self more accessible during face-to-face interactions. Experiment 3 confirmed that people randomly assigned to interact over the Internet (vs. face to face) were better able to express their true-self qualities to their partners.",
"The growth of the Internet at a means of communication has sparked the interest of researchers in several fields (e.g. communication, social psychology, industrial-organizational psychology) to investigate the issues surrounding the expression of different social behaviors in this unique social context. Of special interest to researchers is the increased importance that anonymity seems to play in computer-mediated communication (CMC). This paper reviews the literature related to the issues of anonymity within the social context, particularly in CMC, demonstrating the usefulness of established social psychological theory to explain behavior in CMC and discussing the evolution of the current theoretical explanations in explaining the effects of anonymity in social behavior in CMC environments. Several suggestions for future research are proposed in an attempt to provide researchers with new avenues to investigate how anonymity can play both positive and negative roles in CMC.",
"This qualitative study examines privacy practices and concerns among contributors to open collaboration projects. We collected interview data from people who use the anonymity network Tor who also contribute to online projects and from Wikipedia editors who are concerned about their privacy to better understand how privacy concerns impact participation in open collaboration projects. We found that risks perceived by contributors to open collaboration projects include threats of surveillance, violence, harassment, opportunity loss, reputation loss, and fear for loved ones. We explain participants' operational and technical strategies for mitigating these risks and how these strategies affect their contributions. Finally, we discuss chilling effects associated with privacy loss, the need for open collaboration projects to go beyond attracting and educating participants to consider their privacy, and some of the social and technical approaches that could be explored to mitigate risk at a project or community level.",
"This article overviews a program of research that has explored the implications of a transactional worldview for research on personal relationships. In particular, the present article emphasizes the role of the physical environment in relationships. It briefly describes our theoretical perspective and delineates the methods by which we study personal relationships. The main body of the article focuses on three kinds of relationship (acquaintance, family, neighbors), emphasizing the significance of the physical and social environments for individual and relational viability."
]
}
|
1811.07223
|
2900592995
|
Anonymity forms an integral and important part of our digital life. It enables us to express our true selves without the fear of judgment. In this paper, we investigate the different aspects of anonymity in the social Q&A site Quora. The choice of Quora is motivated by the fact that this is one of the rare social Q&A sites that allow users to explicitly post anonymous questions, and such activity in this forum has become normative rather than taboo. Through an analysis of 5.1 million questions, we observe that at a global scale almost no difference manifests between the linguistic structure of the anonymous and the non-anonymous questions. We find topical mixing at the global scale to be the primary reason for this absence. However, the differences start to emerge once we "deep dive" and (topically) cluster the questions and compare the clusters that have high volumes of anonymous questions with those that have low volumes of anonymous questions. In particular, we observe that the choice to post the question as anonymous is dependent on the user's perception of anonymity, and users often choose to speak about depression, anxiety, social ties and personal issues under the guise of anonymity. We further perform personality trait analysis and observe that the anonymous group of users shows positive correlations with extraversion and agreeableness, and a negative correlation with openness. Subsequently, to gain further insights, we build an anonymity grid to identify the differences in the perception of anonymity between the user posting the question and the community of users answering it. We also look into the first response time of the questions and observe that it is lowest for topics which talk about personal and sensitive issues, which hints toward a higher degree of community support and user engagement.
|
In @cite_3 , the authors perform a detailed analysis of Quora. They show that heterogeneity in the user and question graphs is a significant contributor to the quality of Quora's knowledge base: the user-topic follow graph generates user interest in browsing and answering general questions, while the related-question graph helps concentrate user attention on the most relevant topics. Finally, the user-to-user social network attracts views and leverages social ties to encourage votes and additional high-quality answers. Other work studies the dynamics of temporal growth of topics in Quora and proposes a regression model to predict the popularity of a topic. Patil and Lee analyze the behavior of experts and non-experts in five popular topics and extract several features to develop a statistical model that automatically detects experts. Further studies examine how the non-Q&A social activities of Quorans can be used to gain insight into their answering behavior, and find that the language used in writing the question text can be a very effective means of characterizing answerability, helping to predict early whether a question will eventually be answered.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"1730818938"
],
"abstract": [
"Efforts such as Wikipedia have shown the ability of user communities to collect, organize and curate information on the Internet. Recently, a number of question and answer (Q&A) sites have successfully built large growing knowledge repositories, each driven by a wide range of questions and answers from its users community. While sites like Yahoo Answers have stalled and begun to shrink, one site still going strong is Quora, a rapidly growing service that augments a regular Q&A system with social links between users. Despite its success, however, little is known about what drives Quora's growth, and how it continues to connect visitors and experts to the right questions as it grows. In this paper, we present results of a detailed analysis of Quora using measurements. We shed light on the impact of three different connection networks (or graphs) inside Quora, a graph connecting topics to users, a social graph connecting users, and a graph connecting related questions. Our results show that heterogeneity in the user and question graphs are significant contributors to the quality of Quora's knowledge base. One drives the attention and activity of users, and the other directs them to a small set of popular and interesting questions."
]
}
|
1811.07485
|
2901063069
|
User emotion analysis toward videos is to automatically recognize the general emotional status of viewers from the multimedia content embedded in the online video stream. Existing works fall in two categories: 1) visual-based methods, which focus on visual content and extract a specific set of features of videos. However, it is generally hard to learn a mapping function from low-level video pixels to high-level emotion space due to great intra-class variance. 2) textual-based methods, which focus on the investigation of user-generated comments associated with videos. The word representations learned by traditional linguistic approaches typically lack emotion information, and the global comments usually reflect viewers' high-level understandings rather than instantaneous emotions. To address these limitations, in this paper, we propose to jointly utilize video content and user-generated texts simultaneously for emotion analysis. In particular, we propose exploiting a new type of user-generated text, i.e., "danmu", which are real-time comments floating on the video and contain rich information to convey viewers' emotional opinions. To enhance the emotion discriminativeness of words in textual feature extraction, we propose Emotional Word Embedding (EWE) to learn text representations by jointly considering their semantics and emotions. Afterwards, we propose a novel visual-textual emotion analysis model with Deep Coupled Video and Danmu Neural networks (DCVDN), in which visual and textual features are synchronously extracted and fused to form a comprehensive representation by deep-canonically-correlated-autoencoder-based multi-view learning. Through extensive experiments on a self-crawled real-world video-danmu dataset, we show that DCVDN significantly outperforms the state-of-the-art baselines.
|
There is also a substantial body of work on visual sentiment analysis. For example, @cite_37 @cite_18 use low-level image properties, including pixel-level color histograms and Scale-Invariant Feature Transform (SIFT) descriptors, as features to predict the emotion of images. @cite_4 @cite_20 employ mid-level features, such as visual entities and attributes, for emotion analysis. @cite_21 @cite_14 utilize Convolutional Neural Networks (CNNs) to extract high-level features through a series of nonlinear transformations, which have been shown to surpass models based on low-level and mid-level features @cite_14 . @cite_31 argue that local regions are highly relevant to humans' emotional response to the whole image, and propose a model that uses the recently studied attention mechanism to jointly discover relevant local regions and build a sentiment classifier on top of them. @cite_15 presents a deep visual-semantic embedding model trained to identify visual objects using both labeled image data and semantic information gleaned from unannotated text.
|
{
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_4",
"@cite_14",
"@cite_15",
"@cite_21",
"@cite_31",
"@cite_20"
],
"mid": [
"2110700950",
"1930223417",
"",
"2953116260",
"",
"2253891449",
"2604737966",
"2046682605"
],
"abstract": [
"In this paper we study the connection between sentiment of images expressed in metadata and their visual content in the social photo sharing environment Flickr. To this end, we consider the bag-of-visual words representation as well as the color distribution of images, and make use of the SentiWordNet thesaurus to extract numerical values for their sentiment from accompanying textual metadata. We then perform a discriminative feature analysis based on information theoretic methods, and apply machine learning techniques to predict the sentiment of images. Our large-scale empirical study on a set of over half a million Flickr images shows a considerable correlation between sentiment and visual features, and promising results towards estimating the polarity of sentiment in images.",
"Social media has been a convenient platform for voicing opinions through posting messages, ranging from tweeting a short text to uploading a media file, or any combination of messages. Understanding the perceived emotions inherently underlying these user-generated contents (UGC) could bring light to emerging applications such as advertising and media analytics. Existing research efforts on affective computation are mostly dedicated to single media, either text captions or visual content. Few attempts for combined analysis of multiple media are made, despite that emotion can be viewed as an expression of multimodal experience. In this paper, we explore the learning of highly non-linear relationships that exist among low-level features across different modalities for emotion prediction. Using the deep Boltzmann machine (DBM), a joint density model over the space of multimodal inputs, including visual, auditory, and textual modalities, is developed. The model is trained directly using UGC data without any labeling efforts. While the model learns a joint representation over multimodal inputs, training samples in absence of certain modalities can also be leveraged. More importantly, the joint representation enables emotion-oriented cross-modal retrieval, for example, retrieval of videos using the text query “crazy cat”. The model does not restrict the types of input and output, and hence, in principle, emotion prediction and retrieval on any combinations of media are feasible. Extensive experiments on web videos and images show that the learnt joint representation could be very compact and be complementary to hand-crafted features, leading to performance improvement in both emotion classification and cross-modal retrieval.",
"",
"Psychological research results have confirmed that people can have different emotional reactions to different visual stimuli. Several papers have been published on the problem of visual emotion analysis. In particular, attempts have been made to analyze and predict people's emotional reaction towards images. To this end, different kinds of hand-tuned features are proposed. The results reported on several carefully selected and labeled small image data sets have confirmed the promise of such features. While the recent successes of many computer vision related tasks are due to the adoption of Convolutional Neural Networks (CNNs), visual emotion analysis has not achieved the same level of success. This may be primarily due to the unavailability of confidently labeled and relatively large image data sets for visual emotion analysis. In this work, we introduce a new data set, which started from 3+ million weakly labeled images of different emotions and ended up 30 times as large as the current largest publicly available visual emotion data set. We hope that this data set encourages further research on visual emotion analysis. We also perform extensive benchmarking analyses on this large data set using the state of the art methods including CNNs.",
"",
"Sentiment analysis of online user generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems to predict political elections, measure economic indicators, and so on. Recently, social media users are increasingly using images and videos to express their opinions and share their experiences. Sentiment analysis of such large scale visual content can help better extract user sentiments toward events or topics, such as those in image tweets, so that prediction of sentiment from visual content is complementary to textual sentiment analysis. Motivated by the needs in leveraging large scale yet noisy training data to solve the extremely challenging problem of image sentiment analysis, we employ Convolutional Neural Networks (CNN). We first design a suitable CNN architecture for image sentiment analysis. We obtain half a million training samples by using a baseline sentiment algorithm to label Flickr images. To make use of such noisy machine labeled data, we employ a progressive strategy to fine-tune the deep network. Furthermore, we improve the performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images. We have conducted extensive experiments on manually labeled Twitter images. The results show that the proposed CNN can achieve better performance in image sentiment analysis than competing algorithms.",
"Visual sentiment analysis, which studies the emotional response of humans to visual stimuli such as images and videos, has been an interesting and challenging problem. It tries to understand the high-level content of visual data. The success of current models can be attributed to the development of robust algorithms from computer vision. Most of the existing models try to solve the problem by proposing either robust features or more complex models. In particular, visual features from the whole image or video are the main proposed inputs. Little attention has been paid to local areas, which we believe are highly relevant to humans' emotional response to the whole image. In this work, we study the impact of local image regions on visual sentiment analysis. Our proposed model utilizes the recently studied attention mechanism to jointly discover the relevant local regions and build a sentiment classifier on top of these local regions. The experimental results suggest that 1) our model is capable of automatically discovering sentimental local regions of given images and 2) it outperforms existing state-of-the-art algorithms for visual sentiment analysis.",
"Visual content analysis has always been important yet challenging. Thanks to the popularity of social networks, images have become a convenient carrier for information diffusion among online users. To understand the diffusion patterns and different aspects of social images, we need to interpret the images first. Similar to textual content, images also carry different levels of sentiment to their viewers. However, different from text, where sentiment analysis can use easily accessible semantic and context information, how to extract and interpret the sentiment of an image remains quite challenging. In this paper, we propose an image sentiment prediction framework, which leverages the mid-level attributes of an image to predict its sentiment. This makes the sentiment classification results more interpretable than directly using the low-level features of an image. To obtain a better performance on images containing faces, we introduce eigenface-based facial expression detection as an additional mid-level attribute. An empirical study of the proposed framework shows improved performance in terms of prediction accuracy. More importantly, by inspecting the prediction results, we are able to discover interesting relationships between mid-level attributes and image sentiment."
]
}
|
1811.07126
|
2966307781
|
Object detection has been a building block in computer vision. Though considerable progress has been made, there still exist challenges for objects with small size, arbitrary direction, and dense distribution. Beyond natural images, such issues are especially pronounced for aerial images, which are of great practical importance. This paper presents a novel multi-category rotation detector for small, cluttered and rotated objects, namely SCRDet. Specifically, a sampling fusion network is devised which fuses multi-layer features with effective anchor sampling, to improve the sensitivity to small objects. Meanwhile, the supervised pixel attention network and the channel attention network are jointly explored for small and cluttered object detection by suppressing the noise and highlighting the object features. For more accurate rotation estimation, the IoU constant factor is added to the smooth L1 loss to address the boundary problem for the rotating bounding box. Extensive experiments on two remote sensing public datasets DOTA, NWPU VHR-10 as well as natural image datasets COCO, VOC2007 and scene text data ICDAR2015 show the state-of-the-art performance of our detector. The code and models will be available at https://github.com/DetectionTeamUCAS.
|
Horizontal region object detection. Many advanced object detection algorithms are based on deep convolutional neural networks (CNNs) @cite_0 @cite_34 @cite_17 @cite_8 . Girshick et al. @cite_22 proposed the multi-stage R-CNN detection framework and achieved impressive results. Subsequently, region-based models such as Fast R-CNN @cite_32 , Faster R-CNN @cite_27 , and R-FCN @cite_6 were proposed, improving detection speed while reducing computation and storage costs. SSD @cite_30 and YOLO @cite_3 are regression-based object detection methods whose single-stage structure allows faster detection. Many researchers have applied these methods to the field of remote sensing. Han et al. @cite_11 proposed the R-P-Faster R-CNN framework and achieved satisfactory performance on small datasets. Xu et al. @cite_46 combined deformable convolution layers @cite_23 with region-based fully convolutional networks (R-FCN) to further improve detection accuracy. Ren et al. @cite_19 adopted top-down and skip connections to produce a single high-level feature map at fine resolution, improving the performance of the deformable Faster R-CNN model. However, the large degrees of freedom in scale, orientation, and density of objects prevent these horizontal-region detection methods from achieving good results on large-scale complex-scene datasets.
|
{
"cite_N": [
"@cite_30",
"@cite_11",
"@cite_22",
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23",
"@cite_46",
"@cite_34",
"@cite_17"
],
"mid": [
"2193145675",
"2733535455",
"2102605133",
"",
"2031489346",
"2407521645",
"2963037989",
"1861492603",
"2890319410",
"639708223",
"2950477723",
"2774989306",
"",
""
],
"abstract": [
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For 300×300 input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512×512 input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https://github.com/weiliu89/caffe/tree/ssd.",
"Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a significant and challenging problem when further analyzing object-related information for civil and engineering applications. However, the computational efficiency and the separate region generation and localization steps are two big obstacles for the performance improvement of the traditional convolutional neural network (CNN)-based object detection methods. Although recent object detection methods based on CNN can extract features automatically, these methods still separate the feature extraction and detection stages, resulting in high time consumption and low efficiency. As a significant influencing factor, the acquisition of a large quantity of manually annotated samples for HSR remote sensing imagery objects requires expert experience, which is expensive and unreliable. Despite the progress made in natural image object detection fields, the complex object distribution makes it difficult to directly deal with the HSR remote sensing imagery object detection task. To solve the above problems, a highly efficient and robust integrated geospatial object detection framework based on faster region-based convolutional neural network (Faster R-CNN) is proposed in this paper. The proposed method realizes the integrated procedure by sharing features between the region proposal generation stage and the object detection stage. In addition, a pre-training mechanism is utilized to improve the efficiency of the multi-class geospatial object detection by transfer learning from the natural imagery domain to the HSR remote sensing imagery domain. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset were conducted to evaluate the proposed method.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets) [10], for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: https://github.com/daijifeng001/r-fcn.",
"We present YOLO, a new approach to object detection. Prior work on object detection repurposes classifiers to perform detection. Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Our unified architecture is extremely fast. Our base YOLO model processes images in real-time at 45 frames per second. A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Finally, YOLO learns very general representations of objects. It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork.",
"We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with ’attention’ mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.",
"Convolutional neural networks (CNNs) have demonstrated their ability object detection of very high resolution remote sensing images. However, CNNs have obvious limitations for modeling geometric variations in remote sensing targets. In this paper, we introduced a CNN structure, namely deformable ConvNet, to address geometric modeling in object recognition. By adding offsets to the convolution layers, feature mapping of CNN can be applied to unfixed locations, enhancing CNNs’ visual appearance understanding. In our work, a deformable region-based fully convolutional networks (R-FCN) was constructed by substituting the regular convolution layer with a deformable convolution layer. To efficiently use this deformable convolutional neural network (ConvNet), a training mechanism is developed in our work. We first set the pre-trained R-FCN natural image model as the default network parameters in deformable R-FCN. Then, this deformable ConvNet was fine-tuned on very high resolution (VHR) remote sensing images. To remedy the increase in line-like false region proposals, we developed aspect ratio constrained non maximum suppression (arcNMS). The precision of deformable ConvNet for detecting objects was then improved. An end-to-end approach was then developed by combining deformable R-FCN, a smart fine-tuning strategy and aspect ratio constrained NMS. The developed method was better than a state-of-the-art benchmark in object detection without data augmentation.",
"",
""
]
}
|
1811.07126
|
2966307781
|
Object detection has been a building block in computer vision. Though considerable progress has been made, there still exist challenges for objects with small size, arbitrary direction, and dense distribution. Beyond natural images, such issues are especially pronounced for aerial images, which are of great practical importance. This paper presents a novel multi-category rotation detector for small, cluttered and rotated objects, namely SCRDet. Specifically, a sampling fusion network is devised which fuses multi-layer features with effective anchor sampling, to improve the sensitivity to small objects. Meanwhile, the supervised pixel attention network and the channel attention network are jointly explored for small and cluttered object detection by suppressing the noise and highlighting the object features. For more accurate rotation estimation, the IoU constant factor is added to the smooth L1 loss to address the boundary problem for the rotating bounding box. Extensive experiments on two remote sensing public datasets DOTA, NWPU VHR-10 as well as natural image datasets COCO, VOC2007 and scene text data ICDAR2015 show the state-of-the-art performance of our detector. The code and models will be available at https://github.com/DetectionTeamUCAS.
|
Arbitrary-oriented object detection. A series of arbitrary-oriented text detection models @cite_21 @cite_43 have been proposed in the field of scene text detection. In contrast, aerial object detection is more challenging: first, many text detection models are limited to single-object detection @cite_2 @cite_13 @cite_20 , which is not applicable to multi-category object detection in aerial images. Second, there is often a large gap between texts, while the objects in an aerial image are often very close together, so segmentation-based detection algorithms @cite_4 @cite_12 may not achieve good results. Third, aerial image object detection places higher demands on the algorithm because of the large number of small objects. In the field of remote sensing, most rotational detection methods are designed for specific objects, such as vehicle detection @cite_41 , ship detection @cite_40 @cite_39 @cite_44 @cite_26 @cite_37 , aircraft detection @cite_47 and so on. Multi-category rotational region detection algorithms @cite_33 are still rare in the field of remote sensing, mainly due to interference from factors such as scale, angle, density, and scene complexity. This paper considers these factors comprehensively, and proposes a general algorithm for multi-category arbitrary-oriented object detection in aerial images.
|
{
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_21",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_40",
"@cite_2",
"@cite_47",
"@cite_13",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"",
"2781529199",
"2951759393",
"2768931328",
"2725486421",
"",
"",
"",
"2788202095",
"2605982830",
"2770383210",
"",
"",
""
],
"abstract": [
"",
"",
"Most state-of-the-art scene text detection algorithms are deep learning based methods that depend on bounding box regression and perform at least two kinds of predictions: text non-text classification and location regression. Regression plays a key role in the acquisition of bounding boxes in these methods, but it is not indispensable because text non-text prediction can also be considered as a kind of semantic segmentation that contains full location information in itself. However, text instances in scene images often lie very close to each other, making them very difficult to separate via semantic segmentation. Therefore, instance segmentation is needed to address this problem. In this paper, PixelLink, a novel scene text detection algorithm based on instance segmentation, is proposed. Text instances are first segmented out by linking pixels within the same instance together. Text bounding boxes are then extracted directly from the segmentation result without location regression. Experiments show that, compared with regression-based methods, PixelLink can achieve better or comparable performance on several benchmarks, while requiring many fewer training iterations and less training data.",
"Automatic multi-class object detection in remote sensing images in unconstrained scenarios is of high interest for several applications including traffic monitoring and disaster management. The huge variation in object scale, orientation, category, and complex backgrounds, as well as the different camera sensors pose great challenges for current algorithms. In this work, we propose a new method consisting of a novel joint image cascade and feature pyramid network with multi-size convolution kernels to extract multi-scale strong and weak semantic features. These features are fed into rotation-based region proposal and region of interest networks to produce object detections. Finally, rotational non-maximum suppression is applied to remove redundant detections. During training, we minimize joint horizontal and oriented bounding box loss functions, as well as a novel loss that enforces oriented boxes to be rectangular. Our method achieves 68.16 mAP on horizontal and 72.45 mAP on oriented bounding box detection tasks on the challenging DOTA dataset, outperforming all published methods by a large margin (+6 and +12 absolute improvement, respectively). Furthermore, it generalizes to two other datasets, NWPU VHR-10 and UCAS-AOD, and achieves competitive results with the baselines even when trained on DOTA. Our method can be deployed in multi-class object detection applications, regardless of the image and object scales and orientations, making it a great choice for unconstrained aerial and satellite imagery.",
"Vehicle detection with orientation estimation in aerial images has received widespread interest as it is important for intelligent traffic management. This is a challenging task, not only because of the complex background and relatively small size of the target, but also the various orientations of vehicles in aerial images captured from the top view. The existing methods for oriented vehicle detection need several post-processing steps to generate final detection results with orientation, which are not efficient enough. Moreover, they can only get discrete orientation information for each target. In this paper, we present an end-to-end single convolutional neural network to generate arbitrarily-oriented detection results directly. Our approach, named Oriented_SSD (Single Shot MultiBox Detector, SSD), uses a set of default boxes with various scales on each feature map location to produce detection bounding boxes. Meanwhile, offsets are predicted for each default box to better match the object shape, which contain the angle parameter for oriented bounding boxes’ generation. Evaluation results on the public DLR Vehicle Aerial dataset and Vehicle Detection in Aerial Imagery (VEDAI) dataset demonstrate that our method can detect both the location and orientation of the vehicle with high accuracy and fast speed. For test images in the DLR Vehicle Aerial dataset with a size of 5616 × 3744 , our method achieves 76.1 average precision (AP) and 78.7 correct direction classification at 5.17 s on an NVIDIA GTX-1060.",
"In this paper, we propose a novel method called Rotational Region CNN (R2CNN) for detecting arbitrary-oriented texts in natural scene images. The framework is based on Faster R-CNN [1] architecture. First, we use the Region Proposal Network (RPN) to generate axis-aligned bounding boxes that enclose the texts with different orientations. Second, for each axis-aligned text box proposed by RPN, we extract its pooled features with different pooled sizes and the concatenated features are used to simultaneously predict the text non-text score, axis-aligned box and inclined minimum area box. At last, we use an inclined non-maximum suppression to get the detection results. Our approach achieves competitive results on text detection benchmarks: ICDAR 2015 and ICDAR 2013.",
"",
"",
"",
"Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems above, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has state-of-the-art performance.",
"Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.",
"Detection of arbitrarily rotated objects is a challenging task due to the difficulties of locating the multi-angle objects and separating them effectively from the background. The existing methods are not robust to angle varies of the objects because of the use of traditional bounding box, which is a rotation variant structure for locating rotated objects. In this article, a new detection method is proposed which applies the newly defined rotatable bounding box (RBox). The proposed detector (DRBox) can effectively handle the situation where the orientation angles of the objects are arbitrary. The training of DRBox forces the detection networks to learn the correct orientation angle of the objects, so that the rotation invariant property can be achieved. DRBox is tested to detect vehicles, ships and airplanes on satellite images, compared with Faster R-CNN and SSD, which are chosen as the benchmark of the traditional bounding box based methods. The results shows that DRBox performs much better than traditional bounding box based methods do on the given tasks, and is more robust against rotation of input image and target objects. Besides, results show that DRBox correctly outputs the orientation angles of the objects, which is very useful for locating multi-angle objects efficiently. The code and models are available at this https URL",
"",
"",
""
]
}
|
1811.06868
|
2901505172
|
Foveation, the ability to sequentially acquire high-acuity regions of a scene viewed initially at low-acuity, is a key property of biological vision systems. In a computer vision system, foveation is also desired to increase data efficiency and derive task-relevant features. Yet, most existing deep learning models lack the ability to foveate. In this paper, we propose a deep reinforcement learning-based foveation model, DRIFT, and apply it to challenging fine-grained classification tasks. Training of DRIFT requires only image-level category labels and encourages fixations to contain discriminative information while maintaining data efficiency. Specifically, we formulate foveation as a sequential decision-making process and train a foveation actor network with a novel Deep Deterministic Policy Gradient by Conditioned Critic and Coaching (DDPGC3) algorithm. In addition, we propose to shape the reward to provide informative feedback after each fixation to better guide the RL training. We demonstrate the effectiveness of our method on five fine-grained classification benchmark datasets, and show that the proposed approach achieves state-of-the-art performance using an order of magnitude fewer pixels.
|
Different from @cite_38 @cite_18 , which take blurred inputs, Almeida et al. @cite_10 and Recasens et al. @cite_20 proposed generating attention maps from standard input images, and using them to either down-sample backgrounds @cite_10 or up-sample foregrounds @cite_20 . The approach of generating attention maps falls into a broader family of attention models, which has been broadly applied to image classification @cite_28 @cite_22 @cite_11 , segmentation @cite_9 , visual question answering @cite_13 @cite_1 , detection @cite_17 , image captioning @cite_36 , and so forth.
|
{
"cite_N": [
"@cite_13",
"@cite_38",
"@cite_18",
"@cite_22",
"@cite_28",
"@cite_36",
"@cite_9",
"@cite_1",
"@cite_17",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"2179022885",
"",
"2218109590",
"2737725206",
"",
"2950178297",
"2788250925",
"2798786641",
"2798791651",
"2777320413",
"2889469641",
""
],
"abstract": [
"We present a method that learns to answer visual questions by selecting image regions relevant to the text-based query. Our method exhibits significant improvements in answering questions such as \"what color,\" where it is necessary to evaluate a specific location, and \"what room,\" where it selectively identifies informative image regions. Our model is tested on the VQA dataset which is the largest human-annotated visual question answering dataset to our knowledge.",
"",
"We propose a new method for turning an Internet-scale corpus of categorized images into a small set of human-interpretable discriminative visual elements using powerful tools based on deep learning. A key challenge with deep learning methods is generating human-interpretable models. To address this, we propose a new technique that uses bubble images -- images where most of the content has been obscured -- to identify spatially localized, discriminative content in each image. By modifying the model training procedure to use both the source imagery and these bubble images, we can arrive at final models which retain much of the original classification performance, but are much more amenable to identifying interpretable visual elements. We apply our algorithm to a wide variety of datasets, including two new Internet-scale datasets of people and places, and show applications to visual mining and discovery. Our method is simple, scalable, and produces visual elements that are highly representative compared to prior work.",
"Recognizing fine-grained categories (e.g., bird species) is difficult due to the challenges of discriminative region localization and fine-grained feature learning. Existing approaches predominantly solve these challenges independently, while neglecting the fact that region detection and fine-grained feature learning are mutually correlated and thus can reinforce each other. In this paper, we propose a novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way. The learning at each scale consists of a classification sub-network and an attention proposal sub-network (APN). The APN starts from full images, and iteratively generates region attention from coarse to fine by taking previous prediction as a reference, while the finer scale network takes as input an amplified attended region from previous scale in a recurrent way. The proposed RA-CNN is optimized by an intra-scale classification loss and an inter-scale ranking loss, to mutually learn accurate region attention and fine-grained representation. RA-CNN does not need bounding box part annotations and can be trained end-to-end. We conduct comprehensive experiments and show that RA-CNN achieves the best performance in three fine-grained tasks, with relative accuracy gains of 3.3 , 3.7 , 3.8 , on CUB Birds, Stanford Dogs and Stanford Cars, respectively.",
"",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"Weakly supervised learning with only coarse labels can obtain visual explanations of deep neural network such as attention maps by back-propagating gradients. These attention maps are then available as priors for tasks such as object localization and semantic segmentation. In one common framework we address three shortcomings of previous approaches in modeling such attention maps: We (1) first time make attention maps an explicit and natural component of the end-to-end training, (2) provide self-guidance directly on these maps by exploring supervision form the network itself to improve them, and (3) seamlessly bridge the gap between using weak and extra supervision if available. Despite its simplicity, experiments on the semantic segmentation task demonstrate the effectiveness of our methods. We clearly surpass the state-of-the-art on Pascal VOC 2012 val. and test set. Besides, the proposed framework provides a way not only explaining the focus of the learner but also feeding back with direct guidance towards specific tasks. Under mild assumptions our method can also be understood as a plug-in to existing weakly supervised learners to improve their generalization performance.",
"Recent insights on language and vision with neural networks have been successfully applied to simple single-image visual question answering. However, to tackle real-life question answering problems on multimedia collections such as personal photos, we have to look at whole collections with sequences of photos or videos. When answering questions from a large collection, a natural problem is to identify snippets to support the answer. In this paper, we describe a novel neural network called Focal Visual-Text Attention network (FVTA) for collective reasoning in visual question answering, where both visual and text sequence information such as images and text metadata are presented. FVTA introduces an end-to-end approach that makes use of a hierarchical process to dynamically determine what media and what time to focus on in the sequential data to answer the question. FVTA can not only answer the questions well but also provides the justifications which the system results are based upon to get the answers. FVTA achieves state-of-the-art performance on the MemexQA dataset and competitive results on the MovieQA dataset.",
"Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.",
"Visual attention plays a central role in natural and artificial systems to control perceptual resources. The classic artificial visual attention systems uses salient features of the image obtained from the information given by predefined filters. Recently, deep neural networks have been developed for recognizing thousands of objects and autonomously generate visual characteristics optimized by training with large data sets. Besides being used for object recognition, these features have been very successful in other visual problems such as object segmentation, tracking and recently, visual attention. In this work we propose a biologically inspired object classification and localization framework that combines Deep Convolutional Neural Networks with foveal vision. First, a feed-forward pass is performed to obtain the predicted class labels. Next, we get the object location proposals by applying a segmentation mask on the saliency map calculated through a top-down backward pass. The main contribution of our work lies in the evaluation of the performances obtained with different non-uniform resolutions. We were able to establish a relationship between performance and the different levels of information preserved by each of the sensing configurations. The results demonstrate that we do not need to store and transmit all the information present on high-resolution images since, beyond a certain amount of preserved information, the performance in the classification and localization task saturates.",
"We introduce a saliency-based distortion layer for convolutional neural networks that helps to improve the spatial sampling of input data for a given task. Our differentiable layer can be added as a preprocessing block to existing task networks and trained altogether in an end-to-end fashion. The effect of the layer is to efficiently estimate how to sample from the original data in order to boost task performance. For example, for an image classification task in which the original data might range in size up to several megapixels, but where the desired input images to the task network are much smaller, our layer learns how best to sample from the underlying high resolution data in a manner which preserves task-relevant information better than uniform downsampling. This has the effect of creating distorted, caricature-like intermediate images, in which idiosyncratic elements of the image that improve task performance are zoomed and exaggerated. Unlike alternative approaches such as spatial transformer networks, our proposed layer is inspired by image saliency, computed efficiently from uniformly downsampled data, and degrades gracefully to a uniform sampling strategy under uncertainty. We apply our layer to improve existing networks for the tasks of human gaze estimation and fine-grained object classification. Code for our method is available in: http: github.com recasens Saliency-Sampler.",
""
]
}
|
1811.06868
|
2901505172
|
Foveation, the ability to sequentially acquire high-acuity regions of a scene viewed initially at low-acuity, is a key property of biological vision systems. In a computer vision system, foveation is also desired to increase data efficiency and derive task-relevant features. Yet, most existing deep learning models lack the ability to foveate. In this paper, we propose a deep reinforcement learning-based foveation model, DRIFT, and apply it to challenging fine-grained classification tasks. Training of DRIFT requires only image-level category labels and encourages fixations to contain discriminative information while maintaining data efficiency. Specifically, we formulate foveation as a sequential decision-making process and train a foveation actor network with a novel Deep Deterministic Policy Gradient by Conditioned Critic and Coaching (DDPGC3) algorithm. In addition, we propose to shape the reward to provide informative feedback after each fixation to better guide the RL training. We demonstrate the effectiveness of our method on five fine-grained classification benchmark datasets, and show that the proposed approach achieves state-of-the-art performance using an order of magnitude fewer pixels.
|
Unlike existing attention models, this paper focuses on automatically inferring fixations from extremely low-acuity inputs (e.g. @math ), where traditional attention models fail to produce meaningful attention maps (see Sec. ). Instead, we take a sequential and additive approach: the proposed DRIFT model is able to accumulate knowledge, recursively refine its fixations, and finally produce fixation locations that are optimized for classification accuracy as well as data efficiency. Benefiting from a reinforcement learning formulation, DRIFT avoids exhaustive searching behavior and is thus superior to the brute-force approach in @cite_18 .
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2218109590"
],
"abstract": [
"We propose a new method for turning an Internet-scale corpus of categorized images into a small set of human-interpretable discriminative visual elements using powerful tools based on deep learning. A key challenge with deep learning methods is generating human-interpretable models. To address this, we propose a new technique that uses bubble images -- images where most of the content has been obscured -- to identify spatially localized, discriminative content in each image. By modifying the model training procedure to use both the source imagery and these bubble images, we can arrive at final models which retain much of the original classification performance, but are much more amenable to identifying interpretable visual elements. We apply our algorithm to a wide variety of datasets, including two new Internet-scale datasets of people and places, and show applications to visual mining and discovery. Our method is simple, scalable, and produces visual elements that are highly representative compared to prior work."
]
}
|
1811.07083
|
2901489275
|
Convolutional neural networks (CNNs) have shown remarkable performance in various computer vision tasks in recent years. However, the increasing model size has raised challenges in adopting them in real-time applications as well as mobile and embedded vision applications. Many works try to build networks as small as possible while still having acceptable performance. The state-of-the-art architecture is MobileNets. They use Depthwise Separable Convolution (DWConvolution) in place of standard Convolution to reduce the size of networks. This paper describes an improved version of MobileNet, called Pyramid Mobile Network. Instead of using just a @math kernel size for DWConvolution as in MobileNet, the proposed network uses a pyramid kernel size to capture more spatial information. The proposed architecture is evaluated on two highly competitive object recognition benchmark datasets (CIFAR-10, CIFAR-100). The experiments demonstrate that the proposed network achieves better performance compared with MobileNet as well as other state-of-the-art networks. Additionally, it is more flexible in fine-tuning the trade-off between accuracy, latency and model size than MobileNets.
|
Nowadays, many efficient neural network architectures @cite_11 @cite_15 @cite_17 @cite_42 use Depthwise Separable Convolutions (DWConvolution) as the key building block. The basic idea of DWConvolution is to replace a standard convolutional layer with two separate layers. The first layer uses a depthwise convolution operator. It applies a single convolutional filter per input channel to capture the spatial information in each channel. Then the second layer employs a pointwise convolution, i.e. a @math convolution, to capture the cross-channel information. Suppose the input tensor @math has size @math and the output tensor @math has size @math . The standard Convolution then needs to apply a convolutional kernel @math , where @math is the size of the kernel, and therefore has a computation cost of @math . In the case of DWConvolution, the depthwise convolution layer costs @math and the @math pointwise convolution costs @math . Hence, the total computational cost of DWConvolution is @math . Effectively, the computational cost of DWConvolution is smaller than that of the standard Convolution by a factor of @math .
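The cost comparison above can be sketched numerically. This is a minimal illustration, not code from the paper; the function names and the example tensor sizes are my own choices:

```python
def conv_flops(h, w, c_in, c_out, k):
    # standard convolution: a k x k x c_in kernel per output channel, at every position
    return h * w * c_in * k * k * c_out

def dwconv_flops(h, w, c_in, c_out, k):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1 x 1 convolution mixing channels
    return depthwise + pointwise

# example: a 112 x 112 feature map, 64 -> 128 channels, 3 x 3 kernel
std = conv_flops(112, 112, 64, 128, 3)
dw = dwconv_flops(112, 112, 64, 128, 3)
ratio = dw / std  # equals 1/c_out + 1/k**2, i.e. roughly an 8-9x reduction here
```

For a 3x3 kernel the 1/k**2 term dominates, which matches the factor stated in the text.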
|
{
"cite_N": [
"@cite_15",
"@cite_42",
"@cite_17",
"@cite_11"
],
"mid": [
"2612445135",
"",
"2796438033",
"2951583185"
],
"abstract": [
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. The MobileNetV2 architecture is based on an inverted residual structure where the input and output of the residual block are thin bottleneck layers opposite to traditional residual models which use expanded representations in the input an MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on Imagenet classification, COCO object detection, VOC image segmentation. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as the number of parameters",
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters."
]
}
|
1811.07092
|
2900621592
|
We consider the problem of recognizing mentions of human senses in text. Our contribution is a method for acquiring labeled data, and a learning method that is trained on this data. Experiments show the effectiveness of our proposed data labeling approach and our learning model on the task of sense recognition in text.
|
Our task is related to entity recognition; however, in this paper we focus on novel types of entities, which can be used to improve the extraction of common-sense knowledge. Entity recognition systems are traditionally based on a sequential model, for example a CRF, and involve feature engineering @cite_2 @cite_0 . More recently, neural approaches have been used for named entity recognition @cite_10 @cite_17 @cite_12 @cite_11 @cite_14 . Like other neural approaches, ours does not require feature engineering; the only features we use are word and character embeddings. Related to our proposed recurrence in the output layer is the work of @cite_3 , which introduced a CRF on top of an LSTM for the task of named entity recognition.
|
{
"cite_N": [
"@cite_11",
"@cite_14",
"@cite_3",
"@cite_0",
"@cite_2",
"@cite_10",
"@cite_12",
"@cite_17"
],
"mid": [
"2179519966",
"2340196666",
"",
"2004763266",
"2147880316",
"2042188227",
"1951325712",
"2158899491"
],
"abstract": [
"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that unlike previously proposed models recursively composes representations of entity mention contexts. Our model achieves state-of-the-art performance with 74.94 loose micro F1-score on the well-established FIGER dataset, a relative improvement of 2.59 . We also investigate the behavior of the attention mechanism of our model and observe that it can learn contextual linguistic expressions that indicate the fine-grained category memberships of an entity.",
"",
"We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset.",
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.",
"In this approach to named entity recognition, a recurrent neural network, known as Long Short-Term Memory, is applied. The network is trained to perform 2 passes on each sentence, outputting its decisions on the second pass. The first pass is used to acquire information for disambiguation during the second pass. SARDNET, a self-organising map for sequences is used to generate representations for the lexical items presented to the LSTM network, whilst orthogonal representations are used to represent the part of speech and chunk tags.",
"Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in Spanish. Our experimental results shade light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independet NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes).",
"We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements."
]
}
|
1811.07125
|
2900976614
|
One of the most prominent problems in machine learning in the age of deep learning is the availability of sufficiently large annotated datasets. While for standard problem domains (e.g., ImageNet classification) appropriate datasets exist, for specific domains, e.g., the classification of animal species, a long-tail distribution means that some classes are observed and annotated insufficiently. Challenges like iNaturalist show that there is a strong interest in species recognition. Acquiring additional labels can be prohibitively expensive: first, because domain experts need to be involved, and second, because acquisition of new data might be costly. Although there exist methods for data augmentation, they do not always lead to better performance of the classifier, and there is additional information available that is, to the best of our knowledge, not exploited accordingly. In this paper, we propose to make use of existing class hierarchies like WordNet to integrate additional domain knowledge into classification. We encode the properties of such a class hierarchy into a probabilistic model. From there, we derive a special label encoding together with a corresponding loss function. Using a convolutional neural network, our method offers a relative improvement of 10.4% and 9.6% in accuracy over the baseline on the ImageNet and NABirds datasets, respectively. After less than a third of the training time, it is already able to match the baseline's fine-grained recognition performance. Both results show that our suggested method is efficient and effective.
|
Typical image classification datasets rarely offer hierarchical information. There are exceptions, such as the iNaturalist challenge dataset @cite_2 , where a class hierarchy is derived from biological taxonomy. Exceptions also include specific hierarchical classification benchmarks @cite_4 @cite_13 , as well as datasets where the labels originate from a hierarchy, such as ImageNet @cite_16 . The Visual Genome dataset @cite_23 is another notable exception, with available metadata including attributes, relationships, visual question-answer pairs, bounding boxes and more, all mapped to elements from WordNet.
|
{
"cite_N": [
"@cite_4",
"@cite_23",
"@cite_2",
"@cite_16",
"@cite_13"
],
"mid": [
"756166754",
"2277195237",
"2736618577",
"2108598243",
"2145607950"
],
"abstract": [
"LSHTC is a series of challenges which aims to assess the performance of classification systems in large-scale classification in a a large number of classes (up to hundreds of thousands). This paper describes the dataset that have been released along the LSHTC series. The paper details the construction of the datsets and the design of the tracks as well as the evaluation measures that we implemented and a quick overview of the results. All of these datasets are available online and runs may still be submitted on the online server of the challenges.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.",
"Existing image classification datasets used in computer vision tend to have an even number of images for each object category. In contrast, the natural world is heavily imbalanced, as some species are more abundant and easier to photograph than others. To encourage further progress in challenging real world conditions we present the iNaturalist Challenge 2017 dataset - an image classification benchmark consisting of 675,000 images with over 5,000 different species of plants and animals. It features many visually similar species, captured in a wide variety of situations, from all over the world. Images were collected with different camera types, have varying image quality, have been verified by multiple citizen scientists, and feature a large class imbalance. We discuss the collection of the dataset and present baseline results for state-of-the-art computer vision classification models. Results show that current non-ensemble based methods achieve only 64 top one classification accuracy, illustrating the difficulty of the dataset. Finally, we report results from a competition that was held with the data.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the Wordnet lexical database. Hence the image database gives a comprehensive coverage of all object categories and scenes. The semantic information from Wordnet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors."
]
}
|
1811.07125
|
2900976614
|
One of the most prominent problems in machine learning in the age of deep learning is the availability of sufficiently large annotated datasets. While for standard problem domains (e.g., ImageNet classification) appropriate datasets exist, for specific domains, e.g., the classification of animal species, a long-tail distribution means that some classes are observed and annotated insufficiently. Challenges like iNaturalist show that there is a strong interest in species recognition. Acquiring additional labels can be prohibitively expensive: first, because domain experts need to be involved, and second, because acquisition of new data might be costly. Although there exist methods for data augmentation, they do not always lead to better performance of the classifier, and there is additional information available that is, to the best of our knowledge, not exploited accordingly. In this paper, we propose to make use of existing class hierarchies like WordNet to integrate additional domain knowledge into classification. We encode the properties of such a class hierarchy into a probabilistic model. From there, we derive a special label encoding together with a corresponding loss function. Using a convolutional neural network, our method offers a relative improvement of 10.4% and 9.6% in accuracy over the baseline on the ImageNet and NABirds datasets, respectively. After less than a third of the training time, it is already able to match the baseline's fine-grained recognition performance. Both results show that our suggested method is efficient and effective.
|
Several methods of knowledge transfer between object classes aimed at scalability towards large numbers of classes are presented in @cite_20 . The authors note that while knowledge transfer does not generally improve classification in settings where training data is available for all classes, it is valuable in zero-shot learning scenarios @cite_7 , where some classes do not have any labeled training examples. One of their methods performs knowledge transfer based on the WordNet hierarchy underlying the ImageNet challenge dataset they use. In a zero-shot setting, it outperforms other methods based on part attributes and semantic similarity.
|
{
"cite_N": [
"@cite_7",
"@cite_20"
],
"mid": [
"2150295085",
"2077071968"
],
"abstract": [
"We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.",
"While knowledge transfer (KT) between object classes has been accepted as a promising route towards scalable recognition, most experimental KT studies are surprisingly limited in the number of object classes considered. To support claims of KT w.r.t. scalability we thus advocate to evaluate KT in a large-scale setting. To this end, we provide an extensive evaluation of three popular approaches to KT on a recently proposed large-scale data set, the ImageNet Large Scale Visual Recognition Competition 2010 data set. In a first setting they are directly compared to one-vs-all classification often neglected in KT papers and in a second setting we evaluate their ability to enable zero-shot learning. While none of the KT methods can improve over one-vs-all classification they prove valuable for zero-shot learning, especially hierarchical and direct similarity based KT. We also propose and describe several extensions of the evaluated approaches that are necessary for this large-scale study."
]
}
|
1811.06663
|
2963137185
|
Adaptive bitrate (ABR) streaming is the de facto solution for achieving smooth viewing experiences under unstable network conditions. However, most of the existing rate adaptation approaches for ABR are content-agnostic, without considering the semantic information of the video content. Nevertheless, semantic information largely determines the informativeness and interestingness of the video content, and consequently affects the QoE for video streaming. One common case is that the user may expect higher quality for the parts of video content that are more interesting or informative, so as to reduce the overall subjective quality loss. This creates two main challenges: First, how to determine which parts of the video content are more interesting? Second, how to allocate bitrate budgets for different parts of the video content with different significance? To address these challenges, we propose a Content-of-Interest (CoI) based rate adaptation scheme for ABR. We first design a deep learning approach for recognizing the interestingness of the video content, and then design a Deep Q-Network (DQN) approach for rate adaptation by incorporating video interestingness information. The experimental results show that our method can recognize video interestingness precisely, and that the bitrate allocation for ABR can be aligned with the interestingness of video content without compromising performance on objective QoE metrics.
|
Huang @cite_14 designed a buffer-based approach by considering the current buffer occupancy. Li @cite_1 designed a client-side rate adaptation algorithm by envisioning a general probe-and-adapt principle. Yin @cite_9 proposed a Model Predictive Control (MPC) approach by jointly considering buffer occupancy and bandwidth. Bokani @cite_17 and Zhou @cite_12 adopted the Markov Decision Process (MDP) for rate adaptation. Spiteri @cite_3 adopted the Lyapunov framework to design an online algorithm that minimizes rebuffering and maximizes QoE without requiring bandwidth information. Qin @cite_0 proposed a PID-based method for rate adaptation, and Mao @cite_19 adopted deep reinforcement learning for rate adaptation. This line of work mainly considers objective QoE metrics, aiming to improve performance on rebuffering time, average bitrate, and video quality variation.
|
{
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_12",
"@cite_17"
],
"mid": [
"2167407752",
"1976944900",
"2017146017",
"2963191323",
"2761144083",
"2744628735",
"2300178424",
"2185395384"
],
"abstract": [
"Existing ABR algorithms face a significant challenge in estimating future capacity: capacity can vary widely over time, a phenomenon commonly observed in commercial services. In this work, we suggest an alternative approach: rather than presuming that capacity estimation is required, it is perhaps better to begin by using only the buffer, and then ask when capacity estimation is needed. We test the viability of this approach through a series of experiments spanning millions of real users in a commercial service. We start with a simple design which directly chooses the video rate based on the current buffer occupancy. Our own investigation reveals that capacity estimation is unnecessary in steady state; however using simple capacity estimation (based on immediate past throughput) is important during the startup phase, when the buffer itself is growing from empty. This approach allows us to reduce the rebuffer rate by 10-20 compared to Netflix's then-default ABR algorithm, while delivering a similar average video rate, and a higher video rate in steady state.",
"User-perceived quality-of-experience (QoE) is critical in Internet video applications as it impacts revenues for content providers and delivery systems. Given that there is little support in the network for optimizing such measures, bottlenecks could occur anywhere in the delivery system. Consequently, a robust bitrate adaptation algorithm in client-side players is critical to ensure good user experience. Previous studies have shown key limitations of state-of-art commercial solutions and proposed a range of heuristic fixes. Despite the emergence of several proposals, there is still a distinct lack of consensus on: (1) How best to design this client-side bitrate adaptation logic (e.g., use rate estimates vs. buffer occupancy); (2) How well specific classes of approaches will perform under diverse operating regimes (e.g., high throughput variability); or (3) How do they actually balance different QoE objectives (e.g., startup delay vs. rebuffering). To this end, this paper makes three key technical contributions. First, to bring some rigor to this space, we develop a principled control-theoretic model to reason about a broad spectrum of strategies. Second, we propose a novel model predictive control algorithm that can optimally combine throughput and buffer occupancy information to outperform traditional approaches. Third, we present a practical implementation in a reference video player to validate our approach using realistic trace-driven emulations.",
"Today, the technology for video streaming over the Internet is converging towards a paradigm named HTTP-based adaptive streaming (HAS), which brings two new features. First, by using HTTP TCP, it leverages network-friendly TCP to achieve both firewall NAT traversal and bandwidth sharing. Second, by pre-encoding and storing the video in a number of discrete rate levels, it introduces video bitrate adaptivity in a scalable way so that the video encoding is excluded from the closed-loop adaptation. A conventional wisdom in HAS design is that since the TCP throughput observed by a client would indicate the available network bandwidth, it could be used as a reliable reference for video bitrate selection. We argue that this is no longer true when HAS becomes a substantial fraction of the total network traffic. We show that when multiple HAS clients compete at a network bottleneck, the discrete nature of the video bitrates results in difficulty for a client to correctly perceive its fair-share bandwidth. Through analysis and test bed experiments, we demonstrate that this fundamental limitation leads to video bitrate oscillation and other undesirable behaviors that negatively impact the video viewing experience. We therefore argue that it is necessary to design at the application layer using a \"probe and adapt\" principle for video bitrate adaptation (where \"probe\" refers to trial increment of the data rate, instead of sending auxiliary piggybacking traffic), which is akin, but also orthogonal to the transport-layer TCP congestion control. We present PANDA - a client-side rate adaptation algorithm for HAS - as a practical embodiment of this principle. Our test bed results show that compared to conventional algorithms, PANDA is able to reduce the instability of video bitrate selection by over 75 without increasing the risk of buffer underrun.",
"Modern video players employ complex algorithms to adapt the bitrate of the video that is shown to the user. Bitrate adaptation requires a tradeoff between reducing the probability that the video freezes and enhancing the quality of the video shown to the user. A bitrate that is too high leads to frequent video freezes (i.e., rebuffering), while a bitrate that is too low leads to poor video quality. Video providers segment the video into short chunks and encode each chunk at multiple bitrates. The video player adaptively chooses the bitrate of each chunk that is downloaded, possibly choosing different bitrates for successive chunks. While bitrate adaptation holds the key to a good quality of experience for the user, current video players use ad-hoc algorithms that are poorly understood. We formulate bitrate adaptation as a utility maximization problem and devise an online control algorithm called BOLA that uses Lyapunov optimization techniques to minimize rebuffering and maximize video quality. We prove that BOLA achieves a time-average utility that is within an additive term O(1 V) of the optimal value, for a control parameter V related to the video buffer size. Further, unlike prior work, our algorithm does not require any prediction of available network bandwidth. We empirically validate our algorithm in a simulated network environment using an extensive collection of network traces. We show that our algorithm achieves near-optimal utility and in many cases significantly higher utility than current state-of-the-art algorithms. Our work has immediate impact on real-world video players and BOLA is part of the reference player implementation for the evolving DASH standard for video transmission.",
"Adaptive bitrate streaming (ABR) has become the de facto technique for video streaming over the Internet. Despite a flurry of techniques, achieving high quality ABR streaming over cellular networks remains a tremendous challenge. ABR streaming can be naturally modeled as a feedback control problem. There has been some initial work on using PID, a widely used feedback control technique, for ABR streaming. Existing studies, however, either use PID control directly without fully considering the special requirements of ABR streaming, leading to suboptimal results, or conclude that PID is not a suitable approach. In this paper, we take a fresh look at PID-based control for ABR streaming. We design a framework called PIA that strategically leverages PID control concepts and incorporates several novel strategies to account for the various requirements of ABR streaming. We evaluate PIA using simulation based on real LTE network traces, as well as using real DASH implementation. The results demonstrate that PIA outperforms state-of-the-art schemes in providing high average bitrate with significantly lower bitrate changes (reduction up to 40 ) and stalls (reduction up to 85 ), while incurring very small runtime overhead.",
"Client-side video players employ adaptive bitrate (ABR) algorithms to optimize user quality of experience (QoE). Despite the abundance of recently proposed schemes, state-of-the-art ABR algorithms suffer from a key limitation: they use fixed control rules based on simplified or inaccurate models of the deployment environment. As a result, existing schemes inevitably fail to achieve optimal performance across a broad set of network conditions and QoE objectives. We propose Pensieve, a system that generates ABR algorithms using reinforcement learning (RL). Pensieve trains a neural network model that selects bitrates for future video chunks based on observations collected by client video players. Pensieve does not rely on pre-programmed models or assumptions about the environment. Instead, it learns to make ABR decisions solely through observations of the resulting performance of past decisions. As a result, Pensieve automatically learns ABR algorithms that adapt to a wide range of environments and QoE metrics. We compare Pensieve to state-of-the-art ABR algorithms using trace-driven and real world experiments spanning a wide variety of network conditions, QoE metrics, and video properties. In all considered scenarios, Pensieve outperforms the best state-of-the-art scheme, with improvements in average QoE of 12 --25 . Pensieve also generalizes well, outperforming existing schemes even on networks for which it was not explicitly trained.",
"Dynamic adaptive streaming over HTTP (DASH) has recently been widely deployed in the Internet. It, however, does not impose any adaptation logic for selecting the quality of video fragments requested by clients. In this paper, we propose a novel Markov decision-based rate adaptation scheme for DASH aiming to maximize the quality of user experience under time-varying channel conditions. To this end, our proposed method takes into account those key factors that make a critical impact on visual quality, including video playback quality, video rate switching frequency and amplitude, buffer overflow underflow, and buffer occupancy. Besides, to reduce computational complexity, we propose a low-complexity sub-optimal greedy algorithm which is suitable for real-time video streaming. Our experiments in network test-bed and real-world Internet all demonstrate the good performance of the proposed method in both objective and subjective visual quality.",
"Hypertext transfer protocol (HTTP) is the fundamental mechanics supporting web browsing on the Internet. An HTTP server stores large volumes of contents and delivers specific pieces to the clients when requested. There is a recent move to use HTTP for video streaming as well, which promises seamless integration of video delivery to existing HTTP-based server platforms. This is achieved by segmenting the video into many small chunks and storing these chunks as separate files on the server. For adaptive streaming, the server stores different quality versions of the same chunk in different files to allow real-time quality adaptation of the video due to network bandwidth variation experienced by a client. For each chunk of the video, which quality version to download, therefore, becomes a major decision-making challenge for the streaming client, especially in vehicular environment with significant uncertainty in mobile bandwidth. In this paper, we demonstrate that for such decision making, the Markov decision process (MDP) is superior to previously proposed non-MDP solutions. Using publicly available video and bandwidth datasets, we show that the MDP achieves up to a 15x reduction in playback deadline miss compared to a well-known non-MDP solution when the MDP has the prior knowledge of the bandwidth model. We also consider a model-free MDP implementation that uses Q-learning to gradually learn the optimal decisions by continuously observing the outcome of its decision making. We find that the MDP with Q-learning significantly outperforms the MDP that uses bandwidth models."
]
}
|
1811.06663
|
2963137185
|
Adaptive bitrate (ABR) streaming is the de facto solution for achieving smooth viewing experiences under unstable network conditions. However, most of the existing rate adaptation approaches for ABR are content-agnostic, without considering the semantic information of the video content. Nevertheless, semantic information largely determines the informativeness and interestingness of the video content, and consequently affects the QoE for video streaming. One common case is that the user may expect higher quality for the parts of video content that are more interesting or informative so as to reduce overall subjective quality loss. This creates two main challenges for such a problem: First, how to determine which parts of the video content are more interesting? Second, how to allocate bitrate budgets for different parts of the video content with different significances? To address these challenges, we propose a Content-of-Interest (CoI) based rate adaptation scheme for ABR. We first design a deep learning approach for recognizing the interestingness of the video content, and then design a Deep Q-Network (DQN) approach for rate adaptation by incorporating video interestingness information. The experimental results show that our method can recognize video interestingness precisely, and the bitrate allocation for ABR can be aligned with the interestingness of video content while not compromising the performances on objective QoE metrics.
|
Cavallaro @cite_18 showed that applying semantic video analysis prior to encoding for adaptive content delivery reduces bandwidth requirements. Hu @cite_6 proposed a semantics-aware adaptation scheme for ABR streaming based on semantic analysis of soccer video. Fan @cite_5 utilized various features collected from streaming services to determine whether a video segment attracts viewers, in order to optimize live game streaming. Dong @cite_13 designed a personalized emotion-aware video streaming system based on the user's emotional status. This line of work considers different subjective factors in optimizing video streaming services to improve QoE.
|
{
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_13",
"@cite_6"
],
"mid": [
"2541487934",
"2160881085",
"2327187098",
"2753909025"
],
"abstract": [
"Live game streaming is tremendously popular, and recent reports indicate that such platforms impose high traffic volume, leading to degraded user experience. In this paper, we propose a Segment-of-Interest (SoI) driven platform, so as to optimize live game streaming. Our platform uses various features collected from streamers and viewers to determine if the current segments of gameplays attract viewers. Upon determining the importance of individual segments, the limited bandwidth is allocated to the interested viewers in a Rate-Distortion (R-D) optimized manner, where the levels of segment importance are used as weights of game streaming quality. The underlying intuition is: viewer experience is degraded only when the game streaming degradation is noticed by viewers. Simulation results show the benefits of our proposed solution: (i) it improves viewing quality by up to 5 dB, (ii) it saves bandwidth by up to 50 Gbps, and (iii) it efficiently performs resource allocation and scales to many viewers. Our presented testbed is opensource and can be leveraged by researchers and engineers to further improve live game streaming platforms.",
"We present an encoding framework which exploits semantics for video content delivery. The video content is organized based on the idea of main content message. In the work reported in this paper, the main content message is extracted from the video data through semantic video analysis, an application-dependent process that separates relevant information from non relevant information. We use here semantic analysis and the corresponding content annotation under a new perspective: the results of the analysis are exploited for object-based encoders, such as MPEG-4, as well as for frame-based encoders, such as MPEG-1. Moreover, the use of MPEG-7 content descriptors in conjunction with the video is used for improving content visualization for narrow channels and devices with limited capabilities. Finally, we analyze and evaluate the impact of semantic video analysis in video encoding and show that the use of semantic video analysis prior to encoding sensibly reduces the bandwidth requirements compared to traditional encoders not only for an object-based encoder but also for a frame-based encoder.",
"As a useful tool for improving the user’s quality of experience (QoE), delay announcement has received substantial attention recently. However, how to make a simple and efficient delay announcement in the cloud mobile media environment is still an open and challenging problem. Unfortunately, traditional convex and stochastic optimization-based methods cannot address this issue due to the subjective user response with respect to the announced delay. To resolve this problem, this paper analytically studies the characteristics of delay announcement by analyzing the components of the user response and designs a QoE-driven delay announcement scheme by establishing an objective user response function. On the methodology end, the user response associated with the announced delay is approximated in the framework of fluid model, where the interaction between the system performance and delay announcement is well described by a series of mathematical functions. On the technology end, this paper develops a novel state-dependent announcement scheme that is more reliable than the other competing ones and can improve the user’s QoE dramatically. Extensive simulation results validate the efficiency of the proposed delay announcement scheme.",
"In recent years, quality of experience (QoE) has been investigated and proved to have both influential factors on user's visual quality and perceptual quality, while the perceptual quality means user's requirement on personalized content should be acquired in optimized quality. That's to say, those segments holding user interested content such as highlights need to be allocated more network resource in a resource-limited streaming scenario. However, all the existing HTTP-based adaptive methods only focus the content-agnostic bitrate adaptation according to limited network resources or energy resource, since they ignored user perceived semantics on some important segments, which suffered less quality on the important segments than on those ordinary ones, so as to hurt the overall QoE. In this paper, we have proposed a new semantic-aware adaptation scheme for MPEG-DASH services, which decides how to preserve bandwidth and buffering time depending on content descriptors for the perceived important content to users. Further, a semantic-aware probe and adaptation (SMA-PANDA) algorithm has been implemented in a DASH client to compare with conventional bitrate adaptions. Preliminary results show that SMA-PANDA achieves better QoE and flexibility on streaming user's interested content on MPEG-DASH platform, and it also aggressively helps user interested content compete more resource to deliver high quality presentation."
]
}
|
1811.06846
|
2900867487
|
In this work, we investigate if previously proposed CNNs for fingerprint pore detection overestimate the number of required model parameters for this task. We show that this is indeed the case by proposing a fully convolutional neural network that has significantly fewer parameters. We evaluate this model using a rigorous and reproducible protocol, which was, prior to our work, not available to the community. Using our protocol, we show that the proposed model, when combined with post-processing, performs better than previous methods, albeit being much more efficient. All our code is available at this https URL
|
Wang et al. @cite_1 use a U-Net to detect pores in fingerprint images. The U-Net is trained to classify each spatial location in the image into one of three categories: pore centroid, ridge, or background. To detect pores, 20 patches are extracted from the input image and each is forwarded through the trained CNN. This results in 3 probability maps per patch: one indicating the probability of pores, another of ridges, and the last of background regions. The ridge probability map for each patch is used to post-process the one for pores. Afterward, the pore probability maps are binarized. To combine the predictions for each patch, a boolean "or" operation is performed over all of them.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2765940700"
],
"abstract": [
"The public demand for personal safety is increasing rapidly. Fingerprint features as the most commonly used bio-signature need to improve their safety continuously. The third level features of fingerprint (especially the sweat pores) can be added to the automatic fingerprint recognition system to increase the accuracy of fingerprint identification in a variety of environments. Due to perspiration activities, the shape and size sweat of pores are varying spatially and temporally. Extraction of fingerprint pores is both critical and challenging. In this paper, we adapt a novel fully convolutional neural network called U-net for ridges and sweat pores extraction. The PolyU High-Resolution-Fingerprint (HRF) database is used for testing of the proposed method. The results show the validity of the proposed method. With the majority of the pores correctly extracted, the proposed method can serve for fingerprint recognition using Level 3 features."
]
}
|
1811.06837
|
2901813505
|
Code generation maps a program description to executable source code in a programming language. Existing approaches mainly rely on a recurrent neural network (RNN) as the decoder. However, we find that a program contains significantly more tokens than a natural language sentence, and thus it may be inappropriate for RNN to capture such a long sequence. In this paper, we propose a grammar-based structural convolutional neural network (CNN) for code generation. Our model generates a program by predicting the grammar rules of the programming language; we design several CNN modules, including the tree-based convolution and pre-order convolution, whose information is further aggregated by dedicated attentive pooling layers. Experimental results on the HearthStone benchmark dataset show that our CNN code generator significantly outperforms the previous state-of-the-art method by 5 percentage points; additional experiments on several semantic parsing tasks demonstrate the robustness of our model. We also conduct in-depth ablation test to better understand each component of our model.
|
Early studies on code generation mostly focus on domain-specific languages @cite_22 @cite_25 @cite_18 . They are largely based on rules and human-defined features, and thus are highly restricted. Recently, researchers have introduced neural networks to generate code in a general-purpose programming language. One approach adopts a sequence-to-sequence model, but enhances it with multiple predictors. Other studies generate programs along abstract syntax trees @cite_21 @cite_2 @cite_20 . However, their decoders are all based on RNNs, which are shown to be improper for code generation in our experiments. CNNs were originally used in classification tasks @cite_17 @cite_0 . A tree-based CNN has been proposed to capture structural information. Such an idea can be extended to general graphs, e.g., molecule analysis @cite_6 . Recently, researchers have developed deep CNNs for decoders @cite_24 @cite_15 . In our paper, we incorporate the ideas of structure-sensitive CNNs and CNNs for generation, and design a grammar-based structural CNN for code generation.
|
{
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_17"
],
"mid": [
"",
"1496189301",
"2224454470",
"",
"2163605009",
"2613904329",
"2610002206",
"2799201288",
"",
"2605887895",
"1538131130"
],
"abstract": [
"",
"This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"Semantic parsing aims at mapping natural language to machine interpretable meaning representations. Traditional approaches rely on high-quality lexicons, manually-built templates, and linguistic features which are either domain- or representation-specific. In this paper we present a general method based on an attention-enhanced encoder-decoder model. We encode input utterances into vector representations, and generate their logical forms by conditioning the output sequences or trees on the encoding vectors. Experimental results on four datasets show that our approach performs competitively without using hand-engineered features and is easy to adapt across domains and meaning representations.",
"",
"We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.",
"The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.",
"Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7% exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1%. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering.",
"",
"",
"We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing data-driven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.",
""
]
}
|
1811.06817
|
2901453561
|
A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains. One such domain, self-driving, has benefited from significant performance improvements, with millions of miles having been driven with no human intervention. Despite this, crashes and erroneous behaviours still occur, in part due to the complexity of verifying the correctness of DNNs and a lack of safety guarantees. In this paper, we demonstrate how quantitative measures of uncertainty can be extracted in real-time, and their quality evaluated in end-to-end controllers for self-driving cars. To this end we utilise a recent method for gathering approximate uncertainty information from DNNs without changing the network's architecture. We propose evaluation techniques for the uncertainty on two separate architectures which use the uncertainty to predict crashes up to five seconds in advance. We find that mutual information, a measure of uncertainty in classification networks, is a promising indicator of forthcoming crashes.
|
With the exception of @cite_1 @cite_23 , DNN-based approaches for autonomous driving do not often consider model uncertainty. Recent work by @cite_4 has seen the addition of discrete speed control prediction, along with steering angle prediction, to end-to-end controllers for self-driving cars. Their work aims to make DNN-based controllers more viable, as steering angle alone is not sufficient for vehicle control. The resulting multi-modal multi-task network was shown to predict both steering angle and speed commands accurately, but does not make use of uncertainty in any way.
|
{
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_23"
],
"mid": [
"2785147712",
"2103328396",
"2279895976"
],
"abstract": [
"Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.",
"We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets.",
"We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance."
]
}
|
1811.06817
|
2901453561
|
A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains. One such domain, self-driving, has benefited from significant performance improvements, with millions of miles having been driven with no human intervention. Despite this, crashes and erroneous behaviours still occur, in part due to the complexity of verifying the correctness of DNNs and a lack of safety guarantees. In this paper, we demonstrate how quantitative measures of uncertainty can be extracted in real-time, and their quality evaluated in end-to-end controllers for self-driving cars. To this end we utilise a recent method for gathering approximate uncertainty information from DNNs without changing the network's architecture. We propose evaluation techniques for the uncertainty on two separate architectures which use the uncertainty to predict crashes up to five seconds in advance. We find that mutual information, a measure of uncertainty in classification networks, is a promising indicator of forthcoming crashes.
|
The work on pixel-wise semantic segmentation in @cite_1 utilised model uncertainty to improve segmentation performance. In addition, the authors showed that the highest areas of uncertainty occurred on class boundaries. These results were reinforced by @cite_11 , which considers several other methods and also concludes that uncertainty maps are a good measure of uncertainty in segmented images.
|
{
"cite_N": [
"@cite_1",
"@cite_11"
],
"mid": [
"2103328396",
"2480078828"
],
"abstract": [
"We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixel-wise class labels with a measure of model uncertainty. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3 across a number of state of the art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation. We also observe a significant improvement in performance for smaller datasets where modelling uncertainty is more effective. We benchmark Bayesian SegNet on the indoor SUN Scene Understanding and outdoor CamVid driving scenes datasets.",
"We propose a deep Convolutional Neural Network (CNN) for land cover mapping in remote sensing images, with a focus on urban areas. In remote sensing, class imbalance represents often a problem for tasks like land cover mapping, as small objects get less prioritised in an effort to achieve the best overall accuracy. We propose a novel approach to achieve high overall accuracy, while still achieving good accuracy for small objects. Quantifying the uncertainty on a pixel scale is another challenge in remote sensing, especially when using CNNs. In this paper we use recent advances in measuring uncertainty for CNNs and evaluate their quality both qualitatively and quantitatively in a remote sensing context. We demonstrate our ideas on different deep architectures including patch-based and so-called pixel-to-pixel approaches, as well as their combination, by classifying each pixel in a set of aerial images covering Vaihingen, Germany. The results show that we obtain an overall classification accuracy of 87%. The corresponding F1-score for the small object class \"car\" is 80.6%, which is higher than state-of-the-art for this dataset."
]
}
|
1811.06817
|
2901453561
|
A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains. One such domain, self-driving, has benefited from significant performance improvements, with millions of miles having been driven with no human intervention. Despite this, crashes and erroneous behaviours still occur, in part due to the complexity of verifying the correctness of DNNs and a lack of safety guarantees. In this paper, we demonstrate how quantitative measures of uncertainty can be extracted in real-time, and their quality evaluated in end-to-end controllers for self-driving cars. To this end we utilise a recent method for gathering approximate uncertainty information from DNNs without changing the network's architecture. We propose evaluation techniques for the uncertainty on two separate architectures which use the uncertainty to predict crashes up to five seconds in advance. We find that mutual information, a measure of uncertainty in classification networks, is a promising indicator of forthcoming crashes.
|
In 2016, Kendall and Cipolla @cite_23 developed tools for the localisation of a car given a forward facing photo. They found that model uncertainty correlated to positional error; test photos with strong occlusion resulted in high uncertainty and the uncertainty displayed a linearly increasing trend with the distance from the training set.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2279895976"
],
"abstract": [
"We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6 degrees accuracy for very large scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance."
]
}
|
1811.06524
|
2949067929
|
In many domains, collecting sufficient labeled training data for supervised machine learning requires easily accessible but noisy sources, such as crowdsourcing services or tagged Web data. Noisy labels occur frequently in data sets harvested via these means, sometimes resulting in entire classes of data on which learned classifiers generalize poorly. For real world applications, we argue that it can be beneficial to avoid training on such classes entirely. In this work, we aim to explore the classes in a given data set, and guide supervised training to spend time on a class proportional to its learnability. By focusing the training process, we aim to improve model generalization on classes with a strong signal. To that end, we develop an online algorithm that works in conjunction with classifier and training algorithm, iteratively selecting training data for the classifier based on how well it appears to generalize on each class. Testing our approach on a variety of data sets, we show our algorithm learns to focus on classes for which the model has low generalization error relative to strong baselines, yielding a classifier with good performance on learnable classes.
|
Collecting sufficient data for training classifiers is a classic problem in machine learning. The Web is the canonical example of a noisy source of vast amounts of data, and has provided many data sets on which to train models, e.g., @cite_6 @cite_8 @cite_9 . When labeling examples in these data sets, techniques like active learning @cite_12 can help limit the labeling that needs to be done by humans, only asking for labels on inputs a model cannot predict confidently. Our work is similar to active learning in that we have an algorithm guiding the selection of training data, but differs in the criteria for data selection, and we assume all data has already been labeled by some process.
|
{
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_6",
"@cite_8"
],
"mid": [
"2155904486",
"",
"2108598243",
"2141362318"
],
"abstract": [
"Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.",
"",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"In this paper, we present a large-scale object retrieval system. The user supplies a query object by selecting a region of a query image, and the system returns a ranked list of images that contain the same object, retrieved from a large corpus. We demonstrate the scalability and performance of our system on a dataset of over 1 million images crawled from the photo-sharing site, Flickr [3], using Oxford landmarks as queries. Building an image-feature vocabulary is a major time and performance bottleneck, due to the size of our dataset. To address this problem we compare different scalable methods for building a vocabulary and introduce a novel quantization method based on randomized trees which we show outperforms the current state-of-the-art on an extensive ground-truth. Our experiments show that the quantization has a major effect on retrieval quality. To further improve query performance, we add an efficient spatial verification stage to re-rank the results returned from our bag-of-words model and show that this consistently improves search quality, though by less of a margin when the visual vocabulary is large. We view this work as a promising step towards much larger, \"web-scale \" image corpora."
]
}
|
1811.06524
|
2949067929
|
In many domains, collecting sufficient labeled training data for supervised machine learning requires easily accessible but noisy sources, such as crowdsourcing services or tagged Web data. Noisy labels occur frequently in data sets harvested via these means, sometimes resulting in entire classes of data on which learned classifiers generalize poorly. For real world applications, we argue that it can be beneficial to avoid training on such classes entirely. In this work, we aim to explore the classes in a given data set, and guide supervised training to spend time on a class proportional to its learnability. By focusing the training process, we aim to improve model generalization on classes with a strong signal. To that end, we develop an online algorithm that works in conjunction with classifier and training algorithm, iteratively selecting training data for the classifier based on how well it appears to generalize on each class. Testing our approach on a variety of data sets, we show our algorithm learns to focus on classes for which the model has low generalization error relative to strong baselines, yielding a classifier with good performance on learnable classes.
|
Techniques that handle noisy data with existing labels range from hand curating the data set @cite_22 , to adding model machinery intended to deal with noise @cite_26 , to including examples based on a dynamically calculated majority vote from a crowd @cite_6 . Most standard noise-handling techniques work at the instance level. Those that work at the label level @cite_22 @cite_14 usually remove data from a training set without explicitly being informed by the training process, but rather by characteristics of the data (e.g., not having sufficient quality examples). Another approach is to create new categories that account for some noise @cite_28 @cite_27 . Works like @cite_6 @cite_13 keep track of how difficult a label appears based on crowd agreement @cite_20 in order to include or exclude various examples. In contrast, our work attempts to automatically determine good labels based on model performance, not data set size or crowd factors.
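The crowd-agreement signal mentioned above can be sketched in a few lines of Python. This is only an illustration: the function, the toy votes, and the use of the agreement ratio as a difficulty score are assumptions for exposition, not the mechanism of any cited system.

```python
from collections import Counter

def aggregate_votes(votes_per_item):
    """Majority-vote each item's crowd labels; the agreement ratio serves
    as a simple difficulty signal (low agreement = ambiguous label)."""
    results = {}
    for item, votes in votes_per_item.items():
        label, count = Counter(votes).most_common(1)[0]
        results[item] = (label, count / len(votes))
    return results

votes = {
    "img1": ["cat", "cat", "cat"],         # unanimous -> easy label
    "img2": ["cat", "dog", "dog", "bird"]  # split vote -> ambiguous label
}
agg = aggregate_votes(votes)
# agg["img1"] == ("cat", 1.0); agg["img2"] == ("dog", 0.5)
```

Items whose agreement ratio falls below a threshold could then be excluded from training, in the spirit of the crowd-agreement filtering described above.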
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_27",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"1866072925",
"",
"2122528955",
"2108598243",
"2107698128",
"2949474740",
"1970381522"
],
"abstract": [
"",
"The availability of large labeled datasets has allowed Convolutional Network models to achieve impressive recognition results. However, in many settings manual annotation of the data is impractical; instead our data has noisy labels, i.e. there is some freely available label for each image which may or may not be accurate. In this paper, we explore the performance of discriminatively-trained Convnets when trained on such noisy data. We introduce an extra noise layer into the network which adapts the network outputs to match the noisy label distribution. The parameters of this noise layer can be estimated as part of the training process and involve simple modifications to current training infrastructures for deep networks. We demonstrate the approaches on several datasets, including large scale experiments on the ImageNet classification benchmark.",
"",
"We introduce a new descriptor for images which allows the construction of efficient and compact classifiers with good accuracy on object category recognition. The descriptor is the output of a large number of weakly trained object category classifiers on the image. The trained categories are selected from an ontology of visual concepts, but the intention is not to encode an explicit decomposition of the scene. Rather, we accept that existing object category classifiers often encode not the category per se but ancillary image characteristics; and that these ancillary characteristics can combine to represent visual classes unrelated to the constituent categories' semantic meanings. The advantage of this descriptor is that it allows object-category queries to be made against image databases using efficient classifiers (efficient at test time) such as linear support vector machines, and allows these queries to be for novel categories. Even when the representation is reduced to 200 bytes per image, classification accuracy on object category recognition is comparable with the state of the art (36% versus 42%), but at orders of magnitude lower computational cost.",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Obtaining effective mid-level representations has become an increasingly important task in computer vision. In this paper, we propose a fully automatic algorithm which harvests visual concepts from a large number of Internet images (more than a quarter of a million) using text-based queries. Existing approaches to visual concept learning from Internet images either rely on strong supervision with detailed manual annotations or learn image-level classifiers only. Here, we take the advantage of having massive well organized Google and Bing image data, visual concepts (around 14,000) are automatically exploited from images using word-based queries. Using the learned visual concepts, we show state-of-the-art performances on a variety of benchmark datasets, which demonstrate the effectiveness of the learned mid-level representations: being able to generalize well to general natural images. Our method shows significant improvement over the competing systems in image classification, including those with strong supervision.",
"Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that \"the person is riding a horse-drawn carriage\". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.",
"Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense."
]
}
|
1811.06524
|
2949067929
|
In many domains, collecting sufficient labeled training data for supervised machine learning requires easily accessible but noisy sources, such as crowdsourcing services or tagged Web data. Noisy labels occur frequently in data sets harvested via these means, sometimes resulting in entire classes of data on which learned classifiers generalize poorly. For real world applications, we argue that it can be beneficial to avoid training on such classes entirely. In this work, we aim to explore the classes in a given data set, and guide supervised training to spend time on a class proportional to its learnability. By focusing the training process, we aim to improve model generalization on classes with a strong signal. To that end, we develop an online algorithm that works in conjunction with classifier and training algorithm, iteratively selecting training data for the classifier based on how well it appears to generalize on each class. Testing our approach on a variety of data sets, we show our algorithm learns to focus on classes for which the model has low generalization error relative to strong baselines, yielding a classifier with good performance on learnable classes.
|
Previous methods that oversee the data selection process for training classifiers have had a variety of goals. Many models are interested in minimizing time to convergence. @cite_15 present a curriculum learner that generates an ordering of training tasks based on task difficulty, the intuition being that easy examples help models earlier in training, while harder examples are more appropriate later. Our work follows this same core idea, yet the problem we aim to solve is fundamentally different. While they train a model to complete a pre-determined set of tasks, we learn to focus on tasks, i.e. classes, that are easiest to learn for a model. Like us, @cite_3 are interested in manipulating the selection of training examples, though their goal is to choose examples based on suitability for transfer learning. In contrast, our approach is more exploratory, as the only criterion for selecting classes is the resultant performance of a given model. Finally, @cite_2 propose to supervise model training with a deep reinforcement learning algorithm. However, rather than actively select batches of training data, they train a filter to ignore certain examples within a given mini-batch at each training step.
|
{
"cite_N": [
"@cite_15",
"@cite_3",
"@cite_2"
],
"mid": [
"2964327384",
"2736422900",
"2594061220"
],
"abstract": [
"",
"Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are -- to some degree -- transferable across models, domains, and even tasks.",
"Machine learning is essentially the science of playing with data. An adaptive data selection strategy, enabling to dynamically choose different data at various training stages, can reach a more effective model in a more efficient way. In this paper, we propose a deep reinforcement learning framework, which we call Neural Data Filter (NDF), to explore automatic and adaptive data selection in the training process. In particular, NDF takes advantage of a deep neural network to adaptively select and filter important data instances from a sequential stream of training data, such that the future accumulative reward (e.g., the convergence speed) is maximized. In contrast to previous studies in data selection that is mainly based on heuristic strategies, NDF is quite generic and thus can be widely suitable for many machine learning tasks. Taking neural network training with stochastic gradient descent (SGD) as an example, comprehensive experiments with respect to various neural network modeling (e.g., multi-layer perceptron networks, convolutional neural networks and recurrent neural networks) and several applications (e.g., image classification and text understanding) demonstrate that NDF powered SGD can achieve comparable accuracy with standard SGD process by using less data and fewer iterations."
]
}
|
1811.06524
|
2949067929
|
In many domains, collecting sufficient labeled training data for supervised machine learning requires easily accessible but noisy sources, such as crowdsourcing services or tagged Web data. Noisy labels occur frequently in data sets harvested via these means, sometimes resulting in entire classes of data on which learned classifiers generalize poorly. For real world applications, we argue that it can be beneficial to avoid training on such classes entirely. In this work, we aim to explore the classes in a given data set, and guide supervised training to spend time on a class proportional to its learnability. By focusing the training process, we aim to improve model generalization on classes with a strong signal. To that end, we develop an online algorithm that works in conjunction with classifier and training algorithm, iteratively selecting training data for the classifier based on how well it appears to generalize on each class. Testing our approach on a variety of data sets, we show our algorithm learns to focus on classes for which the model has low generalization error relative to strong baselines, yielding a classifier with good performance on learnable classes.
|
In our approach, data is selected at each training step using a Multi-Armed Bandit (MAB) algorithm; the bandit problem was first introduced in @cite_17 . The original formulation of the problem assumes that the reward for a given decision made by the algorithm is drawn from a fixed distribution every time that decision is made (i.e. a stochastic payoff function). In our setting, rewards are based on the state of a classifier at a given training step, which changes over time. Bandit algorithms developed for such scenarios are called adversarial MAB algorithms @cite_5 , and make no assumptions on the payoff structure of decisions. We additionally assume that the class set we are choosing from is large, rendering naive selection strategies ineffective. For these settings, it is common to assume a structure on the decision-space, specifically that there exists a means of measuring similarity between decisions @cite_25 . One such approach is to assume that the reward function changes smoothly over the decision space, i.e. that similar decisions will have similar payoffs @cite_16 . We use a time-varying version of this algorithm presented in @cite_10 .
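For intuition, the class-selection loop described above can be sketched with a classical stochastic UCB1 rule (not the time-varying GP-UCB variant of @cite_10 ). The class names, the deterministic "learnability" rewards, and the exploration constant are illustrative assumptions only.

```python
import math

def ucb_select(counts, means, t, c=2.0):
    """Pick the class maximizing mean reward plus exploration bonus (UCB1).
    Classes that have never been tried are selected first."""
    for k, n in counts.items():
        if n == 0:
            return k
    return max(counts, key=lambda k: means[k] + math.sqrt(c * math.log(t) / counts[k]))

# Toy loop: the "reward" stands in for held-out accuracy gain after
# spending a training step on the chosen class (deterministic here).
learnability = {"classA": 0.8, "classB": 0.2}
counts = {k: 0 for k in learnability}
means = {k: 0.0 for k in learnability}
for t in range(1, 201):
    k = ucb_select(counts, means, t)
    reward = learnability[k]
    counts[k] += 1
    means[k] += (reward - means[k]) / counts[k]  # running mean update
```

As the exploration bonus shrinks, pulls concentrate on the class with higher observed reward, mirroring the behavior of spending training time proportional to learnability.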
|
{
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"2116067849",
"2951665052",
"2271627589",
"1620761767",
"1998498767"
],
"abstract": [
"In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T^(-1/3)), and we give an improved rate of convergence when the best arm has fairly low payoff. We also consider a setting in which the player has a team of \"experts\" advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.",
"Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristical GP optimization approaches.",
"We consider the sequential Bayesian optimization problem with bandit feedback, adopting a formulation that allows for the reward function to vary with time. We model the reward function using a Gaussian process whose evolution obeys a simple Markov model. We introduce two natural extensions of the classical Gaussian process upper confidence bound (GP-UCB) algorithm. The first, R-GP-UCB, resets GP-UCB at regular intervals. The second, TV-GP-UCB, instead forgets about old data in a smooth fashion. Our main contribution comprises of novel regret bounds for these algorithms, providing an explicit characterization of the trade-off between the time horizon and the rate at which the function varies. We illustrate the performance of the algorithms on both synthetic and real data, and we find the gradual forgetting of TV-GP-UCB to perform favorably compared to the sharp resetting of R-GP-UCB. Moreover, both algorithms significantly outperform classical GP-UCB, since it treats stale and fresh data equally.",
"In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the \"Lipschitz MAB problem\". We present a solution for the multi-armed bandit problem in this setting. That is, for every metric space we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for this metric space, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions. We also address the full-feedback (\"best expert\") version of the problem, where after every round the payoffs from all arms are revealed.",
"Until recently, statistical theory has been restricted to the design and analysis of sampling experiments in which the size and composition of the samples are completely determined before the experimentation begins. The reasons for this are partly historical, dating back to the time when the statistician was consulted, if at all, only after the experiment was over, and partly intrinsic in the mathematical difficulty of working with anything but a fixed number of independent random variables. A major advance now appears to be in the making with the creation of a theory of the sequential design of experiments, in which the size and composition of the samples are not fixed in advance but are functions of the observations themselves."
]
}
|
1811.06823
|
2901067419
|
Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.
|
In @cite_31 , the authors gave a @math -competitive algorithm for rectilinear polygon exploration with unlimited vision. The case of non-rectilinear polygons (without obstacles) was also studied in @cite_42 @cite_21 , where a competitive algorithm was given for this case.
|
{
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_21"
],
"mid": [
"2050037634",
"2126572568",
"2075002099"
],
"abstract": [
"We consider the problem faced by a robot that must explore and learn an unknown room with obstacles in it. We seek algorithms that achieve a bounded ratio of the worst-case distance traversed in order to see all visible points of the environment (thus creating a map), divided by the optimum distance needed to verify the map, if we had it in the beginning. The situation is complicated by the fact that the latter off-line problem (the problem of optimally verifying a map) is NP-hard. Although we show that there is no such “competitive” algorithm for general obstacle courses, we give a competitive algorithm for the case of a polygonal room with a bounded number of obstacles in it. We restrict ourselves to the rectilinear case, where each side of the obstacles and the room is parallel to one of the coordinates, and the robot must also move either parallel or perpendicular to the sides. (In a subsequent paper, we will discuss the extension to polygons of general shapes.) We also discuss the off-line problem for simple rectilinear polygons and find an optimal solution (in the L 1 metric) in polynomial time, in the case where the entry and the exit are different points.",
"The authors consider the problem faced by a newborn that must explore and learn an unknown room with obstacles in it. They seek algorithms that achieve a bounded ratio of the worst-case distance traversed in order to see all visible points of the environment (thus creating a map), divided by the optimum distance needed to verify the map. The situation is complicated by the fact that the latter offline problem (optimally verifying a map) is NP-hard and thus must be solved approximately. Although the authors show that there is no such competitive algorithm for general obstacle courses, they give a competitive algorithm for the case of a polygonal room with a bounded number of obstacles in it.",
"We present an on-line strategy that enables a mobile robot with vision to explore an unknown simple polygon. We prove that the resulting tour is less than 26.5 times as long as the shortest watchman tour that could be computed off-line. Our analysis is doubly founded on a novel geometric structure called angle hull. Let D be a connected region inside a simple polygon, P. We define the angle hull of D, @math , to be the set of all points in P that can see two points of D at a right angle. We show that the perimeter of @math cannot exceed in length the perimeter of D by more than a factor of 2. This upper bound is tight."
]
}
|
1811.06823
|
2901067419
|
Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.
|
For polygonal environments with an arbitrary number of polygonal obstacles, it was shown in @cite_31 that no competitive strategy exists, even if all obstacles are parallelograms. Later, this result was improved in @cite_16 by giving a lower bound of @math on the competitive ratio of any on-line algorithm exploring a polygon with @math obstacles. This bound remains true even for rectangular obstacles. Nevertheless, if the number of obstacles is bounded by a constant @math , then there exists a competitive algorithm with competitive ratio in @math @cite_42 .
|
{
"cite_N": [
"@cite_31",
"@cite_16",
"@cite_42"
],
"mid": [
"2050037634",
"2158277215",
"2126572568"
],
"abstract": [
"We consider the problem faced by a robot that must explore and learn an unknown room with obstacles in it. We seek algorithms that achieve a bounded ratio of the worst-case distance traversed in order to see all visible points of the environment (thus creating a map), divided by the optimum distance needed to verify the map, if we had it in the beginning. The situation is complicated by the fact that the latter off-line problem (the problem of optimally verifying a map) is NP-hard. Although we show that there is no such “competitive” algorithm for general obstacle courses, we give a competitive algorithm for the case of a polygonal room with a bounded number of obstacles in it. We restrict ourselves to the rectilinear case, where each side of the obstacles and the room is parallel to one of the coordinates, and the robot must also move either parallel or perpendicular to the sides. (In a subsequent paper, we will discuss the extension to polygons of general shapes.) We also discuss the off-line problem for simple rectilinear polygons and find an optimal solution (in the L 1 metric) in polynomial time, in the case where the entry and the exit are different points.",
"We study exploration problems where a robot has to construct a complete map of an unknown environment using a path that is as short as possible.",
"The authors consider the problem faced by a newborn that must explore and learn an unknown room with obstacles in it. They seek algorithms that achieve a bounded ratio of the worst-case distance traversed in order to see all visible points of the environment (thus creating a map), divided by the optimum distance needed to verify the map. The situation is complicated by the fact that the latter offline problem (optimally verifying a map) is NP-hard and thus must be solved approximately. Although the authors show that there is no such competitive algorithm for general obstacle courses, they give a competitive algorithm for the case of a polygonal room with a bounded number of obstacles in it."
]
}
|
1811.06823
|
2901067419
|
Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.
|
Exploration of polygons by a robot with limited vision has been studied, e.g., in @cite_27 @cite_7 @cite_0. In @cite_27, the authors described an on-line algorithm with competitive ratio @math, where @math is a quantity depending on the perimeter of the polygon, @math is the area seen by the robot, and @math is the area of the polygon. In @cite_0, the author studied exploration of the boundary of a terrain with limited vision. The cost of exploring arbitrary terrains with obstacles, both with limited and with unlimited vision, was studied in @cite_30.
|
{
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_30",
"@cite_7"
],
"mid": [
"2043003284",
"1873425142",
"2044973122",
"2142617093"
],
"abstract": [
"We consider the motion planning problem for a point constrained to move along a smooth closed convex path of bounded curvature. The workspace of the moving point is bounded by a convex polygon with m vertices, containing an obstacle in a form of a simple polygon with n vertices. We present an O(m+n) time algorithm finding the path, going around the obstacle, whose curvature is the smallest possible.",
"The paper considers the problem of covering a continuous planar area by a square-shaped tool attached to a mobile robot. Using a tool-based approximation of the work-area, we present an algorithm that covers every point of the approximate area. The algorithm, called spanning tree covering (STC), subdivides the work-area into disjoint cells corresponding to the square-shaped tool, then follows a spanning tree of the graph induced by the cells, while covering every point precisely once. We present and analyze three versions of the STC algorithm. The first version is an off-line algorithm that computes an optimal covering path in linear time O(N), where N is the number of cells comprising the approximate area. The second version is an online or sensor based algorithm, that completes an optimal covering path in time O(N), but requires O(N) memory for its implementation. The third version of STC is \"ant\"-like, where the robot may leave pheromone-like markers during the coverage process. The ant-like STC algorithm runs in time O(N) and requires only O(1) memory. We present simulation results of the three STC algorithms, demonstrating their effectiveness in cases where the tool size is significantly smaller than the work-area characteristic dimension.",
"A mobile robot represented by a point moving in the plane has to explore an unknown flat terrain with impassable obstacles. Both the terrain and the obstacles are modeled as arbitrary polygons. We consider two scenarios: the unlimited vision, when the robot situated at a point p of the terrain explores (sees) all points q of the terrain for which the segment pq belongs to the terrain, and the limited vision, when we require additionally that the distance between p and q is at most 1. All points of the terrain (except obstacles) have to be explored and the performance of an exploration algorithm, called its complexity, is measured by the length of the trajectory of the robot. For unlimited vision we show an exploration algorithm with complexity O(P+D√k), where P is the total perimeter of the terrain (including perimeters of obstacles), D is the diameter of the convex hull of the terrain, and k is the number of obstacles. We do not assume knowledge of these parameters. We also prove a matching lower bound showing that the above complexity is optimal, even if the terrain is known to the robot. For limited vision we show exploration algorithms with complexity O(P+A+√(Ak)), where A is the area of the terrain (excluding obstacles). Our algorithms work either for arbitrary terrains (if one of the parameters A or k is known) or for c-fat terrains, where c is any constant (unknown to the robot) and no additional knowledge is assumed. (A terrain T with obstacles is c-fat if R/r ≤ c, where R is the radius of the smallest disc containing T and r is the radius of the largest disc contained in T.)",
"The context of this work is the exploration of unknown polygonal environments with obstacles. Both the outer boundary and the boundaries of obstacles are piecewise linear. The boundaries can be nonconvex. The exploration problem can be motivated by the following application. Imagine that a robot has to explore the interior of a collapsed building, which has crumbled due to an earthquake, to search for human survivors. It is clearly impossible to have a knowledge of the building's interior geometry prior to the exploration. Thus, the robot must be able to see, with its onboard vision sensors, all points in the building's interior while following its exploration path. In this way, no potential survivors will be missed by the exploring robot. The exploratory path must clearly reflect the topology of the free space, and, therefore, such exploratory paths can be used to guide future robot excursions (such as would arise in our example from a rescue operation)."
]
}
|
1811.06823
|
2901067419
|
Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.
|
Navigation in a @math square room filled with rectangular obstacles aligned with the sides of the square was considered in @cite_15 @cite_36 @cite_3 @cite_25. It was shown in @cite_15 that navigation from a corner to the center of the room can be performed with competitive ratio @math, using only tactile information (i.e., the robot, modeled as a point, sees an obstacle only when it touches it). No deterministic algorithm can achieve a better competitive ratio, even with unlimited vision @cite_15. For navigation between an arbitrary pair of points, there is a deterministic algorithm achieving competitive ratio @math @cite_3, and no deterministic algorithm can achieve a better competitive ratio @cite_25. However, there is a randomized approach performing navigation with competitive ratio @math @cite_36.
|
{
"cite_N": [
"@cite_36",
"@cite_15",
"@cite_25",
"@cite_3"
],
"mid": [
"1971116024",
"1983655539",
"1992876208",
"2095383780"
],
"abstract": [
"",
"We consider the problem of navigating through an unknown environment in which the obstacles are disjoint oriented rectangles enclosed in an n x n square room. The task of the navigating algorithm is to reach the center of the room starting from one of the corners. While there always exists a path of length n, the best previously known navigating algorithm finds paths of length n·2^{O(√(ln n))}. We give an efficient deterministic algorithm which finds a path of length O(n ln n); this algorithm uses tactile information only. Moreover, we prove that any deterministic algorithm can be forced to traverse a distance of Ω(n ln n), even if it uses visual information.",
"We study several versions of the shortest-path problem when the map is not known in advanced, but is specified dynamically. We are seeking dynamic decision rules that optimize the worst-case ratio of the distance covered to the length of the (statically) optimal path. We describe optimal decision rules for two cases: Layered graphs of bounded width, and two-dimensional scenes with unit square obstacles. The optimal rules turn out to be intuitive, common-sense heuristics. For slightly more general graphs and scenes, we show that no bounded ratio is possible. We also show that the computational problem of devising a strategy that achieves a given worst-case ratio to the optimum path in a graph is a universal two-person game, and thus PSPACE-complete, whereas optimizing the expected ratio is #P-hard.",
"Consider a robot that has to travel from a start location @math to a target @math in an environment with opaque obstacles that lie in its way. The robot always knows its current absolute position and that of the target. It does not, however, know the positions and extents of the obstacles in advance; rather, it finds out about obstacles as it encounters them. We compare the distance walked by the robot in going from @math to @math to the length of the shortest (obstacle-free) path between @math and @math in the scene. We describe and analyze robot strategies that minimize this ratio for different kinds of scenes. In particular, we consider the cases of rectangular obstacles aligned with the axes, rectangular obstacles in more general orientations, and wider classes of convex bodies both in two and three dimensions. For many of these situations, our algorithms are optimal up to constant factors. We study scenes with nonconvex obstacles, which are related to the study of maze traversal. We also show scenes where randomized algorithms are provably better than deterministic algorithms."
]
}
|
1811.06823
|
2901067419
|
Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost @math , where @math is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Information complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost @math . We first consider treasure hunt in regular terrains which are defined as convex polygons with convex @math -fat obstacles, for some constant @math . A polygon is @math -fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most @math . For the class of regular terrains, we establish the exact information complexity of treasure hunt. We then show that information complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.
|
Algorithms with advice. The paradigm of algorithms with advice was developed mostly for tasks in graphs. Providing nodes or agents with arbitrary kinds of knowledge that can be used to increase the efficiency of solutions to network problems was proposed in @cite_38 @cite_2 @cite_20 @cite_4 @cite_1 @cite_18 @cite_19 @cite_29 @cite_10 @cite_24 @cite_35 @cite_13 @cite_32 @cite_8 @cite_5 @cite_11. This approach is referred to as algorithms with advice. The advice is given either to the nodes of the network or to mobile agents performing some task in it. In the first case, when different nodes can get different information, the term informative labeling schemes is sometimes used instead of advice.
|
{
"cite_N": [
"@cite_13",
"@cite_38",
"@cite_18",
"@cite_35",
"@cite_4",
"@cite_8",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2025590344",
"2038319432",
"1983693678",
"1975011672",
"",
"",
"1971694274",
"2056295140",
"2034501275",
"1975595616",
"2046334554",
"2174013141",
"2251394987",
"2109659895",
"2045446569"
],
"abstract": [
"",
"We consider the following problem. Given a rooted tree @math , label the nodes of @math in the most compact way such that, given the labels of two nodes @math and @math , one can determine in constant time, by looking only at the labels, whether @math is ancestor of @math . The best known labeling scheme is rather straightforward and uses labels of length at most @math bits each, where @math is the number of nodes in the tree. Our main result in this paper is a labeling scheme with maximum label length @math . Our motivation for studying this problem is enhancing the performance of web search engines. In the context of this application each indexed document is a tree, and the labels of all trees are maintained in main memory. Therefore even small improvements in the maximum label length are important.",
"We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. Our main result establishes an exact threshold number of bits of advice that turns out to be roughly log log D, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most log log D - c bits of advice, and we show that every algorithm using log log D - g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2.",
"We study deterministic broadcasting in radio networks in the recently introduced framework of network algorithms with advice. We concentrate on the problem of trade-offs between the number of bits of information (size of advice) available to nodes and the time in which broadcasting can be accomplished. In particular, we ask what is the minimum number of bits of information that must be available to nodes of the network, in order to broadcast very fast. For networks in which constant time broadcast is possible under a complete knowledge of the network we give a tight answer to the above question: O(n) bits of advice are sufficient but o(n) bits are not, in order to achieve constant broadcasting time in all these networks. This is in sharp contrast with geometric radio networks of constant broadcasting time: we show that in these networks a constant number of bits suffices to broadcast in constant time. For arbitrary radio networks we present a broadcasting algorithm whose time is inverse-proportional to the size of the advice.",
"We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice enables to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.",
"",
"",
"We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting the wakeup with a linear number of messages in an n-node network is Θ(n log n), while the broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) can permit to broadcast with a linear number of messages. Thus an efficient wakeup requires strictly more information about the network than an efficient broadcast.",
"This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.",
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log^2 n). For planar graphs, we show an upper bound of O(√n log n) and a lower bound of Ω(n^{1/3}). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with an r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs.",
"We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant.",
"We study the problem of the amount of information required to draw a complete or a partial map of a graph with unlabeled nodes and arbitrarily labeled ports. A mobile agent, starting at any node of an unknown connected graph and walking in it, has to accomplish one of the following tasks: draw a complete map of the graph, i.e., find an isomorphic copy of it including port numbering, or draw a partial map, i.e., a spanning tree, again with port numbering. The agent executes a deterministic algorithm and cannot mark visited nodes in any way. None of these map drawing tasks is feasible without any additional information, unless the graph is a tree. Hence we investigate the minimum number of bits of information (minimum size of advice) that has to be given to the agent to complete these tasks. It turns out that this minimum size of advice depends on the number n of nodes or the number m of edges of the graph, and on a crucial parameter μ, called the multiplicity of the graph, which measures the number of nodes that have an identical view of the graph. We give bounds on the minimum size of advice for both above tasks. For μ=1 our bounds are asymptotically tight for both tasks and show that the minimum size of advice is very small. For μ>1 the minimum size of advice increases abruptly. In this case our bounds are asymptotically tight for topology recognition and asymptotically almost tight for spanning tree construction.",
"[L. Blin, P. Fraigniaud, N. Nisse, S. Vial, Distributed chasing of network intruders, in: 13th Colloquium on Structural Information and Communication Complexity, SIROCCO, in: LNCS, vol. 4056, Springer-Verlag, 2006, pp. 70-84] introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to clear a contaminated graph in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. Moreover, the clearing of the graph must be performed using the optimal number of searchers, i.e. the minimum number of searchers sufficient to clear the graph in a monotone connected way in a centralized setting. We show that the minimum number of bits of advice permitting the monotone connected and optimal clearing of a network in a distributed setting is Θ(n log n), where n is the number of nodes of the network. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(n log n) bits, and a protocol using this labelling that enables the optimal number of searchers to clear G in a monotone connected distributed way. Then, we show that this number of bits of advice is optimal: any distributed protocol requires Ω(n log n) bits of advice to clear a network in a monotone connected way, using an optimal number of searchers.",
"In topology recognition, each node of an anonymous network has to deterministically produce an isomorphic copy of the underlying graph, with all ports correctly marked. This task is usually unfeasible without any a priori information. Such information can be provided to nodes as advice. An oracle knowing the network can give a (possibly different) string of bits to each node, and all nodes must reconstruct the network using this advice, after a given number of rounds of communication. During each round each node can exchange arbitrary messages with all its neighbors and perform arbitrary local computations. The time of completing topology recognition is the number of rounds it takes, and the size of advice is the maximum length of a string given to nodes. We investigate tradeoffs between the time in which topology recognition is accomplished and the minimum size of advice that has to be given to nodes. We provide upper and lower bounds on the minimum size of advice that is sufficient to perform topology recognition in a given time, in the class of all graphs of size n and diameter D ≤ αn, for any constant α < 1. In most cases, our bounds are asymptotically tight.",
"We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log|A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.",
"Let G = (V,E) be an undirected weighted graph with |V| = n and |E| = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn^{1/k}) expected time, constructing a data structure of size O(kn^{1+1/k}), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdős implies that Ω(n^{1+1/k}) space is needed in the worst case for any real stretch strictly smaller than 2k+1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n^{1+1/k}) space had a query time of Ω(n^{1/k}). Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs."
]
}
|
1811.06803
|
2941169308
|
The arboricity of a graph is the minimum number of forests it can be partitioned into. Previous approximation schemes were nonconstructive, i.e., they only approximated the arboricity as a value without computing a corresponding forest partition, as they operate on the related pseudoforest partitions or the dual problem. We propose an algorithm for converting a partition of @math pseudoforests into a partition of @math forests in @math time, where @math is the inverse Ackermann function, when @math expected time is allowed for pre-computation of a perfect hash function. Without perfect hashing, we obtain @math with a data structure by Brodal and Fagerberg that stores graphs of arboricity @math . For every fixed @math , the latter result implies a constructive @math -approximation algorithm with runtime @math by using Kowalik's approximation scheme for pseudoforest partitions. Our algorithm might help in designing a faster exact arboricity algorithm. We also make several remarks on approximation algorithms for the pseudoarboricity and the equivalent graph orientations with smallest maximum indegree, and correct some mistakes made in the literature.
|
The approximation scheme of Worou and Galtier @cite_19 computes for @math a @math -approximation of the fractional arboricity @math in time @math . It constructs a subgraph of this density (in the sense of ), but apparently no forest partition is computed.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"1983792994"
],
"abstract": [
"In this paper, we develop some algorithmic aspects of the fractional arboricity of a graph in order to study some new approaches for graph clustering. For a given undirected graph G(V, E) with m edges and n vertices, the fractional arboricity γ(G) measures the maximum edge density of the subgraphs of G(V, E). It is the fractional covering number of the corresponding graphic matroid. The fractional arboricity, for applications in networks reliability or the approximation of the chromatic number of the graphs, has been studied for many years. There are some algorithms in polynomial time to compute the fractional arboricity and its integer part. But, for large graphs such as the graph of the Web or graphs of social networks, the exact algorithms are not fast enough for practical use. That is why we describe a new FPTAS to compute an ε-approximation of the fractional arboricity (with ε > 0 as small as desired). Our algorithm uses the principle of the multiplicative weights update method, needs a memory of size O(m) and has a complexity of O(m log²(m) log(m/n) / ε²). We also give a 2-approximation of γ(G) with computation time O(m), which is a quick preprocessing for the main algorithm. Finally, we present a fast algorithm to extract a subgraph which achieves the value of the approximation of the fractional arboricity."
]
}
|
1811.06803
|
2941169308
|
The arboricity of a graph is the minimum number of forests it can be partitioned into. Previous approximation schemes were nonconstructive, i.e., they only approximated the arboricity as a value without computing a corresponding forest partition, as they operate on the related pseudoforest partitions or the dual problem. We propose an algorithm for converting a partition of @math pseudoforests into a partition of @math forests in @math time, where @math is the inverse Ackermann function, when @math expected time is allowed for pre-computation of a perfect hash function. Without perfect hashing, we obtain @math with a data structure by Brodal and Fagerberg that stores graphs of arboricity @math . For every fixed @math , the latter result implies a constructive @math -approximation algorithm with runtime @math by using Kowalik's approximation scheme for pseudoforest partitions. Our algorithm might help in designing a faster exact arboricity algorithm. We also make several remarks on approximation algorithms for the pseudoarboricity and the equivalent graph orientations with smallest maximum indegree, and correct some mistakes made in the literature.
|
Barenboim and Elkin @cite_39 propose a constructive distributed algorithm that computes a @math -approximation of @math . @cite_12 describe an algorithm that distinguishes with high constant probability between graphs that are @math -close to and graphs that are @math -far from having arboricity @math , for some constant @math . Several upper bounds of the type @math for the arboricity were given by Chiba and Nishizeki @cite_25 , Gabow and Westermann @cite_9 , @cite_35 and Blumenstock @cite_28 . Let @math denote the bounds, and write @math if @math holds and an example exists where the bounds differ. One can show @math . We do not know whether the second inequality is strict. The bound @math of is optimal.
|
{
"cite_N": [
"@cite_35",
"@cite_28",
"@cite_9",
"@cite_39",
"@cite_25",
"@cite_12"
],
"mid": [
"2168920762",
"2296110048",
"2060129302",
"1987125980",
"2055245094",
"2963942199"
],
"abstract": [
"We prove that the thickness and the arboricity of a graph with e edges are at most ⌊√(e/3) + 3/2⌋ and ⌈√(e/2)⌉, respectively, and that the latter bound is best possible.",
"The densest subgraph problem, which asks for a subgraph with the maximum edges-to-vertices ratio d∗, is solvable in polynomial time. We discuss algorithms for this problem and the computation of a graph orientation with the lowest maximum indegree, which is equal to ⌈d∗⌉. This value also equals the pseudoarboricity of the graph. We show that it can be computed in O(|E| · √(log log d∗)) time, and that better estimates can be given for graph classes where d∗ satisfies certain asymptotic bounds. These runtimes are achieved by accelerating a binary search with an approximation scheme, and a runtime analysis of Dinitz’s algorithm on flow networks where all arcs, except the source and sink arcs, have unit capacity. We experimentally compare implementations of various algorithms for the densest subgraph and pseudoarboricity problems. In flow-based algorithms, Dinitz’s algorithm performs significantly better than push-relabel algorithms on all instances tested.",
"This paper presents improved algorithms for matroid partitioning problems, such as finding a maximum cardinality set of edges of a graph that can be partitioned into k forests. The notion of a clump in a matroid sum is introduced. Efficient algorithms for problems involving clumps are presented. Applications of these algorithms to problems arising in the study of structural rigidity of graphs, the Shannon switching game and others are given.",
"We study the distributed maximal independent set (henceforth, MIS) problem on sparse graphs. Currently, there are known algorithms with a sublogarithmic running time for this problem on oriented trees and graphs of bounded degrees. We devise the first sublogarithmic algorithm for computing MIS on graphs of bounded arboricity. This is a large family of graphs that includes graphs of bounded degree, planar graphs, graphs of bounded genus, graphs of bounded treewidth, graphs that exclude a fixed minor, and many other graphs. We also devise efficient algorithms for coloring graphs from these families. These results are achieved by the following technique that may be of independent interest. Our algorithm starts with computing a certain graph-theoretic structure, called Nash-Williams forests-decomposition. Then this structure is used to compute the MIS or coloring. Our results demonstrate that this methodology is very powerful. Finally, we show nearly-tight lower bounds on the running time of any distributed algorithm for computing a forests-decomposition.",
"In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful to the various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in @math time, where m is the number of edges of G and @math the arboricity of G. The second finds all the quadrangles in @math time. Since @math is at most three for a planar graph G, both run in linear time for a planar graph. The third lists all the complete subgraphs @math of order l in @math time. The fourth lists all the cliques in @math time per clique. All the algorithms require linear space. We also establish an upper bound on @math for a graph @math , where n is the number of vertices in G.",
"In this paper we consider the problem of testing whether a graph has bounded arboricity. The family of graphs with bounded arboricity includes, among others, bounded-degree graphs, all minor-closed graph classes (e.g. planar graphs, graphs with bounded treewidth) and randomly generated preferential attachment graphs. Graphs with bounded arboricity have been studied extensively in the past, in particular since for many problems they allow for much more efficient algorithms and/or better approximation ratios. We present a tolerant tester in the sparse-graphs model. The sparse-graphs model allows access to degree queries and neighbor queries, and the distance is defined with respect to the actual number of edges. More specifically, our algorithm distinguishes between graphs that are ϵ-close to having arboricity α and graphs that are c · ϵ-far from having arboricity 3α, where c is an absolute small constant. The query complexity and running time of the algorithm are [EQUATION] where n denotes the number of vertices and m denotes the number of edges. In terms of the dependence on n and m this bound is optimal up to poly-logarithmic factors since [EQUATION] queries are necessary (and the arboricity of a graph is always [EQUATION]). We leave it as an open question whether the dependence on 1/ϵ can be improved from quasi-polynomial to polynomial. Our techniques include an efficient local simulation for approximating the outcome of a global (almost) forest-decomposition algorithm as well as a tailored procedure of edge sampling."
]
}
|
1811.06803
|
2941169308
|
The arboricity of a graph is the minimum number of forests it can be partitioned into. Previous approximation schemes were nonconstructive, i.e., they only approximated the arboricity as a value without computing a corresponding forest partition, as they operate on the related pseudoforest partitions or the dual problem. We propose an algorithm for converting a partition of @math pseudoforests into a partition of @math forests in @math time, where @math is the inverse Ackermann function, when @math expected time is allowed for pre-computation of a perfect hash function. Without perfect hashing, we obtain @math with a data structure by Brodal and Fagerberg that stores graphs of arboricity @math . For every fixed @math , the latter result implies a constructive @math -approximation algorithm with runtime @math by using Kowalik's approximation scheme for pseudoforest partitions. Our algorithm might help in designing a faster exact arboricity algorithm. We also make several remarks on approximation algorithms for the pseudoarboricity and the equivalent graph orientations with smallest maximum indegree, and correct some mistakes made in the literature.
|
Kowalik's approximation scheme @cite_15 works by terminating Dinitz's algorithm early. It computes an @math -orientation in time @math . The aforementioned greedy algorithm computes an acyclic @math -orientation @cite_3 @cite_36 and a subgraph of density at least @math @cite_4 @cite_22 @cite_27 @cite_31 in linear time. It repeatedly removes the vertex of minimum degree and orients its unassigned edges towards it. Georgakopoulos and Politopoulos @cite_31 give a generalization to hypergraphs. Charikar @cite_27 and Khuller and Saha @cite_22 address directed graphs. The fractional orientation problem is dual to the densest subgraph problem @cite_27 .
|
{
"cite_N": [
"@cite_31",
"@cite_4",
"@cite_22",
"@cite_36",
"@cite_3",
"@cite_27",
"@cite_15"
],
"mid": [
"2104146203",
"2165621523",
"1500512125",
"1044639830",
"2079535727",
"1535144194",
"1703925482"
],
"abstract": [
"",
"A data structure for representing a set of n items from a universe of m items, which uses space n + o(n) and accommodates membership queries in constant time is described. Both the data structure and the query algorithm are easy to implement.",
"Given an undirected graph G = (V ,E ), the density of a subgraph on vertex set S is defined as @math , where E (S ) is the set of edges in the subgraph induced by nodes in S . Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph one notion of density given by Kannan and Vinay [12] is as follows: given subsets S and T of vertices, the density of the subgraph is @math , where E (S ,T ) is the set of edges going from S to T . Without any size constraints, a subgraph of maximum density can be found in polynomial time. When we require the subgraph to have a specified size, the problem of finding a maximum density subgraph becomes NP -hard. In this paper we focus on developing fast polynomial time algorithms for several variations of dense subgraph problems for both directed and undirected graphs. When there is no size bound, we extend the flow based technique for obtaining a densest subgraph in directed graphs and also give a linear time 2-approximation algorithm for it. When a size lower bound is specified for both directed and undirected cases, we show that the problem is NP-complete and give fast algorithms to find subgraphs within a factor 2 of the optimum density. We also show that solving the densest subgraph problem with an upper bound on size is as hard as solving the problem with an exact size constraint, within a constant factor.",
"A succinct representation of a graph is a widely studied problem. A number of criteria can be used to determine the succinctness of the representation. We examined the representation of a graph in these two aspects: 1. The space complexity of the representation.",
"In graphs of bounded arboricity, the total complexity of all maximal complete bipartite subgraphs is O(n). We described a linear time algorithm to list such subgraphs. The arboricity bound is necessary: for any constant k and any n there exists an n-vertex graph with O(n) edges and (n/log n)^k maximal complete bipartite subgraphs K_{k,l}",
"We study the problem of finding highly connected subgraphs of undirected and directed graphs. For undirected graphs, the notion of density of a subgraph we use is the average degree of the subgraph. For directed graphs, a corresponding notion of density was introduced recently by Kannan and Vinay. This is designed to quantify highly connectedness of substructures in a sparse directed graph such as the web graph. We study the optimization problems of finding subgraphs maximizing these notions of density for undirected and directed graphs. This paper gives simple greedy approximation algorithms for these optimization problems. We also answer an open question about the complexity of the optimization problem for directed graphs.",
""
]
}
|
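The greedy orientation described in the related-work field above (repeatedly remove a vertex of minimum remaining degree and orient its unassigned edges towards it) can be sketched as follows. This is an illustrative quadratic-time version, not the linear-time variants of the cited works (those use bucket queues for the minimum-degree selection); all identifiers are our own.

```python
from collections import defaultdict

def greedy_orient(edges):
    """Orient each edge of a simple undirected graph by repeatedly removing
    a vertex x of minimum remaining degree and orienting every not-yet-
    oriented incident edge {x, y} towards x (i.e., as the arc y -> x).

    Since every arc points from a later-removed vertex to an earlier-removed
    one, the resulting orientation is acyclic, and each vertex's indegree
    equals its degree at removal time, which is bounded by twice the maximum
    subgraph density. Returns a list of (tail, head) pairs.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    oriented = []
    alive = set(adj)
    while alive:
        # O(n) scan per step; bucket queues would make the whole loop linear.
        x = min(alive, key=lambda v: len(adj[v]))
        for y in adj[x]:
            oriented.append((y, x))   # orient {x, y} towards x
            adj[y].discard(x)
        adj[x].clear()
        alive.remove(x)
    return oriented
```

On a triangle with a pendant edge, for example, every vertex ends up with indegree at most 2, matching the 2⌈d∗⌉ guarantee of the greedy scheme.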
1811.06803
|
2941169308
|
The arboricity of a graph is the minimum number of forests it can be partitioned into. Previous approximation schemes were nonconstructive, i.e., they only approximated the arboricity as a value without computing a corresponding forest partition, as they operate on the related pseudoforest partitions or the dual problem. We propose an algorithm for converting a partition of @math pseudoforests into a partition of @math forests in @math time, where @math is the inverse Ackermann function, when @math expected time is allowed for pre-computation of a perfect hash function. Without perfect hashing, we obtain @math with a data structure by Brodal and Fagerberg that stores graphs of arboricity @math . For every fixed @math , the latter result implies a constructive @math -approximation algorithm with runtime @math by using Kowalik's approximation scheme for pseudoforest partitions. Our algorithm might help in designing a faster exact arboricity algorithm. We also make several remarks on approximation algorithms for the pseudoarboricity and the equivalent graph orientations with smallest maximum indegree, and correct some mistakes made in the literature.
|
@cite_24 compute a @math -orientation (assuming @math ) with a variant of the greedy orientation algorithm in @math , but the orientation produced may contain cycles.
|
{
"cite_N": [
"@cite_24"
],
"mid": [
"2033433572"
],
"abstract": [
"This paper studies the problem of orienting all edges of a weighted graph such that the maximum weighted outdegree of vertices is minimized. This problem, which has applications in the guard arrangement for example, can be shown to be NP-hard in general. In this paper we first give optimal orientation algorithms which run in polynomial time for the following special cases: (i) the input is an unweighted graph, and (ii) the input graph is a tree. Then, by using those algorithms as sub-procedures, we provide a simple, combinatorial, -approximation algorithm for the general case, where wmax and wmin are the maximum and the minimum weights of edges, respectively, and ε is some small positive real number that depends on the input."
]
}
|
1811.06803
|
2941169308
|
The arboricity of a graph is the minimum number of forests it can be partitioned into. Previous approximation schemes were nonconstructive, i.e., they only approximated the arboricity as a value without computing a corresponding forest partition, as they operate on the related pseudoforest partitions or the dual problem. We propose an algorithm for converting a partition of @math pseudoforests into a partition of @math forests in @math time, where @math is the inverse Ackermann function, when @math expected time is allowed for pre-computation of a perfect hash function. Without perfect hashing, we obtain @math with a data structure by Brodal and Fagerberg that stores graphs of arboricity @math . For every fixed @math , the latter result implies a constructive @math -approximation algorithm with runtime @math by using Kowalik's approximation scheme for pseudoforest partitions. Our algorithm might help in designing a faster exact arboricity algorithm. We also make several remarks on approximation algorithms for the pseudoarboricity and the equivalent graph orientations with smallest maximum indegree, and correct some mistakes made in the literature.
|
A partition of @math pseudoforests can be converted into a partition of @math forests, and @math if possible, in @math . This is implicit in @cite_16 @cite_9 . (We claim in the appendix in Section that the runtime bound of @math is incorrect.)
|
{
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"2060129302",
"2159059797"
],
"abstract": [
"This paper presents improved algorithms for matroid partitioning problems, such as finding a maximum cardinality set of edges of a graph that can be partitioned into k forests. The notion of a clump in a matroid sum is introduced. Efficient algorithms for problems involving clumps are presented. Applications of these algorithms to problems arising in the study of structural rigidity of graphs, the Shannon switching game and others are given.",
"Matroid theory is the theory of independent sets in a finite universe. The term \"independent\" is borrowed from linear algebra as is most of the terminology. Many combinatorial problems can be modeled by matroids. A solution is expressed as a maximum cardinality independent set, also referred to as a base, or some other object in matroid theory. This thesis investigates problems that can be modeled with matroid sums. Matroid sums are matroids that can be decomposed into a number of simpler matroids. Typical problems that can be modeled with matroid sums are finding k disjoint spanning trees, the Shannon switching game, finding the arboricity and pseudorarboricity, and problems arising in the study of rigidity of structures. Further problems that can be modeled with matroid sums are the graphic and bicircular packing problem and maximum cardinality bipartite matching. Except for the last the matroid algorithms presented for the above problems yield improved time bounds. This thesis also investigates the theoretical properties of matroid sums. The concept of a clump is introduced. Clumps help to design and analyze algorithms on matroid sums. They lead to the notion of a top clump, which in turn leads to an invariant of a matroid sum and, a fortiori, to a family of invariants of a graph. They generalize another invariant, the principal partition introduced by Kishi and Kajitani and its generalization, the k-minor introduced by Bruno."
]
}
|
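The conversion result above is matroid-based and operates on partitions into k pseudoforests. As a much simpler illustration of the underlying idea, the following union-find sketch peels a forest out of a single pseudoforest: since each component of a pseudoforest contains at most one cycle, dropping one edge per cyclic component leaves a forest. This is only the elementary one-pseudoforest case, not the cited conversion algorithm; all identifiers are our own.

```python
class DSU:
    """Union-find with path halving."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # edge would close a cycle
        self.parent[ra] = rb
        return True

def split_pseudoforest(edges):
    """Split an edge set into (forest_edges, leftover_edges).

    If the input is a pseudoforest (at most one cycle per connected
    component), at most one edge per component ends up in leftover, so
    forest_edges is acyclic and covers all but the cycle-closing edges.
    """
    dsu = DSU()
    forest, leftover = [], []
    for u, v in edges:
        (forest if dsu.union(u, v) else leftover).append((u, v))
    return forest, leftover
```

For a triangle plus a pendant edge (one cyclic component), three edges land in the forest and exactly one in the leftover set.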
1811.06753
|
2901104906
|
The problem of keyword spotting, i.e., identifying keywords in a real-time audio stream, is mainly solved by applying a neural network over successive sliding windows. Due to the difficulty of the task, baseline models are usually large, resulting in a high computational cost and energy consumption level. We propose a new method called SANAS (Stochastic Adaptive Neural Architecture Search) which is able to adapt the architecture of the neural network on-the-fly at inference time, such that small architectures will be used when the stream is easy to process (silence, low noise, ...) and bigger networks will be used when the task becomes more difficult. We show that this adaptive model can be learned end-to-end by optimizing a trade-off between the prediction performance and the average computational cost per unit of time. Experiments on the Speech Commands dataset show that this approach leads to a high recognition level while being much faster (and/or more energy-efficient) than classical approaches where the network architecture is static.
|
Neural Networks (NN) are known to obtain very high recognition rates on a large variety of tasks, and especially over signal-based problems like speech recognition @cite_20 , image classification @cite_0 @cite_3 , etc. However these models are usually composed of millions of parameters involved in millions of operations and have high computational and energy costs at prediction time. There is thus a need to increase their processing speed and reduce their energy footprint.
|
{
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_20"
],
"mid": [
"2964081807",
"",
"2949640717"
],
"abstract": [
"Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the \"NASNet search space\") which enables transferability. In our experiments, we search for the best convolutional layer (or \"cell\") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a \"NASNet architecture\". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset.",
"",
"We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale."
]
}
|
1811.06173
|
2901138302
|
Stock market prediction is one of the most attractive research topics, since successful prediction of the market's future movement leads to significant profit. Traditional short-term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. However, financial news also contains useful information on public companies and the market. Existing methods in the finance literature exploit sentiment signal features, which are limited by not considering factors such as events and the news context. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a Bidirectional-LSTM is used to encode the news text and capture the context information, and a self-attention mechanism is applied to distribute attention over the most relevant words, news items and days. In terms of predicting directional changes in both the Standard & Poor's 500 index and individual companies' stock prices, we show that this technique is competitive with other state-of-the-art approaches, demonstrating the effectiveness of recent NLP technology advances for computational finance.
|
Analyzing the stock market using relevant text is complicated but intriguing @cite_34 @cite_0 @cite_32 @cite_22 @cite_20 @cite_2 @cite_5 @cite_23 @cite_11 @cite_29 . For instance, a model named Enalyst was introduced with the goal of predicting intraday stock price trends by analyzing news articles published on the homepage of Yahoo Finance. Mittermayer and Knolmayer implemented several prototypes for predicting the short-term market reaction to news based on text mining techniques. Their model forecasts the 1-day trend of the indices of five major companies. @cite_34 predicted stock trends by selecting a representative set of bursty features (keywords) that have impact on individual stocks. Vivek @cite_29 introduced a method to predict the stock market using sentiment. Similarly, Michał @cite_11 used sentiment from postings on Twitter to predict future stock prices. However, these methods have many limitations, including failing to unveil the rules that may govern the dynamics of the market, which makes the prediction models incapable of capturing the impact of recent trends.
|
{
"cite_N": [
"@cite_22",
"@cite_29",
"@cite_32",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_34",
"@cite_20",
"@cite_11"
],
"mid": [
"2296438605",
"",
"2126267628",
"2144366591",
"",
"",
"1573027512",
"",
"",
"1918540025"
],
"abstract": [
"We propose a deep learning method for event-driven stock market prediction. First, events are extracted from news text, and represented as dense vectors, trained using a novel neural tensor network. Second, a deep convolutional neural network is used to model both short-term and long-term influences of events on stock price movements. Experimental results show that our model can achieve nearly 6% improvements on S&P 500 index prediction and individual stock prediction, respectively, compared to state-of-the-art baseline methods. In addition, market simulation results show that our system is more capable of making profits than previously reported systems trained on S&P 500 stock historical data.",
"",
"It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction relies on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual efforts. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Large-scale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.",
"Semantic frames are a rich linguistic resource. There has been much work on semantic frame parsers, but less that applies them to general NLP problems. We address a task to predict change in stock price from financial news. Semantic frames help to generalize from specific sentences to scenarios, and to detect the (positive or negative) roles of specific companies. We introduce a novel tree representation, and use it to train predictive models with tree kernels using support vector machines. Our experiments test multiple text representations on two binary classification tasks, change of price and polarity. Experiments show that features derived from semantic frame parsing have significantly better performance across years on the polarity task.",
"",
"",
"Financial news contains useful information on public companies and the market. In this paper we apply the popular word embedding methods and deep neural networks to leverage financial news to predict stock price movements in the market. Experimental results have shown that our proposed methods are simple but very effective, which can significantly improve the stock prediction accuracy on a standard financial database over the baseline system using only the historical price information.",
"",
"",
"This paper covers design, implementation and evaluation of a system that may be used to predict future stock prices basing on analysis of data from social media services. The authors took advantage of large datasets available from Twitter micro blogging platform and widely available stock market records. Data was collected during three months and processed for further analysis. Machine learning was employed to conduct sentiment classification of data coming from social networks in order to estimate future stock prices. Calculations were performed in distributed environment according to Map Reduce programming model. Evaluation and discussion of results of predictions for different time intervals and input datasets proved efficiency of chosen approach is discussed here."
]
}
|
1811.06366
|
2964502028
|
Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third highest homicide mortality, behind Venezuela and Colombia. To measure the impacts of violence, it is important to assess health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, Center-West of Brazil, where the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014. Moreover, this state ranked fifth in homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed correlation analysis, and employed three clustering algorithms: K-means, density-based, and hierarchical. The results indicate that homicide rates are higher in cities neighboring large urban centers, although these cities have the best socioeconomic indicators.
|
Many works have been developed on homicide rate analysis since the second half of the 20th century. The fields of sociology and criminology were the first to research this theme. Those works were mainly concerned with investigating whether demographic, economic, ecological, and social variables correlated with the variation in homicide rates across time and space @cite_10 . Variables such as residential racial segregation, racial inequality, extreme poverty, social capital, and unemployment rate were used in some successful findings @cite_17 .
|
{
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2131259854",
"2135943726"
],
"abstract": [
"This study demonstrate that the empirical literature on the structural convariates of homicide rates contains inconsistent findings across different time periods and different geographical units. This apparent variance of findings may be due to statistical or methodological artifacts of particular studies, such as different time periods covered, units of analysis, samples, model specification, and problems of statistical analysis and inference. A baseline regression model using 11 structural covariates is estimated for cities, metropolitan areas, and states in 1960, 1970, and 1980. The empirical estimates of this model exhibit instability because of high levels of collinearity among several regressors. Principal components analysis is applied to simplify the dimensionally of the structural covariate space. Reestimation of the regression model then indicates that the apparent inconsistencies across time and social space are greatly reduced. The theoretical significance of the findings for substantive theor...",
"As the 20-year mark since the publication of an article by Kenneth C. Land, Patricia L. McCall, and Lawrence Cohen, “Structural Covariates of Homicide Rates: Are There Any Invariances Across Time and Social Space?” approaches, the question that these scholars originally posed is raised again: Have researchers been able to identify a set of robust structural covariates that consistently predict crime rates? Subsequent to the publication of this piece, numerous scholars have replicated and extended its conceptual, methodological, and empirical work in various ways—with more than 500 citations to date. In response to this attention, the authors first review the advances made by the article. This is followed by a review of findings from studies published over the past 20 years to determine which structural predictors identified in the piece continue to be prominent in the study of homicide and which structural predictors have surfaced in recent years as influential to crime rates. Usin..."
]
}
|
1811.06366
|
2964502028
|
Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third highest homicide mortality, behind Venezuela and Colombia. To measure the impacts of violence, it is important to assess health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, Center-West of Brazil, where the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014. Moreover, this state ranked fifth in homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed correlation analysis, and employed three clustering algorithms: K-means, density-based, and hierarchical. The results indicate that homicide rates are higher in cities neighboring large urban centers, although these cities have the best socioeconomic indicators.
|
In the United States of America, gun violence is responsible for about 34,000 deaths annually @cite_6 . A paper published in 2017 used spatial scan statistics to analyze clusters of gunshot occurrences within the city of Syracuse, New York. Among the results, it was noted that higher violence rates were related to environmental and economic disparities @cite_6 .
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2597550966"
],
"abstract": [
"Gun violence in the United States of America is a large public health problem that disproportionately affects urban areas. The epidemiology of gun violence reflects various aspects of an infectious disease including spatial and temporal clustering. We examined the spatial and temporal trends of gun violence in Syracuse, New York, a city of 145,000. We used a spatial scan statistic to reveal spatio-temporal clusters of gunshots investigated and corroborated by Syracuse City Police Department for the years 2009–2015. We also examined predictors of areas with increased gun violence using a multi-level zero-inflated Poisson regression with data from the 2010 census. Two space-time clusters of gun violence were revealed in the city. Higher rates of segregation, poverty and the summer months were all associated with increased risk of gun violence. Previous gunshots in the area were associated with a 26.8 increase in the risk of gun violence. Gun violence in Syracuse, NY is both spatially and temporally stable, with some neighborhoods of the city greatly afflicted."
]
}
|
1811.06366
|
2964502028
|
Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third highest homicide mortality, behind Venezuela and Colombia. To measure the impacts of violence, it is important to assess health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, Center-West of Brazil, where the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014. Moreover, this state ranked fifth in homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed correlation analysis, and employed three clustering algorithms: K-means, density-based, and hierarchical. The results indicate that homicide rates are higher in cities neighboring large urban centers, although these cities have the best socioeconomic indicators.
|
In Central America, where homicide rates are historically elevated, some works have analyzed possible causes. In El Salvador, clusters of homicides may be related to drug trafficking and organized crime @cite_3 . In Mexico, the spatial variation of homicides was linked to firearm possession, drug trafficking and social exclusion @cite_14 .
|
{
"cite_N": [
"@cite_14",
"@cite_3"
],
"mid": [
"1966407060",
"1952101089"
],
"abstract": [
"Introduction This study seeks to analyse the trend of homicide rate in Mexico in last 30 years by age, gender and mechanism of death and identify the socioeconomic variables that better explain the spatial variations of homicide rate in Mexico in 2000 and 2008. Methods Homicide rates adjusted by age were calculated; through the use of multiple regression analysis (stepwise method), variables that better explained the interstate variations in the homicide rates were identified. Results The results show that although homicide rates in Mexico have been relatively high, the rate markedly decreased between early nineties and 2005, but has increased around 35 in last 3 years; furthermore, years of potential life lost by homicide has increased in recent years because the victims are younger; currently, male homicide rate is nine times higher than female rate; throughout the period more than half of homicides were committed by firearms, and in recent years figures exceed 60 . Moreover, social exclusion, drug trafficking, impunity and firearms possession are key elements to understand the spatial variations of the homicide mortality in Mexico in analysed years. Conclusions In recent years it is observed a rise of the homicide rate and consequently, an increment of the social insecurity at a national level; to reduce the number of homicide victims and spatial variations in the rate, the Mexican government needs to combat the cartels of drug trafficking, but also to implement structural reforms to improve the life conditions of Mexican population and diminish the socioeconomic disparities among states.",
"This paper examines the spatio-temporal evolution of homicide across the municipalities of El Salvador. It aims at identifying both temporal trends and spatial clusters that may contribute to the formation of time-stable corridors lying behind a historically (recurrent) high homicide rate. The results from this study reveal the presence of significant clusters of high homicide municipalities in the Western part of the country that have remained stable over time, and a process of formation of high homicide clusters in the Eastern region. The results show an increasing homicide trend from 2002 to 2013 with significant municipality-specific differential trends across the country. The data suggests that links may exist between the dynamics of homicide rates, drug trafficking and organized crime."
]
}
|
1811.06366
|
2964502028
|
Homicide mortality is a worldwide concern and has occupied the agenda of researchers and public managers. In Brazil, homicide is the third leading cause of death in the general population and the first in the 15-39 age group. In South America, Brazil has the third highest homicide mortality, behind Venezuela and Colombia. To measure the impacts of violence, it is important to assess health systems and criminal justice, as well as other areas. In this paper, we analyze the spatial distribution of homicide mortality in the state of Goiás, Center-West of Brazil, where the homicide rate increased from 24.5 per 100,000 in 2002 to 42.6 per 100,000 in 2014. Moreover, this state ranked fifth in homicides in Brazil in 2014. We considered socio-demographic variables for the state, performed correlation analysis, and employed three clustering algorithms: K-means, density-based, and hierarchical. The results indicate that homicide rates are higher in cities neighboring large urban centers, although these cities have the best socioeconomic indicators.
|
As far as we know, no paper has studied homicide rates focused on the Goiás state. Furthermore, we did not find any work using clustering algorithms to analyze the spatial distribution of homicides. The techniques and tools commonly used on this topic are: estimation of regression coefficients @cite_10 @cite_17 ; spatial scan statistics @cite_6 ; SaTScan software methods @cite_20 ; a Bayesian approach with a Markov Chain Monte Carlo algorithm @cite_3 ; Moran's Global index @cite_9 ; descriptive statistics or correlation techniques @cite_2 @cite_12 ; estimable functions and negative binomial regression @cite_8 ; SPSS software tools @cite_7 ; and multiple regression analysis (stepwise method) @cite_14 .
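As a concrete illustration of the clustering side of this toolbox, a minimal K-means can be sketched in pure Python. The feature vectors below (homicide rate per 100,000 paired with a socioeconomic index) are hypothetical, not data from the paper:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal K-means over fixed-length feature vectors (pure Python)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data itself
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster (keep it in place if empty)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical cities as (homicide rate per 100,000, socioeconomic index).
cities = [(42.6, 0.74), (38.1, 0.71), (12.3, 0.68), (10.9, 0.65)]
centroids, clusters = kmeans(cities, k=2, seed=1)
```

Density-based (e.g. DBSCAN) and hierarchical clustering follow the same pattern of grouping cities in such a feature space; they differ only in how cluster membership is decided. In practice, features on different scales (a rate near 40 versus an index near 0.7) should be standardized first, or the larger-scale feature will dominate the distance.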
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_2",
"@cite_10",
"@cite_12",
"@cite_20",
"@cite_17"
],
"mid": [
"1966407060",
"",
"2154081589",
"2172009573",
"1952101089",
"2597550966",
"",
"2131259854",
"2052966624",
"1888064802",
"2135943726"
],
"abstract": [
"Introduction This study seeks to analyse the trend of homicide rate in Mexico in last 30 years by age, gender and mechanism of death and identify the socioeconomic variables that better explain the spatial variations of homicide rate in Mexico in 2000 and 2008. Methods Homicide rates adjusted by age were calculated; through the use of multiple regression analysis (stepwise method), variables that better explained the interstate variations in the homicide rates were identified. Results The results show that although homicide rates in Mexico have been relatively high, the rate markedly decreased between early nineties and 2005, but has increased around 35 in last 3 years; furthermore, years of potential life lost by homicide has increased in recent years because the victims are younger; currently, male homicide rate is nine times higher than female rate; throughout the period more than half of homicides were committed by firearms, and in recent years figures exceed 60 . Moreover, social exclusion, drug trafficking, impunity and firearms possession are key elements to understand the spatial variations of the homicide mortality in Mexico in analysed years. Conclusions In recent years it is observed a rise of the homicide rate and consequently, an increment of the social insecurity at a national level; to reduce the number of homicide victims and spatial variations in the rate, the Mexican government needs to combat the cartels of drug trafficking, but also to implement structural reforms to improve the life conditions of Mexican population and diminish the socioeconomic disparities among states.",
"",
"OBJETIVO: Descrever a evolucao da mortalidade por homicidios no Municipio de Sao Paulo segundo tipo de arma, sexo, raca ou cor, idade e areas de exclusao inclusao social entre 1996 e 2008. METODOS: Estudo ecologico de serie temporal. Os dados sobre obitos ocorridos no Municipio foram coletados da base de dados do Programa de Aprimoramento das Informacoes sobre Mortalidade, seguindo a Classificacao Internacional de Doencas, Decima Revisao (CID-10). Foram calculadas as taxas de mortalidade por homicidio (TMH) para a populacao total, por sexo, raca ou cor, faixa etaria, tipo de arma e area de exclusao inclusao social. As TMH foram padronizadas por idade pelo metodo direto. Foram calculados os percentuais de variacao no periodo estudado. Para as areas de exclusao inclusao social foram calculados os riscos relativos de morte por homicidio. RESULTADOS: As TMH apresentaram queda de 73,7 entre 2001 e 2008. Foi observada reducao da TMH em todos os grupos analisados, mais pronunciada em homens (-74,5 ), jovens de 15 a 24 anos (-78,0 ) e moradores de areas de exclusao social extrema (-79,3 ). A reducao ocorreu, sobretudo, nos homicidios cometidos com armas de fogo (-74,1 ). O risco relativo de morte por homicidio nas areas de exclusao extrema (tendo como referencia areas com algum grau de exclusao social) foi de 2,77 em 1996, 3,9 em 2001 e 2,13 em 2008. Nas areas de alta exclusao social, o risco relativo foi de 2,07 em 1996 e 1,96 em 2008. CONCLUSOES: Para compreender a reducao dos homicidios no Municipio, e importante considerar macrodeterminantes que atingem todo o Municipio e todos os subgrupos populacionais e microdeterminantes que atuam localmente, influenciando de forma diferenciada os homicidios com armas de fogo e os homicidios na populacao jovem, no sexo masculino e em residentes em areas de alta exclusao social.",
"OBJECTIVE To analyze the spatial distribution of homicide mortality in the state of Bahia, Northeastern Brazil. METHODS Ecological study of the 15 to 39-year old male population in the state of Bahia in the period 1996-2010. Data from the Mortality Information System, relating to homicide (X85-Y09) and population estimates from the Brazilian Institute of Geography and Statistics were used. The existence of spatial correlation, the presence of clusters and critical areas of the event studied were analyzed using Moran’s I Global and Local indices. RESULTS A non-random spatial pattern was observed in the distribution of rates, as was the presence of three clusters, the first in the north health district, the second in the eastern region, and the third cluster included townships in the south and the far south of Bahia. CONCLUSIONS The homicide mortality in the three different critical areas requires further studies that consider the socioeconomic, cultural and environmental characteristics in order to guide specific preventive and interventionist practices.",
"This paper examines the spatio-temporal evolution of homicide across the municipalities of El Salvador. It aims at identifying both temporal trends and spatial clusters that may contribute to the formation of time-stable corridors lying behind a historically (recurrent) high homicide rate. The results from this study reveal the presence of significant clusters of high homicide municipalities in the Western part of the country that have remained stable over time, and a process of formation of high homicide clusters in the Eastern region. The results show an increasing homicide trend from 2002 to 2013 with significant municipality-specific differential trends across the country. The data suggests that links may exist between the dynamics of homicide rates, drug trafficking and organized crime.",
"Gun violence in the United States of America is a large public health problem that disproportionately affects urban areas. The epidemiology of gun violence reflects various aspects of an infectious disease including spatial and temporal clustering. We examined the spatial and temporal trends of gun violence in Syracuse, New York, a city of 145,000. We used a spatial scan statistic to reveal spatio-temporal clusters of gunshots investigated and corroborated by Syracuse City Police Department for the years 2009–2015. We also examined predictors of areas with increased gun violence using a multi-level zero-inflated Poisson regression with data from the 2010 census. Two space-time clusters of gun violence were revealed in the city. Higher rates of segregation, poverty and the summer months were all associated with increased risk of gun violence. Previous gunshots in the area were associated with a 26.8 increase in the risk of gun violence. Gun violence in Syracuse, NY is both spatially and temporally stable, with some neighborhoods of the city greatly afflicted.",
"",
"This study demonstrate that the empirical literature on the structural convariates of homicide rates contains inconsistent findings across different time periods and different geographical units. This apparent variance of findings may be due to statistical or methodological artifacts of particular studies, such as different time periods covered, units of analysis, samples, model specification, and problems of statistical analysis and inference. A baseline regression model using 11 structural covariates is estimated for cities, metropolitan areas, and states in 1960, 1970, and 1980. The empirical estimates of this model exhibit instability because of high levels of collinearity among several regressors. Principal components analysis is applied to simplify the dimensionally of the structural covariate space. Reestimation of the regression model then indicates that the apparent inconsistencies across time and social space are greatly reduced. The theoretical significance of the findings for substantive theor...",
"The objective was to evaluate correlations between suicide, homicide and socio-demographic variables by an ecological study. Mortality and socio-demographic data were collected from official records of the Ministry of Health and IBGE (2010), aggregated by state (27). The data were analyzed using correlation techniques, factor analysis, principal component analysis with a varimax rotation and multiple linear regression. Suicide age-adjusted rates for the total population, men and women were 5.0, 8.0, and 2.2 per 100,000 inhabitants respectively. The suicide rates ranged from 2.7 in Para to 9.1 in Rio Grande do Sul. Homicide for the total population, men and women were 27.2, 50.8, and 4.5 per 100,000, respectively. The homicide rates ranged from 13.0 in Santa Catarina to 68.9 in Alagoas. Suicide and homicide were negatively associated, the significance persisted among men. Unemployment was negatively correlated with suicide and positively with homicide. Different socio-demographic variables were found to correlate with suicide and homicide in the regressions. Suicide showed a pattern suggesting that, in Brazil, it is related to high socioeconomic status. Homicide seemed to follow the pattern found in other countries, associated with lower social and economic status.",
"Objectives. We modeled the spatiotemporal movement of hotspot clusters of homicide by motive in Newark, New Jersey, to investigate whether different homicide types have different patterns of clustering and movement.Methods. We obtained homicide data from the Newark Police Department Homicide Unit’s investigative files from 1997 through 2007 (n = 560). We geocoded the address at which each homicide victim was found and recorded the date of and the motive for the homicide. We used cluster detection software to model the spatiotemporal movement of statistically significant homicide clusters by motive, using census tract and month of occurrence as the spatial and temporal units of analysis.Results. Gang-motivated homicides showed evidence of clustering and diffusion through Newark. Additionally, gang-motivated homicide clusters overlapped to a degree with revenge and drug-motivated homicide clusters. Escalating dispute and nonintimate familial homicides clustered; however, there was no evidence of diffusion. ...",
"As the 20-year mark since the publication of an article by Kenneth C. Land, Patricia L. McCall, and Lawrence Cohen, “Structural Covariates of Homicide Rates: Are There Any Invariances Across Time and Social Space?” approaches, the question that these scholars originally posed is raised again: Have researchers been able to identify a set of robust structural covariates that consistently predict crime rates? Subsequent to the publication of this piece, numerous scholars have replicated and extended its conceptual, methodological, and empirical work in various ways—with more than 500 citations to date. In response to this attention, the authors first review the advances made by the article. This is followed by a review of findings from studies published over the past 20 years to determine which structural predictors identified in the piece continue to be prominent in the study of homicide and which structural predictors have surfaced in recent years as influential to crime rates. Usin..."
]
}
|
1811.06437
|
2963852241
|
A contextual care protocol is used by a medical practitioner for patient healthcare, given the context or situation that the specified patient is in. This paper proposes a method to build an automated self-adapting protocol which can help make relevant, early decisions for effective healthcare delivery. The hybrid model leverages neural networks and decision trees. The neural network estimates the chances of each disease, and each decision tree represents the care protocol for a disease. These trees are subject to change in case of aberrations found by the diagnosticians. These corrections, or prediction errors, are clustered into similar groups for scalability and review by the experts. The corrections suggested by the experts are incorporated into the model.
|
Machine learning algorithms have been widely used in the medical field to build disease diagnosis support systems. Filippo studied the use of artificial neural networks, which can be a powerful tool to help physicians perform diagnosis and related tasks. He highlights the advantages of ANNs, which can process a large amount of data and reduce the likelihood of overlooking relevant information, thereby reducing diagnosis time @cite_3 .
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2137687977"
],
"abstract": [
"An extensive amount of information is currently available to clinical specialists, ranging from details of clinical symptoms to various types of biochemical data and outputs of imaging devices. Each type of data provides information that must be evaluated and assigned to a particular pathology during the diagnostic process. To streamline the diagnostic process in daily routine and avoid misdiagnosis, artificial intelligence methods (especially computer aided diagnosis and artificial neural networks) can be employed. These adaptive learning algorithms can handle diverse types of medical data and integrate them into categorized outputs. In this paper, we briefly review and discuss the philosophy, capabilities, and limitations of artificial neural networks in medical diagnosis through selected examples."
]
}
|
1811.06437
|
2963852241
|
A contextual care protocol is used by a medical practitioner for patient healthcare, given the context or situation that the specified patient is in. This paper proposes a method to build an automated self-adapting protocol which can help make relevant, early decisions for effective healthcare delivery. The hybrid model leverages neural networks and decision trees. The neural network estimates the chances of each disease, and each decision tree represents the care protocol for a disease. These trees are subject to change in case of aberrations found by the diagnosticians. These corrections, or prediction errors, are clustered into similar groups for scalability and review by the experts. The corrections suggested by the experts are incorporated into the model.
|
B.N. worked out a learning model using the C4.5 decision tree algorithm for classification as well as for predicting risks induced during pregnancy @cite_9 . The application proposed in the patent uses neural networks to identify important input variables for a medical diagnostic test; these variables were used to train decision-support systems that guide the development of the tests and help in assessing the effectiveness of a selected therapeutic protocol @cite_11 . S. proposed combining a neural network and a decision tree algorithm in an integrative manner to predict heart attacks with high accuracy @cite_0 .
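The hybrid pattern these works share, a network scoring conditions and a tree encoding the follow-up decisions, can be sketched as follows. The disease names, logits, questions and actions are invented for illustration, and the "network" is reduced to a fixed dictionary of scores:

```python
import math

def softmax(scores):
    """Turn raw disease scores (e.g. a network's logits) into probabilities."""
    m = max(scores.values())
    exps = {d: math.exp(s - m) for d, s in scores.items()}
    total = sum(exps.values())
    return {d: e / total for d, e in exps.items()}

def run_protocol(tree, answers):
    """Walk a per-disease protocol tree; internal nodes ask a question, leaves are actions."""
    node = tree
    while isinstance(node, dict):
        node = node[answers[node["ask"]]]  # branch on the recorded answer
    return node

# Hypothetical logits from a diagnostic network and a toy protocol tree.
logits = {"flu": 2.1, "dengue": 0.3, "malaria": -1.0}
protocols = {
    "flu": {"ask": "fever>38C",
            True: {"ask": "high-risk patient",
                   True: "start antivirals within 48h",
                   False: "rest, fluids, re-evaluate in 24h"},
            False: "symptomatic care only"},
}

probs = softmax(logits)
likely = max(probs, key=probs.get)  # most probable disease selects its protocol tree
action = run_protocol(protocols[likely],
                      {"fever>38C": True, "high-risk patient": False})
```

In a self-adapting setting, an expert overriding `action` would amount to editing the chosen tree's leaf or branch, which is what makes the tree half of such hybrids attractive for review.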
|
{
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_11"
],
"mid": [
"",
"2343661964",
"2138498766"
],
"abstract": [
"",
"Gestation or pregnancy a stage where women undergo several physiological changes, sometimes inducing complications turning severe and initiating instances leading to death of both mother and fetus. Pregnant women must thus be protected from complications arising during gestation period. Several classification algorithms are successfully implemented in several fields. Decision Tree Classification Method is one efficient method best suitable for medical diagnosis. A popular algorithm C4.5 Decision Tree classification algorithm is appropriate for classifying the pregnancy data. The algorithm constructs a learning model from the training data and later risks in pregnancy are predicted for unseen pregnancy data. The main aim of this paper is to optimise performance of C4.5 classification algorithm by applying on standardized and appropriate format of data. The paper highlights the effective performance achieved by C4.5 classifier in accurately predicting risk levels during pregnancy from the collected, standardized and transformed data efficiently.",
"Methods are provided for developing medical diagnostic tests using decision-support systems, such as neural networks. Patient data or information, typically patient history or clinical data, are analyzed by the decision-support systems to identify important or relevant variables and decision-support systems are trained on the patient data. Patient data are augmented by biochemical test data, or results, where available, to refine performance. The resulting decision-support systems are employed to evaluate specific observation values and test results, to guide the development of biochemical or other diagnostic tests, too assess a course of treatment, to identify new diagnostic tests and disease markers, to identify useful therapies, and to provide the decision-support functionality for the test. Methods for identification of important input variables for a medical diagnostic tests for use in training the decision-support systems to guide the development of the tests, for improving the sensitivity and specificity of such tests, and for selecting diagnostic tests that improve overall diagnosis of, or potential for, a disease state and that permit the effectiveness of a selected therapeutic protocol to be assessed are provided. The methods for identification can be applied in any field in which statistics are used to determine outcomes. A method for evaluating the effectiveness of any given diagnostic test is also provided."
]
}
|
1811.06437
|
2963852241
|
A contextual care protocol is used by a medical practitioner for patient healthcare, given the context or situation that the specified patient is in. This paper proposes a method to build an automated self-adapting protocol which can help make relevant, early decisions for effective healthcare delivery. The hybrid model leverages neural networks and decision trees. The neural network estimates the chances of each disease, and each decision tree represents the care protocol for a disease. These trees are subject to change in case of aberrations found by the diagnosticians. These corrections, or prediction errors, are clustered into similar groups for scalability and review by the experts. The corrections suggested by the experts are incorporated into the model.
|
José M. Jerez-Aragonés proposed a model that combines TDIDT (CIDIM) with a system composed of different neural network topologies to approximate Bayes' optimal error for the prediction of patient relapse after breast cancer surgery. The CIDIM algorithm selects the most relevant prognostic factors for an accurate prognosis of breast cancer, while the neural network system takes these selected variables as input in order to reach a good correct-classification probability @cite_12 .
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2103592462"
],
"abstract": [
"The prediction of clinical outcome of patients after breast cancer surgery plays an important role in medical tasks such as diagnosis and treatment planning. Different prognostic factors for breast cancer outcome appear to be significant predictors for overall survival, but probably form part of a bigger picture comprising many factors. Survival estimations are currently performed by clinicians using the statistical techniques of survival analysis. In this sense, artificial neural networks are shown to be a powerful tool for analysing datasets where there are complicated non-linear interactions between the input data and the information to be predicted. This paper presents a decision support tool for the prognosis of breast cancer relapse that combines a novel algorithm TDIDT (control of induction by sample division method, CIDIM), to select the most relevant prognostic factors for the accurate prognosis of breast cancer, with a system composed of different neural networks topologies that takes as input the selected variables in order for it to reach good correct classification probability. In addition, a new method for the estimate of Bayes' optimal error using the neural network paradigm is proposed. Clinical-pathological data were obtained from the Medical Oncology Service of the Hospital [email protected]?nico Universitario of Malaga, Spain. The results show that the proposed system is an useful tool to be used by clinicians to search through large datasets seeking subtle patterns in prognostic factors, and that may further assist the selection of appropriate adjuvant treatments for the individual patient."
]
}
|
1811.06437
|
2963852241
|
A contextual care protocol is used by a medical practitioner for patient healthcare, given the context or situation that the specified patient is in. This paper proposes a method to build an automated self-adapting protocol which can help make relevant, early decisions for effective healthcare delivery. The hybrid model leverages neural networks and decision trees. The neural network estimates the chances of each disease and each tree in the decision trees represents care protocol for a disease. These trees are subject to change in case of aberrations found by the diagnosticians. These corrections or prediction errors are clustered into similar groups for scalability and review by the experts. The corrections as suggested by the experts are incorporated into the model.
|
L. G. presented a framework for diagnosing eye diseases using Neural Networks and Decision Trees. They proposed the hybrid model, the Neural Networks Decision Trees Eye Disease Diagnosing System (NNDTEDDS), to train younger ophthalmologists. The neural networks perform the diagnosis from the various symptoms and physical eye conditions, while the decision trees extract knowledge from the trained neural networks. The resulting rules, obtained according to symptoms, make explicit the knowledge the neural networks acquired by learning from previous samples of symptoms and physical eye conditions @cite_2 .
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"1492388214"
],
"abstract": [
"Clinical Decision Support Systems (CDSS) provide clinicians, staff, patients, and other individuals with knowledge and person-specific information, intelligently filtered and presented at appropriate times, to enhance health and health care [1]. Medical errors have already become the universal matter of international society. In 1999, IOM (American Institute of Medicine) published a report “To err is Human” [2], that indicated: First, the quantity of medical errors is incredible, the medical errors had already become the fifth lethal; Second, most of the medical errors occurred by the human factor which could be avoided via the computer system. Improving the quality of healthcare, reducing medical errors, and guaranteeing the safety of patients are the most serious duty of the hospital. The clinical guideline can enhance the security and quality of clinical diagnosis and treatment, its importance already obtained widespread approval [3]. In 1990, clinical practice guidelines were defined as “systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances” [4]."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
Runtime verification and monitoring for finite-state discrete systems has received significant attention (e.g., @cite_28 @cite_0 @cite_6 ). Others monitor continuous-time signals (e.g., @cite_30 @cite_9 ). We focus on hybrid systems models of CPS to combine both, and our methods are robust to sensor uncertainty and actuator disturbance.
|
{
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_9",
"@cite_6",
"@cite_0"
],
"mid": [
"2101804404",
"2113012730",
"1483606732",
"2078356878",
"2118962824"
],
"abstract": [
"Monitoring transient behaviors of real-time systems plays an important role in model-based systems design. Signal Temporal Logic (STL) emerges as a convenient and powerful formalism for continuous and hybrid systems. This paper presents an efficient algorithm for computing the robustness degree in which a piecewise-continuous signal satisfies or violates an STL formula. The algorithm, by leveraging state-of-the-art streaming algorithms from Signal Processing, is linear in the size of the signal and its implementation in the Breach tool is shown to outperform alternative implementations.",
"We present a specification language and algorithms for the online and offline monitoring of synchronous systems including circuits and embedded systems. Such monitoring is useful not only for testing, but also under actual deployment. The specification language is simple and expressive; it can describe both correctness failure assertions along with interesting statistical measures that are useful for system profiling and coverage analysis. The algorithm for online monitoring of queries in this language follows a partial evaluation strategy: it incrementally constructs output streams from input streams, while maintaining a store of partially evaluated expressions for forward references. We identify a class of specifications, characterized syntactically, for which the algorithm's memory requirement is independent of the length of the input streams. Being able to bound memory requirements is especially important in online monitoring of large input streams. We extend the concepts used in the online algorithm to construct an efficient offline monitoring algorithm for large traces. We have implemented our algorithm and applied it to two industrial systems, the PCI bus protocol and a memory controller. The results demonstrate that our algorithms are practical and that our specification language is sufficiently expressive to handle specifications of interest to industry.",
"In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce STL/PSL, a specification formalism based on the industrial standard language PSL and the real-time temporal logic MITL, extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL/PSL specification and checks, in an offline or incremental fashion, whether simulation traces satisfy the property. The AMT tool is validated through a Flash memory case-study.",
"This article gives an overview of the, monitoring oriented programming framework (MOP). In MOP, runtime monitoring is supported and encouraged as a fundamental principle for building reliable systems. Monitors are automatically synthesized from specified properties and are used in conjunction with the original system to check its dynamic behaviors. When a specification is violated or validated at runtime, user-defined actions will be triggered, which can be any code, such as information logging or runtime recovery. Two instances of MOP are presented: JavaMOP (for Java programs) and BusMOP (for monitoring PCI bus traffic). The architecture of MOP is discussed, and an explanation of parametric trace monitoring and its implementation is given. A comprehensive evaluation of JavaMOP attests to its efficiency, especially in comparison with similar systems. The implementation of BusMOP is discussed in detail. In general, BusMOP imposes no runtime overhead on the system it is monitoring.",
"The problem of testing whether a finite execution trace of events generated by an executing program violates a linear temporal logic (LTL) formula occurs naturally in runtime analysis of software. Two efficient algorithms for this problem are presented in this paper, both for checking safety formulae of the form “always P”, where P is a past-time LTL formula. The first algorithm is implemented by rewriting, and the second synthesizes efficient code from formulae. Further optimizations of the second algorithm are suggested, reducing space and time consumption. Special operators suitable for writing succinct specifications are discussed and shown to be equivalent to the standard past-time operators. This work is part of NASA’s PathExplorer project, the objective of which is to construct a flexible framework for efficient monitoring and analysis of program executions."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
In @cite_22 , offline model checking is combined with runtime monitoring for path planning of robots. For offline verification, the method assumes that the motion of the robot stays inside a tube around the planned path; staying inside the tube is then monitored at runtime. This can only be sound when augmented with additional assumptions on the continuous dynamics between sampling points @cite_32 , which we handle explicitly in our approach @cite_20 . In contrast, our monitors do not ignore physics models and environment behavior.
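The tube-monitoring idea from @cite_22 can be illustrated with a simple geometric check: at each sampling point, test whether the observed position lies within a fixed radius of the planned path. This is a minimal 2D sketch; the function names and the point-to-segment formulation are illustrative assumptions, not the cited implementation.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment a-b (all 2D tuples)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:  # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Projection parameter of p onto the line, clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def inside_tube(position, path, radius):
    """True iff the sampled position is within `radius` of the planned
    polyline path, i.e., inside the tube monitored at runtime."""
    return any(point_segment_distance(position, path[i], path[i + 1]) <= radius
               for i in range(len(path) - 1))
```

A monitor built this way only checks the sampled positions; as the paragraph above notes, soundness between sampling points needs extra assumptions on the continuous dynamics.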
|
{
"cite_N": [
"@cite_32",
"@cite_22",
"@cite_20"
],
"mid": [
"2154128151",
"2752666516",
"2127240436"
],
"abstract": [
"In this paper, we analyze limits of approximation techniques for (non-linear) continuous image computation in model checking hybrid systems. In particular, we show that even a single step of continuous image computation is not semidecidable numerically even for a very restricted class of functions. Moreover, we show that symbolic insight about derivative bounds provides sufficient additional information for approximation refinement model checking. Finally, we prove that purely numerical algorithms can perform continuous image computation with arbitrarily high probability. Using these results, we analyze the prerequisites for a safe operation of the roundabout maneuver in air traffic collision avoidance.",
"A major challenge towards large scale deployment of autonomous mobile robots is to program them with formal guarantees and high assurance of correct operation. To this end, we present a framework for building safe robots. Our approach for validating the end-to-end correctness of robotics system consists of two parts: (1) a high-level programming language for implementing and systematically testing the reactive robotics software via model checking; (2) a signal temporal logic (STL) based online monitoring system to ensure that the assumptions about the low-level controllers (discrete models) used during model checking hold at runtime. Combining model checking with runtime verification helps us bridge the gap between software verification (discrete) that makes assumptions about the low-level controllers and the physical world, and the actual execution of the software on a real robotic platform in the physical world. To demonstrate the efficacy of our approach, we build a safe adaptive surveillance system and present software-in-the-loop simulations of the application.",
"Formal verification and validation play a crucial role in making cyber-physical systems (CPS) safe. Formal methods make strong guarantees about the system behavior if accurate models of the system can be obtained, including models of the controller and of the physical dynamics. In CPS, models are essential; but any model we could possibly build necessarily deviates from the real world. If the real system fits to the model, its behavior is guaranteed to satisfy the correctness properties verified with respect to the model. Otherwise, all bets are off. This article introduces ModelPlex, a method ensuring that verification results about models apply to CPS implementations. ModelPlex provides correctness guarantees for CPS executions at runtime: it combines offline verification of CPS models with runtime validation of system executions for compliance with the model. ModelPlex ensures in a provably correct way that the verification results obtained for the model apply to the actual system runs by monitoring the behavior of the world for compliance with the model. If, at some point, the observed behavior no longer complies with the model so that offline verification results no longer apply, ModelPlex initiates provably safe fallback actions, assuming the system dynamics deviation is bounded. This article, furthermore, develops a systematic technique to synthesize provably correct monitors automatically from CPS proofs in differential dynamic logic by a correct-by-construction approach, leading to verifiably correct runtime model validation. Overall, ModelPlex generates provably correct monitor conditions that, if checked to hold at runtime, are provably guaranteed to imply that the offline safety verification results about the CPS model apply to the present run of the actual CPS implementation."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
Reachset conformance testing @cite_18 computes reachable sets of hybrid automata at runtime to transfer safety properties of reachability analysis methods by falsifying simulations or recorded data. The crucial benefit of our methods is to guarantee safety when the monitor conditions are satisfied at runtime of the monitored system, even for control decisions that are subject to actuator disturbance.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2339842460"
],
"abstract": [
"Industrial-sized hybrid systems are typically not amenable to formal verification techniques. For this reason, a common approach is to formally verify abstractions of (parts of) the original system. However, we need to show that this abstraction conforms to the actual system implementation including its physical dynamics. In particular, verified properties of the abstract system need to transfer to the implementation. To this end, we introduce a formal conformance relation, called reachset conformance, which guarantees transference of safety properties, while being a weaker relation than the existing trace inclusion conformance. Based on this formal relation, we present a conformance testing method which allows us to tune the trade-off between accuracy and computational load. Additionally, we present a test selection algorithm that uses a coverage measure to reduce the number of test cases for conformance testing. We experimentally show the benefits of our novel techniques based on an example from autonomous driving."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
Specification mining techniques for LTL can be adapted to monitor for safety violations @cite_24 and intervene ahead of time, assuming that the next input is available to the monitor. In cyber-physical systems, this is feasible only when the next input can be prevented from becoming actuated, which is the rationale behind our controller monitors @cite_20 safeguarding untrusted control. When the next input is measured through sensors after the fact, it may already represent an (unpreventable) safety violation; counteracting such violations requires additional assumptions on their nature @cite_20 and, as presented in this paper, means to detect gradual deviation from the model that accumulates into a violation over time.
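Detecting gradual deviation that accumulates over time can be illustrated with a CUSUM-style detector on model residuals (measured minus predicted values). This is a generic sketch of the idea, not the paper's synthesized monitoring conditions; the parameter names `slack` and `alarm_threshold` are assumptions for illustration.

```python
def cusum_drift(residuals, slack, alarm_threshold):
    """One-sided CUSUM drift detector: accumulates deviation in excess
    of `slack` and alarms once the running sum exceeds `alarm_threshold`.
    Returns the index of the first alarm, or None if no alarm is raised."""
    s = 0.0
    for i, r in enumerate(residuals):
        # Excess deviation accumulates; the sum is floored at zero so that
        # occasional good samples reset, rather than mask, a persistent drift.
        s = max(0.0, s + r - slack)
        if s > alarm_threshold:
            return i
    return None
```

Small per-step deviations that a pointwise threshold would miss still trigger the alarm once their cumulative effect grows large enough.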
|
{
"cite_N": [
"@cite_24",
"@cite_20"
],
"mid": [
"1512912754",
"2127240436"
],
"abstract": [
"Several control systems in safety-critical applications involve the interaction of an autonomous controller with one or more human operators. Examples include pilots interacting with an autopilot system in an aircraft, and a driver interacting with automated driver-assistance features in an automobile. The correctness of such systems depends not only on the autonomous controller, but also on the actions of the human controller. In this paper, we present a formalism for human-in-the-loop (HuIL) control systems. Particularly, we focus on the problem of synthesizing a semi-autonomous controller from high-level temporal specifications that expect occasional human intervention for correct operation. We present an algorithm for this problem, and demonstrate its operation on problems related to driver assistance in automobiles.",
"Formal verification and validation play a crucial role in making cyber-physical systems (CPS) safe. Formal methods make strong guarantees about the system behavior if accurate models of the system can be obtained, including models of the controller and of the physical dynamics. In CPS, models are essential; but any model we could possibly build necessarily deviates from the real world. If the real system fits to the model, its behavior is guaranteed to satisfy the correctness properties verified with respect to the model. Otherwise, all bets are off. This article introduces ModelPlex, a method ensuring that verification results about models apply to CPS implementations. ModelPlex provides correctness guarantees for CPS executions at runtime: it combines offline verification of CPS models with runtime validation of system executions for compliance with the model. ModelPlex ensures in a provably correct way that the verification results obtained for the model apply to the actual system runs by monitoring the behavior of the world for compliance with the model. If, at some point, the observed behavior no longer complies with the model so that offline verification results no longer apply, ModelPlex initiates provably safe fallback actions, assuming the system dynamics deviation is bounded. This article, furthermore, develops a systematic technique to synthesize provably correct monitors automatically from CPS proofs in differential dynamic logic by a correct-by-construction approach, leading to verifiably correct runtime model validation. Overall, ModelPlex generates provably correct monitor conditions that, if checked to hold at runtime, are provably guaranteed to imply that the offline safety verification results about the CPS model apply to the present run of the actual CPS implementation."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
Languages for modeling runtime monitors based on sensor events @cite_3 are purely discrete (e.g., speed lower than a threshold); they come without correctness guarantees on the mapping between the monitor and the system's inputs and outputs, and without correctness guarantees on the safety properties and alarms. In contrast, our methods guarantee that monitors satisfied at runtime imply system safety (and in particular safety of the resulting physical effects) by relating the observed dynamics to the safe models verified offline.
|
{
"cite_N": [
"@cite_3"
],
"mid": [
"2160614176"
],
"abstract": [
"We describe the Monitoring and Checking (MaC) framework which provides assurance on the correctness of an execution of a real-time system at runtime. Monitoring is performed based on a formal specification of system requirements. MaC bridges the gap between formal specification, which analyzes designs rather than implementations, and testing, which validates implementations but lacks formality. An important aspect of the framework is a clear separation between implementation-dependent description of monitored objects and high-level requirements specification. Another salient feature is automatic instrumentation of executable code. The paper presents an overview of the framework, languages to express monitoring scripts and requirements, and a prototype implementation of MaC targeted at systems implemented in Java."
]
}
|
1811.06502
|
2901234647
|
Formal verification provides strong safety guarantees but only for models of cyber-physical systems. Hybrid system models describe the required interplay of computation and physical dynamics, which is crucial to guarantee what computations lead to safe physical behavior (e.g., cars should not collide). Control computations that affect physical dynamics must act in advance to avoid possibly unsafe future circumstances. Formal verification then ensures that the controllers correctly identify and provably avoid unsafe future situations under a certain model of physics. But any model of physics necessarily deviates from reality and, moreover, any observation with real sensors and manipulation with real actuators is subject to uncertainty. This makes runtime validation a crucial step to monitor whether the model assumptions hold for the real system implementation. The key question is what property needs to be runtime-monitored and what a satisfied runtime monitor entails about the safety of the system: the observations of a runtime monitor only relate back to the safety of the system if they are themselves accompanied by a proof of correctness! For an unbroken chain of correctness guarantees, we, thus, synthesize runtime monitors in a provably correct way from provably safe hybrid system models. This paper addresses the inevitable challenge of making the synthesized monitoring conditions robust to partial observability of sensor uncertainty and partial controllability due to actuator disturbance. We show that the monitoring conditions result in provable safety guarantees with fallback controllers that react to monitor violation at runtime.
|
Robustness estimation methods @cite_15 @cite_1 @cite_23 measure the degree to which a monitor, given as a signal or metric temporal logic specification, is satisfied, in order to allow bounded perturbation akin to our actuator disturbance, but they cannot detect gradual drift in sensor measurements. These methods assume a finite time horizon, compact inputs and outputs, and restrictions on the dynamics (e.g., piecewise constant between sampling points @cite_23 ); they are therefore useful for detecting violations after they occur, but to ensure safety of the system at runtime they need to be augmented with a predictive model of the continuous dynamics, which we handle explicitly.
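For the simplest temporal operators, the robustness degree over a sampled trace reduces to worst-case and best-case margins. The following is a minimal sketch of that quantitative semantics for "always" and "eventually" applied to a threshold predicate; it illustrates the notion of robustness, not the algorithms of the cited tools.

```python
def robustness_always_lt(trace, threshold):
    """Robustness of G (x < threshold) over a sampled trace: the
    worst-case margin min_t (threshold - x_t). Positive means the
    formula holds with that margin; negative means it is violated."""
    return min(threshold - x for x in trace)

def robustness_eventually_lt(trace, threshold):
    """Robustness of F (x < threshold): the best-case margin
    max_t (threshold - x_t)."""
    return max(threshold - x for x in trace)
```

A robustness of, say, 2 under `robustness_always_lt` means every sample stays at least 2 below the threshold, so perturbations smaller than 2 cannot flip the verdict, which is exactly the "bounded perturbation" use mentioned above.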
|
{
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_23"
],
"mid": [
"2039287452",
"189973795",
"828139470"
],
"abstract": [
"Randomized testing is a popular approach for checking properties of large embedded system designs. It is well known that a uniform random choice of test inputs is often sub-optimal. Ideally, the choice of inputs has to be guided by choosing the right input distributions in order to expose corner-case violations. However, this is also known to be a hard problem, in practice. In this paper, we present an application of the cross-entropy method for adaptively choosing input distributions for falsifying temporal logic properties of hybrid systems. We present various choices for representing input distribution families for the cross-entropy method, ranging from a complete partitioning of the input space into cells to a factored distribution of the input using graphical models. Finally, we experimentally compare the falsification approach using the cross-entropy method to other stochastic and heuristic optimization techniques implemented inside the tool S-Taliro over a set of benchmark systems. The performance of the cross entropy method is quite promising. We find that sampling inputs using the cross-entropy method guided by trace robustness can discover violations faster, and more consistently than the other competing methods considered.",
"In this paper, we provide a Dynamic Programming algorithm for on-line monitoring of the state robustness of Metric Temporal Logic specifications with past time operators. We compute the robustness of MTL with unbounded past and bounded future temporal operators (MTL (^ <+ _ +pt )) over sampled traces of Cyber-Physical Systems. We implemented our tool in Matlab as a Simulink block that can be used in any Simulink model. We experimentally demonstrate that the overhead of the MTL (^ <+ _ +pt ) robustness monitoring is acceptable for certain classes of practical specifications.",
"Signal temporal logic (STL) is a formalism used to rigorously specify requirements of cyberphysical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). We propose an efficient algorithm to compute it and demonstrate its usage on two large scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach."
]
}
|
1811.06184
|
2900590651
|
We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy, expensive inner-city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize the rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically.
|
EV charging and discharging has previously been studied in the context of power loss minimization @cite_2 , frequency regulation @cite_8 , voltage regulation @cite_2 @cite_3 , peak shaving @cite_18 @cite_13 , and supporting renewable energy sources @cite_14 . We refer the reader to a recent survey by Amjad et al. @cite_17 on the various optimization approaches and objectives employed for EV charging. Our goal is most closely related to peak shaving; however, the above-mentioned work on peak shaving is very different from ours in that it either does not consider discharging @cite_13 or does not incentivize the EV owners @cite_18 .
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2100469382",
"2094270660",
"1971054096",
"2009663735",
"2567739352",
""
],
"abstract": [
"",
"We consider the management of electric vehicle (EV) loads within a market-based Electric Power System Control Area. EV load management achieves cost savings in both (i) EV battery charging and (ii) the provision of additional regulation service required by wind farm expansion. More specifically, we develop a decision support method for an EV Load Aggregator or Energy Service Company (ESCo) that controls the battery charging for a fleet of EVs. A hierarchical decision making methodology is proposed for hedging in the day-ahead market and for playing the real-time market in a manner that yields regulation service revenues and allows for negotiated discounts on the use of distribution network payments. Amongst several potential solutions that are available, we employ a rolling horizon look-ahead stochastic dynamic programming algorithm and report some typical computational experience.",
"Vehicle-to-grid (V2G) control has the potential to provide frequency regulation service for power system operation from electric vehicles (EVs). In this paper, a decentralized V2G control (DVC) method is proposed for EVs to participate in primary frequency control considering charging demands from EV customers. When an EV customer wants to maintain the residual state of charge (SOC) of the EV battery, a V2G control strategy, called battery SOC holder (BSH), is performed to maintain the battery energy around the residual SOC along with adaptive frequency droop control. If the residual battery energy is not enough for next trip, the customer needs to charge the EV to higher SOC level. Then, a smart charging method, called charging with frequency regulation (CFR), is developed to achieve scheduled charging and provide frequency regulation at the same time. Simulations on a two-area interconnected power system with wind power integration have shown the effectiveness of the proposed method.",
"Excessive carbon emissions from the current transportation sector has encouraged the growth of electric vehicles. Despite the environmental and economical benefits electric vehicles charging will introduce negative impacts on the existing network operation. This paper examines the voltage impact due to electric vehicle fast charging in low voltage distribution network during the peak load condition. Simulation results show that fast charging of only six electric vehicles have driven the network to go beyond the safe operational voltage level. Therefore, a bi-directional DC fast charging station with novel control topology is proposed to solve the voltage drop problem. The switching of power converter modules of DC fast charging station are controlled to fast charge the electric vehicles with new constant current reduced constant current approach. The control topology maintains the DC-link voltage at 800 V and provides reactive power compensation to regulate the network bus voltage at the steady-state voltage or rated voltage (one per unit). The reactive power compensation is realized by simple direct-voltage control, which is capable of supplying sufficient reactive power to grid in situations where the electric vehicle is charging or electric vehicle is not receiving charges.",
"The penetration of Electric Vehicle (EV) on the Indian grid and its positive impact can be seen if the EV's are co-ordinated. The co-ordinate charging and discharging of EV's can improve the voltage profile and reduce the power transmission loss. Primary distribution of Guwahati City is simulated using actual data. Voltage profile and transmission loss have been analyzed considering various levels of EV penetration and charging patterns. It is shown that coordinated charging and discharging of EV's on the grid will flatten the voltage profile of a bus as well as reduce the power loss.",
"Smart electric vehicle (EV) charging deals with increasing demand charges caused by EV load on EV supply equipment (EVSE) hosts. This paper proposes a real-time smart charging algorithm that can be integrated with commercial & industrial EVSE hosts through building energy management system or with utility back office through the advanced metering infrastructure. The proposed charging scheme implements a real-time water-filling algorithm able to reduce the peak demand and to prioritize EV charging based on the data of plugged-in EVs. The algorithm also accommodates utility and local demand response and load control signals for extensive peak shaving. Real-world EV charging data from different types of venues are used to develop and evaluate the smart charging scheme for demand charge reduction at medium & large general service locations. The results show that even at constrained venues such as large retails, monthly demand charges caused by EVs can be reduced by 20 –35 for 30 EV penetration level without depreciating EVs’ charging demand.",
""
]
}
|
1811.06184
|
2900590651
|
We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy, expensive inner-city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize the rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically.
|
Hutson et al. @cite_16 propose an intelligent scheduling of EVs in a parking lot, so as to maximize profits by discharging EVs when the market power price is high and charging when the price is low. The spatial variation of the price is not considered in their model. Finally, they utilize a particle swarm optimization approach that provides no guarantees and, indeed, can suffer from premature convergence. In contrast, we provide guarantees. More importantly, our goal is to alleviate congestion for the utility, not to collect the highest market power price. Sometimes these may coincide, but they are not necessarily related in general. Additionally, we charge and discharge EVs at multiple locations in a geographic area, as opposed to one parking lot.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"2542505360"
],
"abstract": [
"This paper proposes an intelligent method for scheduling usage of available energy storage capacity from plug-in hybrid electric vehicles (PHEV) and electric vehicles (EV). The batteries on these vehicles can either provide power to the grid when parked, known as vehicle-to-grid (V2G) concept or take power from the grid to charge the batteries on the vehicles. A scalable parking lot model is developed with different parameters assigned to fleets of vehicles. The size of the parking lot is assumed to be large enough to accommodate the number of vehicles performing grid transactions. In order to figure out the appropriate charge and discharge times throughout the day, binary particle swarm optimization is applied. Price curves from the California ISO database are used in this study to have realistic price fluctuations. Finding optimal solutions that maximize profits to vehicle owners while satisfying system and vehicle owners constraints is the objective of this study. Different fleets of vehicles are used to approximate varying customer base and demonstrate the scalability of parking lots for V2G. The results are compared for consistency and scalability. Discussions on how this technique can be applied to other grid issues such as peaking power are included at the end."
]
}
|
1811.06184
|
2900590651
|
We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy, expensive inner-city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize the rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically.
|
An online auction framework for EV charging is proposed by Xiang et al. @cite_9 , where, in a large parking lot, every spot is equipped with a charging point and EV users submit bids on their charging demands. The parking lot then decides on the allocation and pricing based on the collected bids. The proposed mechanism is shown to be truthful and individually rational, while approximately maximizing social welfare. Vehicle-to-grid (V2G) discharging is not considered in the proposed market mechanism, whereas effective discharging is our main objective.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2015020077"
],
"abstract": [
"The increasing market share of electric vehicles (EVs) makes large-scale charging stations indispensable infrastructure for integrating EVs into the future smart grid. Thus their operation modes have drawn great attention from researchers. One promising mode called park-and-charge was recently proposed. It allows people to park their EVs at a parking lot, where EVs can get charged during the parking time. This mode has been experimented and demonstrated in small scale. However, the missing of an efficient market mechanism is an important gap preventing its large-scale deployment. Existing pricing policies, e.g., pay-by-use and flat-rate pricing, would jeopardize the efficiency of electricity allocation and the corresponding social welfare in the park-and-charge mode, and thus are inapplicable. To find an efficient mechanism, this paper explores the feasibility and benefits of utilizing auction mechanism in the EV park-and-charge mode. The auction allows EV users to submit and update bids on their charging demand to the charging station, which makes corresponding electricity allocation and pricing decisions. To this end, we propose Auc2Charge, an online auction framework. Auc2Charge is truthful and individual rational. Running in polynomial time, it provides an efficient electricity allocation for EV users with a close-form approximation ratio on system social welfare. Through both theoretical analysis and numerical simulation, we demonstrate the efficacy of Auc2Charge in terms of social welfare and user satisfaction."
]
}
|
1811.06184
|
2900590651
|
We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy, expensive inner-city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize the rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically.
|
A combination of an autonomous parking system with EV charging is studied by Timpner and Wolf @cite_15 , with the goal of scheduling the charging times of autonomous vehicles on a limited number of charging stations in a parking lot. The difference from our valet model is that they consider homogeneous charging stations, a unidirectional flow of energy (no V2G), and charging station utilization as the objective, whereas in our paper heterogeneous stations in different locations, EV discharging (V2G), and reward collection are critical. Furthermore, the authors propose five different scheduling algorithms, but no theoretical guarantee is given with respect to their stated objective, whereas we provide theoretical guarantees.
|
{
"cite_N": [
"@cite_15"
],
"mid": [
"2118207746"
],
"abstract": [
"Electric vehicles (EVs) still have relatively long and frequent charging cycles. Moreover, charging resources are typically limited and must therefore be used efficiently. The V-Charge project has the vision to provide a solution by combining autonomous valet parking with e-mobility, introducing improved parking and charging comfort. V-Charge proposes a solution for charging autonomous EVs in parking places and efficiently using scarce charging resources, thus simplifying the life of the customer and increasing the feasibility of EVs. For the management of parking lots and charging resources, V-Charge provides a server back end and a communication infrastructure. In this paper, we present our design of scheduling concepts for a coordinated charging strategy that is implemented by this back end. Through intensive simulations, we show that the V-Charge server is able to efficiently handle realistic parking volume and performs well in fulfilling customer requirements, e.g., energy demand for the next driving tasks. Moreover, we evaluate the suitability of various scheduling strategies in different usage scenarios. For the simulation setup, real-world parking statistics obtained from Hamburg Airport and the City of Braunschweig, Germany, are used."
]
}
|
1811.06184
|
2900590651
|
We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy, expensive inner-city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize the rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically.
|
From an algorithmic perspective, most existing work is either based on mixed-integer programs @cite_0 , which cannot be solved efficiently for large instances, or on heuristics without any optimality guarantees. These heuristics include genetic algorithms @cite_12 , particle swarm optimization @cite_16 @cite_10 @cite_6 , and ant colony optimization @cite_11 . In contrast, we exploit techniques from theoretical computer science: in particular, we provide novel hardness results and approximation algorithms, and adapt an algorithm for an interval scheduling problem @cite_5 , to obtain efficient algorithms with rigorous performance guarantees.
|
{
"cite_N": [
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2161946994",
"2111380948",
"2047220011",
"2542505360",
"2131091718",
"2061926759",
""
],
"abstract": [
"This paper addresses the problem of energy resource scheduling. An aggregator will manage all distributed resources connected to its distribution network, including distributed generation based on renewable energy resources, demand response, storage systems, and electrical gridable vehicles. The use of gridable vehicles will have a significant impact on power systems management, especially in distribution networks. Therefore, the inclusion of vehicles in the optimal scheduling problem will be very important in future network management. The proposed particle swarm optimization approach is compared with a reference methodology based on mixed integer non-linear programming, implemented in GAMS, to evaluate the effectiveness of the proposed methodology. The paper includes a case study that consider a 32 bus distribution network with 66 distributed generators, 32 loads and 50 electric vehicles.",
"The Information and Communication Technologies (ICT) that are currently under development for future smart grid systems can enable load aggregators to have bidirectional communications with both the grid and Electric Vehicles (EVs) to obtain real-time price and load information, and to adjust EV charging schedules in real time. In addition, Energy Storage (ES) can be utilized by the aggregator to mitigate the impact of uncertainty and inaccurate prediction. In this paper, we study a problem of scheduling EV charging with ES from an electricity market perspective with joint consideration for the aggregator energy trading in the day-ahead and real-time markets. We present a Mixed Integer Linear Programming (MILP) model to provide optimal solutions as well as a simple polynomial-time heuristic algorithm based on LP rounding. In addition, we present a communication protocol for interactions among the aggregator, the ES, the power grid, and EVs, and demonstrate how to integrate the proposed scheduling approach in real-time charging operations. Extensive simulation results based on real electricity price and load data have been presented to justify the effectiveness of the proposed approach and to show how several key parameters affect its performance.",
"We study algorithmic problems that are motivated by bandwidth trading in next-generation networks. Typically, bandwidth trading involves sellers (e.g., network operators) interested in selling bandwidth pipes that offer to buyers a guaranteed level of service for a specified time interval. The buyers (e.g., bandwidth brokers) are looking to procure bandwidth pipes to satisfy the reservation requests of end-users (e.g., Internet subscribers). Depending on what is available in the bandwidth exchange, the goal of a buyer is to either spend the least amount of money so as to satisfy all the reservations made by its customers, or to maximize its revenue from whatever reservations can be satisfied. We model this as a real-time nonpreemptive scheduling problem in which machine types correspond to bandwidth pipes and jobs correspond to end-user reservation requests. Each job specifies a time interval during which it must be processed, and a set of machine types on which it can be executed. If necessary, multiple machines of a given type may be allocated, but each must be paid for. Finally, each job has associated with it a revenue, which is realized if the job is scheduled on some machine. There are two versions of the problem that we consider. In the cost minimization version, the goal is to minimize the total cost incurred for scheduling all jobs, and in the revenue maximization version the goal is to maximize the revenue of the jobs that are scheduled for processing on a given set of machines. We consider several variants of the problems that arise in practical scenarios, and provide constant factor approximations.",
"This paper proposes an intelligent method for scheduling usage of available energy storage capacity from plug-in hybrid electric vehicles (PHEV) and electric vehicles (EV). The batteries on these vehicles can either provide power to the grid when parked, known as vehicle-to-grid (V2G) concept or take power from the grid to charge the batteries on the vehicles. A scalable parking lot model is developed with different parameters assigned to fleets of vehicles. The size of the parking lot is assumed to be large enough to accommodate the number of vehicles performing grid transactions. In order to figure out the appropriate charge and discharge times throughout the day, binary particle swarm optimization is applied. Price curves from the California ISO database are used in this study to have realistic price fluctuations. Finding optimal solutions that maximize profits to vehicle owners while satisfying system and vehicle owners constraints is the objective of this study. Different fleets of vehicles are used to approximate varying customer base and demonstrate the scalability of parking lots for V2G. The results are compared for consistency and scalability. Discussions on how this technique can be applied to other grid issues such as peaking power are included at the end.",
"An automatic Vehicle-to-Grid (V2G) technology can contribute to the utility grid. V2G technology has drawn great interest in the recent years. Success of the sophisticated automatic V2G research depends on efficient scheduling of gridable vehicles in constrained parking lots. Parking lots have constraints of space and current limits for V2G. However, V2G can reduce dependencies on small expensive units in the existing power systems as energy storage that can decrease running costs. It can efficiently manage load fluctuation, peak load; however, it increases spinning reserves and reliability. As number of gridable vehicles in V2G is much higher than small units of existing systems, unit commitment (UC) with V2G is more complex than basic UC for only thermal units. Particle swarm optimization (PSO) is proposed to solve the V2G, as PSO has been demonstrated to reliably and accurately solve complex constrained optimization problems easily and quickly without any dimension limitation and physical computer memory limit. In the proposed model, binary PSO optimizes the on off states of power generating units easily. Vehicles are presented by signed integer number instead of 0 1 to reduce the dimension of the problem. Typical discrete version of PSO has less balance between local and global searching abilities to optimize the number of charging discharging gridable vehicles in the constrained system. In the same model, balanced PSO is proposed to optimize the V2G part in the constrained parking lots. Finally, results show a considerable amount of profit for using proper scheduling of gridable vehicles in constrained parking lots.",
"A large penetration of electric and plug-in hybrid electric vehicles would likely result in increased system peaks and overloading of power system assets if the charging of vehicles is left uncontrolled. In this paper we propose both a centralized and a decentralized smart-charging scheme which seek to minimize system-wide generation costs while respecting grid constraints. Under the centralized scheme, vehicles' batteries are aggregated to virtual storage resources at each network node, which are optimally dispatched with a multiperiod Optimal Power Flow. On the other hand, under the decentralized scheme, price profiles broadcasted to vehicles day-ahead are determined so that the optimal response of individual vehicles to this tariff achieves the goal of cost minimization. Two alternative tariffs are explored, one where the same price profile applies system-wide, and another where different prices can be defined at different nodes. Results show that compared with uncontrolled charging, these smart-charging schemes successfully avoid asset overloading, displace most charging to valley hours and reduce generation costs. Moreover they are robust in the face of forecast errors in vehicle behavior.",
""
]
}
|
1906.10823
|
2954037163
|
Deep learning has been used as a powerful tool for various tasks in computer vision, such as image segmentation, object recognition and data generation. A key part of end-to-end training is designing an appropriate encoder to extract specific features from the input data. However, few encoders maintain the topological properties of data, such as connection structures and global contours. In this paper, we introduce a Voronoi Diagram encoder based on convex set distance (CSVD) and apply it to edge encoding. The boundaries of Voronoi cells are related to the detected edges of structures and contours. The CSVD model improves contour extraction in CNNs and structure generation in GANs. We also present experimental results and demonstrate that the proposed model has great potential in different visual problems where topology information should be involved.
|
The Voronoi diagram (VD) has been widely applied in computer vision and graphics. Its dual, the Delaunay triangulation, avoids sliver triangles and is adopted in path planning for automated driving @cite_12 and face segmentation @cite_20 . Kise et al. @cite_29 propose an approximated area Voronoi diagram to analyze and segment page images. VD can also be applied to modeling biological structures, such as the distribution of cells @cite_2 and bone microarchitecture @cite_0 .
|
{
"cite_N": [
"@cite_29",
"@cite_0",
"@cite_2",
"@cite_12",
"@cite_20"
],
"mid": [
"2055408294",
"2139236989",
"2036290398",
"",
"2021566651"
],
"abstract": [
"This paper presents a method of page segmentation based on the approximated area Voronoi diagram. The characteristics of the proposed method are as follows: (1) The Voronoi diagram enables us to obtain the candidates of boundaries of document components from page images with non-Manhattan layout and a skew. (2) The candidates are utilized to estimate the intercharacter and interline gaps without the use of domain-specific parameters to select the boundaries. From the experimental results for 128 images with non-Manhattan layout and the skew of 0° 45° as well as 98 images with Manhattan layout, we have confirmed that the method is effective for extraction of body text regions, and it is as efficient as other methods based on connected component analysis.",
"We develop and evaluate a novel 3D computational bone framework, which is capable of enabling quantitative assessment of bone micro-architecture, bone mineral density and fracture risks. Our model for bone mineral is developed and its parameters are estimated from imaging data obtained with dual energy x-ray absorptiometry and x-ray imaging methods. Using these parameters, we propose a proper 3D microstructure bone model. The research starts by developing a spatio-temporal 3D microstructure bone model using Voronoi tessellation. Then, we simulate and analyze the architecture of human normal bone network and osteoporotic bone network with edge pruning process in an appropriate ratio. Finally, we design several measurements to analyze Bone Mineral Density (BMD) and bone strength based on our model. The validation results clearly demonstrate our 3D Microstructure Bone Model is robust to reflect the properties of bone in the real world.",
"Voronoi tessellations have been used to model the geometric arrangement of cells in morphogenetic or cancerous tissues, however, so far only with flat hyper-surfaces as cell-cell contact borders. In order to reproduce the experimentally observed piecewise spherical boundary shapes, we develop a consistent theoretical framework of multiplicatively weighted distance functions, defining generalized finite Voronoi neighborhoods around cell bodies of varying radius, which serve as heterogeneous generators of the resulting model tissue. The interactions between cells are represented by adhesive and repelling force densities on the cell contact borders. In addition, protrusive locomotion forces are implemented along the cell boundaries at the tissue margin, and stochastic perturbations allow for non-deterministic motility effects. Simulations of the emerging system of stochastic differential equations for position and velocity of cell centers show the feasibility of this Voronoi method generating realistic cell shapes. In the limiting case of a single cell pair in brief contact, the dynamical nonlinear Ornstein–Uhlenbeck process is analytically investigated. In general, topologically distinct tissue conformations are observed, exhibiting stability on different time scales, and tissue coherence is quantified by suitable characteristics. Finally, an argument is derived pointing to a tradeoff in natural tissues between cell size heterogeneity and the extension of cellular lamellae.",
"",
"Segmentation of human faces from still images is a research field of rapidly increasing interest. Although the field encounters several challenges, this paper seeks to present a novel face segmentation and facial feature extraction algorithm for gray intensity images (each containing a single face object). Face location and extraction must first be performed to obtain the approximate, if not exact, representation of a given face in an image. The proposed approach is based on the Voronoi diagram (VD), a well-known technique in computational geometry, which generates clusters of intensity values using information from the vertices of the external boundary of Delaunay triangulation (DT). In this way, it is possible to produce segmented image regions. A greedy search algorithm looks for a particular face candidate by focusing its action in elliptical-like regions. VD is presently employed in many fields, but researchers primarily focus on its use in skeletonization and for generating Euclidean distances; this work exploits the triangulations (i.e., Delaunay) generated by the VD for use in this field. A distance transformation is applied to segment face features. We used the BioID face database to test our algorithm. We obtained promising results: 95.14 of faces were correctly segmented; 90.2 of eyes were detected and a 98.03 detection rate was obtained for mouth and nose."
]
}
|
1906.10823
|
2954037163
|
Deep learning has been used as a powerful tool for various tasks in computer vision, such as image segmentation, object recognition and data generation. A key part of end-to-end training is designing an appropriate encoder to extract specific features from the input data. However, few encoders maintain the topological properties of data, such as connection structures and global contours. In this paper, we introduce a Voronoi Diagram encoder based on convex set distance (CSVD) and apply it to edge encoding. The boundaries of Voronoi cells are related to the detected edges of structures and contours. The CSVD model improves contour extraction in CNNs and structure generation in GANs. We also present experimental results and demonstrate that the proposed model has great potential in different visual problems where topology information should be involved.
|
GANs were invented by Ian Goodfellow @cite_26 and have since developed into powerful tools for data generation. Zhu et al. @cite_10 apply GANs to learn from natural image databases and output reasonable landscapes given a few user strokes. Pixel-to-pixel photo-realistic image synthesis in more complicated scenes is achieved by conditional GANs @cite_15 . Karras et al. @cite_14 grow both the generator and the discriminator progressively, in a coarse-to-fine manner, to generate high-quality images with fine details. An analytic framework is proposed in @cite_19 to build an understanding of objects and context in real images, through which an existing unit can be placed in a new surrounding without conflicts. So far, all of these efforts operate at the pixel level and focus on discrete image distributions; they are unable to maintain the connecting structures of the input data. Our net-embedding model converts the edge distribution into Voronoi diagram parameters, which are continuous and topologically invariant. After learning the distribution of VD parameters, the output diagram has the same topological structure as the input.
|
{
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_19",
"@cite_15",
"@cite_10"
],
"mid": [
"2962760235",
"2099471712",
"2901107321",
"2963800363",
"2519536754"
],
"abstract": [
"We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, they have not been well visualized or understood. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. We examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.",
"We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048 × 1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.",
"Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to “fall off” the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user’s scribbles."
]
}
|
1906.10823
|
2954037163
|
Deep learning has been used as a powerful tool for various tasks in computer vision, such as image segmentation, object recognition and data generation. A key part of end-to-end training is designing the appropriate encoder to extract specific features from the input data. However, few encoders maintain the topological properties of data, such as connection structures and global contours. In this paper, we introduce a Voronoi Diagram encoder based on convex set distance (CSVD) and apply it in edge encoding. The boundaries of Voronoi cells are related to the detected edges of structures and contours. The CSVD model improves contour extraction in CNNs and structure generation in GANs. We also show experimental results and demonstrate that the proposed model has great potential in different visual problems where topology information should be involved.
|
Contour detection has always been a classic topic in image processing, widely applied in object detection, recognition and classification. In edge detection, discontinuity pixels are extracted and a delicate algorithm is designed to trace true edges among these pixels @cite_3 . In most visual problems, noise caused by texture disturbs recognition, and a clean outline is necessary. One of the most famous traditional methods is snakes @cite_9 , in which an active contour is propagated to track object boundaries. Li et al. @cite_30 use an unsupervised learning method to detect edges in video with the help of optical flow. Data-driven approaches are adopted by @cite_4 @cite_21 @cite_22 , which learn the probability that each pixel belongs to a certain class. Multi-scale convolution layers are included in @cite_32 @cite_24 to take advantage of the global image distribution. We will show that the topology information in our CSVD model helps to extract clean outlines; the CSVD net has great potential to be embedded in both supervised and unsupervised learning of contours.
|
{
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_24"
],
"mid": [
"2962958090",
"1930528368",
"",
"2171101179",
"",
"2560622558",
"2145023731",
""
],
"abstract": [
"Data-driven approaches for edge detection have proven effective and achieve top results on modern benchmarks. However, all current data-driven edge detectors require manual supervision for training in the form of hand-labeled region segments or object boundaries. Specifically, human annotators mark semantically meaningful edges which are subsequently used for training. Is this form of strong, high-level supervision actually necessary to learn to accurately detect edges? In this work we present a simple yet effective approach for training edge detectors without human supervision. To this end we utilize motion, and more specifically, the only input to our method is noisy semi-dense matches between frames. We begin with only a rudimentary knowledge of edges (in the form of image gradients), and alternate between improving motion estimation and edge detection in turn. Using a large corpus of video data, we show that edge detectors trained using our unsupervised scheme approach the performance of the same methods trained with full supervision (within 3-5%). Finally, we show that when using a deep network for the edge detector, our approach provides a novel pre-training scheme for object detection.",
"Contour detection has been a fundamental component in many image segmentation and object detection systems. Most previous work utilizes low-level features such as texture or saliency to detect contours and then use them as cues for a higher-level task such as object detection. However, we claim that recognizing objects and predicting contours are two mutually related tasks. Contrary to traditional approaches, we show that we can invert the commonly established pipeline: instead of detecting contours with low-level cues for a higher-level recognition task, we exploit object-related features as high-level cues for contour detection.",
"",
"The author investigated automatic extraction of left ventricular contours from cardiac magnetic resonance imaging (MRI) studies. The contour extraction algorithms were based on active contour models, or snakes. Based on cardiac MR image characteristics, the author suggested algorithms for extracting contours from these large data sets. The author specifically considered contour propagation methods to make the contours reliable enough despite noise, artifacts, and poor temporal resolution. The emphasis was on reliable contour extraction with a minimum of user interaction. Both spin echo and gradient echo studies were considered. The extracted contours were used for determining quantitative measures for the heart and could also be used for obtaining graphically rendered cardiac surfaces.",
"",
"Edge detection is a fundamental problem in computer vision. Recently, convolutional neural networks (CNNs) have pushed forward this field significantly. Existing methods which adopt specific layers of deep CNNs may fail to capture complex data structures caused by variations of scales and aspect ratios. In this paper, we propose an accurate edge detector using richer convolutional features (RCF). RCF encapsulates all convolutional features into more discriminative representation, which makes good usage of rich feature hierarchies, and is amenable to training via backpropagation. RCF fully exploits multiscale and multilevel information of objects to perform the image-to-image prediction holistically. Using VGG16 network, we achieve state-of-the-art performance on several available datasets. When evaluating on the well-known BSDS500 benchmark, we achieve ODS F-measure of 0.811 while retaining a fast speed (8 FPS). Besides, our fast version of RCF achieves ODS F-measure of 0.806 with 30 FPS. We also demonstrate the versatility of the proposed method by applying RCF edges for classical image segmentation.",
"This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.",
""
]
}
|
1906.10823
|
2954037163
|
Deep learning has been used as a powerful tool for various tasks in computer vision, such as image segmentation, object recognition and data generation. A key part of end-to-end training is designing the appropriate encoder to extract specific features from the input data. However, few encoders maintain the topological properties of data, such as connection structures and global contours. In this paper, we introduce a Voronoi Diagram encoder based on convex set distance (CSVD) and apply it in edge encoding. The boundaries of Voronoi cells are related to the detected edges of structures and contours. The CSVD model improves contour extraction in CNNs and structure generation in GANs. We also show experimental results and demonstrate that the proposed model has great potential in different visual problems where topology information should be involved.
|
Traditional convolution layers encounter difficulties when facing 3D data: an additional dimension brings huge costs in both memory and computation. Different attempts have been made to take advantage of the sparsity of 3D data. A feature-centric voting scheme is employed to build novel convolutional layers over the input point cloud in @cite_13 . Multi-view CNNs are applied to 3D shape recognition @cite_23 and improved by introducing multi-resolution filtering in @cite_33 . Octree-based convolutional neural networks encode 3D data adaptively @cite_16 @cite_17 . The surfaces of objects can be seen as outlines in 2D images, so a 3D extension of the CSVD model can be used to encode shape features; it is more flexible and efficient than a voxel-based representation.
|
{
"cite_N": [
"@cite_33",
"@cite_23",
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"2962731536",
"1910619957",
"2556802233",
"2963721253",
""
],
"abstract": [
"3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-theart methods rely on CNNs to address this problem. Recently, we witness two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multiresolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.",
"Contour detection serves as the basis of a variety of computer vision tasks such as image segmentation and object recognition. The mainstream works to address this problem focus on designing engineered gradient features. In this work, we show that contour detection accuracy can be improved by instead making the use of the deep features learned from convolutional neural networks (CNNs). While rather than using the networks as a blackbox feature extractor, we customize the training strategy by partitioning contour (positive) data into subclasses and fitting each subclass by different model parameters. A new loss function, named positive-sharing loss, in which each subclass shares the loss for the whole positive class, is proposed to learn the parameters. Compared to the sofmax loss function, the proposed one, introduces an extra regularizer to emphasizes the losses for the positive and negative classes, which facilitates to explore more discriminative features. Our experimental results demonstrate that learned deep features can achieve top performance on Berkeley Segmentation Dataset and Benchmark (BSDS500) and obtain competitive cross dataset generalization result on the NYUD dataset.",
"We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows to focus memory allocation and computation to the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.",
"This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that Vote3Deep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
""
]
}
|
1906.10720
|
2955382287
|
Recurrent neural networks (RNNs) are a widely used tool for modeling sequential data, yet they are often treated as inscrutable black boxes. Given a trained recurrent network, we would like to reverse engineer it--to obtain a quantitative, interpretable description of how it solves a particular task. Even for simple tasks, a detailed understanding of how recurrent networks work, or a prescription for how to develop such an understanding, remains elusive. In this work, we use tools from dynamical systems analysis to reverse engineer recurrent networks trained to perform sentiment classification, a foundational natural language processing task. Given a trained network, we find fixed points of the recurrent dynamics and linearize the nonlinear system around these fixed points. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. In particular, the topological structure of the fixed points and corresponding linearized dynamics reveal an approximate line attractor within the RNN, which we can use to quantitatively understand how the RNN solves the sentiment analysis task. Finally, we find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs) trained on multiple datasets, suggesting that our findings are not unique to a particular architecture or dataset. Overall, these results demonstrate that surprisingly universal and human interpretable computations can arise across a range of recurrent networks.
|
Several studies have tried to interpret recurrent networks by visualizing the activity of individual RNN units and memory gates during NLP tasks @cite_0 @cite_17 . While some individual RNN state variables appear to encode semantically meaningful features, most units do not have clear interpretations. For example, the hidden states of an LSTM appear extremely complex when performing a task (Fig. ). Other work has suggested that network units with human interpretable behaviors (e.g. class selectivity) are not more important for network performance @cite_2 , and thus our understanding of RNN function may be misled by focusing only on single interpretable units. Instead, this work aims to interpret the entire hidden state to infer computational mechanisms underlying trained RNNs.
|
{
"cite_N": [
"@cite_0",
"@cite_2",
"@cite_17"
],
"mid": [
"1951216520",
"2963420658",
""
],
"abstract": [
"Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.",
"Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activation of a single unit or some linear combination of units in response to some input) have been highlighted, but their importance has not been evaluated. Here, we connect these lines of inquiry to demonstrate that a network’s reliance on single directions is a good predictor of its generalization performance, across networks trained on datasets with different fractions of corrupted labels, across ensembles of networks trained on datasets with unmodified labels, across different hyperparameters, and over the course of training. While dropout only regularizes this quantity up to a point, batch normalization implicitly discourages single direction reliance, in part by decreasing the class selectivity of individual units. Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.",
""
]
}
|
1906.10775
|
2955300456
|
The ability to quickly revoke a compromised key is critical to the security of a public-key infrastructure. Regrettably, most certificate revocation schemes suffer from latency, availability, or privacy issues. The problem is exacerbated by the lack of a native delegation mechanism in TLS, which increasingly leads domain owners to engage in dangerous practices such as sharing their private keys with third parties. We investigate the utility of "proxy certificates" to address long-standing revocation and delegation shortcomings in the web PKI. By issuing proxy certificates, entities holding a regular (non-CA) certificate can grant all or a subset of their privileges to other entities. This fine-grained control over delegating privileges requires no further action from a CA, yet does not require trust on first use (TOFU). The lifetime of a proxy certificate can be made almost arbitrarily short to curb the consequences of a key compromise. We analyze the benefits of this approach in comparison to alternatives, discussing various use cases and technical implications. We also show that combining short-lived proxy certificates with other schemes constitutes an attractive solution to several pressing problems. Overall, we make the case that the benefits obtained from incorporating proxy certificates into the current PKI substantially outweigh the changes required in practice. Such changes are minimal, and would only be required on the browser end, should a domain owner opt to use proxy certificates.
|
AKI @cite_30 and its successor ARPKI @cite_10 are more holistic approaches to upgrading the web PKI. One of the main ideas in these proposals is that resilience to compromise can be improved by requiring that multiple CAs sign each domain certificate. Additionally, to guarantee that no other rogue certificate exists for a given domain, all certificates must be logged, and log servers are used to efficiently produce both presence and absence proofs. ARPKI's key security properties were also formally verified. Unfortunately, this design also implies that only one certificate per domain can be valid at any given time. PoliCert @cite_8 builds on top of ARPKI to solve that problem by replacing the unique domain certificate with a unique domain policy that specifies which CAs are allowed to issue (potentially multiple) certificates for that domain. However, that approach does not allow domain owners to rapidly change their policies or produce their own certificates (i.e., without contacting several CAs). Therefore, proxy certificates could complement ARPKI certificate chains as a lightweight and more dynamic alternative to PoliCert.
|
{
"cite_N": [
"@cite_30",
"@cite_10",
"@cite_8"
],
"mid": [
"2294157280",
"2511395838",
"2095738444"
],
"abstract": [
"Recent trends in public-key infrastructure research explore the tradeoff between decreased trust in Certificate Authorities (CAs), resilience against attacks, communication overhead (bandwidth and latency) for setting up an SSL TLS connection, and availability with respect to verifiability of public key information. In this paper, we propose AKI as a new public-key validation infrastructure, to reduce the level of trust in CAs. AKI integrates an architecture for key revocation of all entities (e.g., CAs, domains) with an architecture for accountability of all infrastructure parties through checks-and-balances. AKI efficiently handles common certification operations, and gracefully handles catastrophic events such as domain key loss or compromise. We propose AKI to make progress towards a public-key validation infrastructure with key revocation that reduces trust in any single entity.",
"The current Transport Layer Security (TLS) Public-Key Infrastructure (PKI) is based on a weakest-link security model that depends on over a thousand trust roots. The recent history of malicious and compromised Certification Authorities has fueled the desire for alternatives. Creating a new, secure infrastructure is, however, a surprisingly challenging task due to the large number of parties involved and the many ways that they can interact. A principled approach to its design is therefore mandatory, as humans cannot feasibly consider all the cases that can occur due to the multitude of interleavings of actions by legitimate parties and attackers, such as private key compromises (e.g., domain, Certification Authority, log server, other trusted entities), key revocations, key updates, etc. We present ARPKI, a PKI architecture that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI efficiently supports these operations, and gracefully handles catastrophic events such as domain key loss or compromise. Moreover ARPKI is the first PKI architecture that is co-designed with a formal model, and we verify its core security property using the TAMARIN prover. We prove that ARPKI offers extremely strong security guarantees, where compromising even n-1 trusted signing and verifying entities is insufficient to launch a man-in-the-middle attack. Moreover, ARPKI’s use deters misbehavior as all operations are publicly visible. Finally, we present a proof-of-concept implementation that provides all the features required for deployment. Our experiments indicate that ARPKI efficiently handles the certification process with low overhead. It does not incur additional latency to TLS, since no additional round trips are required.",
"The recently proposed concept of publicly verifiable logs is a promising approach for mitigating security issues and threats of the current Public-Key Infrastructure (PKI). Although much progress has been made towards a more secure infrastructure, the currently proposed approaches still suffer from security vulnerabilities, inefficiency, or incremental deployment challenges. In this paper we propose PoliCert, a comprehensive log-based and domain-oriented architecture that enhances the security of PKI by offering: a) stronger authentication of a domain's public keys, b) comprehensive and clean mechanisms for certificate management, and c) an incentivised incremental deployment plan. Surprisingly, our approach has proved fruitful in addressing other seemingly unrelated problems such as TLS-related error handling and client/server misconfiguration."
]
}
|
1906.10775
|
2955300456
|
The ability to quickly revoke a compromised key is critical to the security of a public-key infrastructure. Regrettably, most certificate revocation schemes suffer from latency, availability, or privacy issues. The problem is exacerbated by the lack of a native delegation mechanism in TLS, which increasingly leads domain owners to engage in dangerous practices such as sharing their private keys with third parties. We investigate the utility of "proxy certificates" to address long-standing revocation and delegation shortcomings in the web PKI. By issuing proxy certificates, entities holding a regular (non-CA) certificate can grant all or a subset of their privileges to other entities. This fine-grained control over delegating privileges requires no further action from a CA, yet does not require trust on first use (TOFU). The lifetime of a proxy certificate can be made almost arbitrarily short to curb the consequences of a key compromise. We analyze the benefits of this approach in comparison to alternatives, discussing various use cases and technical implications. We also show that combining short-lived proxy certificates with other schemes constitutes an attractive solution to several pressing problems. Overall, we make the case that the benefits obtained from incorporating proxy certificates into the current PKI substantially outweigh the changes required in practice. Such changes are minimal, and would only be required on the browser end, should a domain owner opt to use proxy certificates.
|
A number of previous research papers have addressed the problem of delegation. @cite_16 argue that a secure delegation system should always make explicit ``who will do what to whom'', and present a delegation system for the SSH protocol, called Guardian Agent. @cite_18 address the problem of server delegation in the context of capture-resilient devices (i.e., devices required to confirm password guesses with a designated remote server before performing private-key operations). STYX @cite_4 is a key management scheme, based on Intel SGX, Intel QuickAssist Technology, and the SIGMA (SIGn-and-MAc) protocol, which can be used to distribute and protect SSL/TLS keys.
|
{
"cite_N": [
"@cite_18",
"@cite_16",
"@cite_4"
],
"mid": [
"2017306439",
"2769473656",
"2756609872"
],
"abstract": [
"A device that performs private key operations (signatures or decryptions), and whose private key operations are protected by a password, can be immunized against offline dictionary attacks in case of capture by forcing the device to confirm a password guess with a designated remote server in order to perform a private key operation. Recent proposals for achieving this allow untrusted servers and require no server initialization per device. In this paper we extend these proposals to enable dynamic delegation from one server to another; i.e., the device can subsequently use the second server to secure its private key operations. One application is to allow a user who is traveling to a foreign country to temporarily delegate to a server local to that country the ability to confirm password guesses and aid the user's device in performing private key operations, or in the limit, to temporarily delegate this ability to a token in the user's possession. Another application is proactive security for the device's private key, i.e., proactive updates to the device and servers to eliminate any threat of offline password guessing attacks due to previously compromised servers.",
"Today's secure stream protocols, SSH and TLS, were designed for end-to-end security and do not include a role for semi-trusted third parties. As a result, users who wish to delegate some of their authority to third parties (e.g., to run SSH clients in the cloud, or to host websites on CDNs) rely on insecure workarounds such as ssh-agent forwarding and Keyless TLS. We argue that protocol designers should consider the delegation use-case explicitly, and we propose a definition of \"secure\" delegation: Before a principal agrees to delegate its authority, a system should provide it with secure advance notice of who will do what to whom under that authority. We developed Guardian Agent, a delegation system for the SSH protocol that, unlike ssh-agent forwarding, allows the user to control which delegate machines can run which commands on which servers. We were able to implement Guardian Agent in a way that remains fully compatible with existing SSH servers, by \"handing over\" a secure connection to the delegate once it has been set up. Additionally, we use this work to suggest a path for secure delegation on the Web.",
"Protecting the customer's SSL private key is the paramount issue to persuade the website owners to migrate their contents onto the cloud infrastructure, besides the advantages of cloud infrastructure in terms of flexibility, efficiency, scalability and elasticity. The emerging Keyless SSL solution retains on-premise custody of customers' SSL private keys on their own servers. However, it suffers from significant performance degradation and limited scalability, caused by the long distance connection to Key Server for each new coming end-user request. The performance improvements using persistent session and key caching onto cloud will degrade the key invulnerability and discourage the website owners because of the cloud's security bugs. In this paper, the challenges of secured key protection and distribution are addressed in philosophy of \"Storing the trusted DATA on untrusted platform and transmitting through untrusted channel\". To this end, a three-phase hierarchical key management scheme, called STYX, is proposed to provide the secured key protection together with hardware assisted service acceleration for cloud-based content delivery network (CCDN) applications. The STYX is implemented based on Intel Software Guard Extensions (SGX), Intel QuickAssist Technology (QAT) and SIGMA (SIGn-and-MAc) protocol. STYX can provide the tight key security guarantee by SGX based key distribution with a light overhead, and it can further significantly enhance the system performance with QAT based acceleration. The comprehensive evaluations show that the STYX not only guarantees the absolute security but also outperforms the direct HTTPS server deployed CDN without QAT by up to 5x throughput with significant latency reduction at the same time."
]
}
|
1906.10775
|
2955300456
|
The ability to quickly revoke a compromised key is critical to the security of a public-key infrastructure. Regrettably, most certificate revocation schemes suffer from latency, availability, or privacy issues. The problem is exacerbated by the lack of a native delegation mechanism in TLS, which increasingly leads domain owners to engage in dangerous practices such as sharing their private keys with third parties. We investigate the utility of "proxy certificates" to address long-standing revocation and delegation shortcomings in the web PKI. By issuing proxy certificates, entities holding a regular (non-CA) certificate can grant all or a subset of their privileges to other entities. This fine-grained control on delegating privileges requires no further actions from a CA, yet does not require trust on first use (TOFU). The lifetime of a proxy certificate can be made almost arbitrarily short to curb the consequences of a key compromise. We analyze the benefits of this approach in comparison to alternatives, discussing various use cases and technical implications. We also show that combining short-lived proxy certificates with other schemes constitutes an attractive solution to several pressing problems. Overall, we make the case that the benefits obtained from incorporating proxy certificates into the current PKI substantially outweighs the changes required in practice. Such changes are minimal, and would only be required on the browser end, should a domain owner opt to use proxy certificates.
|
@cite_34 recently showed, after analyzing 48 popular browsers, that TLS session resumption is also problematic for user privacy, as it can be used to track the average user for up to eight days under standard settings; with a long session resumption lifetime, a majority of users can even be tracked permanently. Problems have also been discovered in the way CAs perform domain validation: by exploiting vulnerabilities in BGP or DNS to hijack traffic, an attacker can obtain a rogue certificate for a domain it does not own from vulnerable CAs @cite_20 @cite_15 . Serious vulnerabilities have also been found in popular implementations of SSL/TLS using ``frankencerts'', synthetic certificates with unusual combinations of extensions and constraints @cite_36 .
|
{
"cite_N": [
"@cite_36",
"@cite_15",
"@cite_34",
"@cite_20"
],
"mid": [
"1976919795",
"2889555490",
"2896648147",
"2889089210"
],
"abstract": [
"Modern network security rests on the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. Distributed systems, mobile and desktop applications, embedded devices, and all of secure Web rely on SSL/TLS for protection against network attacks. This protection critically depends on whether SSL/TLS clients correctly validate X.509 certificates presented by servers during the SSL/TLS handshake protocol. We design, implement, and apply the first methodology for large-scale testing of certificate validation logic in SSL/TLS implementations. Our first ingredient is \"frankencerts,\" synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations. Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc. Many of them are caused by serious security vulnerabilities. For example, any server with a valid X.509 version 1 certificate can act as a rogue certificate authority and issue fake certificates for any domain, enabling man-in-the-middle attacks against MatrixSSL and GnuTLS. Several implementations also accept certificate authorities created by unauthorized issuers, as well as certificates not intended for server authentication. We also found serious vulnerabilities in how users are warned about certificate validation errors. When presented with an expired, self-signed certificate, NSS, Safari, and Chrome (on Linux) report that the certificate has expired - a low-risk, often ignored error - but not that the connection is insecure against a man-in-the-middle attack. These results demonstrate that automated adversarial testing with frankencerts is a powerful methodology for discovering security flaws in SSL/TLS implementations.",
"The security of Internet-based applications fundamentally relies on the trustworthiness of Certificate Authorities (CAs). We practically demonstrate for the first time that even a weak off-path attacker can effectively subvert the trustworthiness of popular commercially used CAs. Our attack targets CAs which use Domain Validation (DV) for authenticating domain ownership; collectively these CAs control 99% of the certificates market. The attack utilises DNS Cache poisoning and tricks the CA into issuing fraudulent certificates for domains the attacker does not legitimately own -- namely certificates binding the attacker's public key to a victim domain. We discuss short and long term defences, but argue that they fall short of securing DV. To mitigate the threats we propose Domain Validation++ (DV++). DV++ replaces the need in cryptography through assumptions in distributed systems. While retaining the benefits of DV (automation, efficiency and low costs) DV++ is secure even against Man-in-the-Middle (MitM) attackers. Deployment of DV++ is simple and does not require changing the existing infrastructure nor systems of the CAs. We demonstrate security of DV++ under realistic assumptions and provide open source access to DV++ implementation.",
"User tracking on the Internet can come in various forms, e.g., via cookies or by fingerprinting web browsers. A technique that got less attention so far is user tracking based on TLS and specifically based on the TLS session resumption mechanism. To the best of our knowledge, we are the first that investigate the applicability of TLS session resumption for user tracking. For that, we evaluated the configuration of 48 popular browsers and one million of the most popular websites. Moreover, we present a so-called prolongation attack, which allows extending the tracking period beyond the lifetime of the session resumption mechanism. To show that under the observed browser configurations tracking via TLS session resumptions is feasible, we also looked into DNS data to understand the longest consecutive tracking period for a user by a particular website. Our results indicate that with the standard setting of the session resumption lifetime in many current browsers, the average user can be tracked for up to eight days. With a session resumption lifetime of seven days, as recommended upper limit in the draft for TLS version 1.3, 65% of all users in our dataset can be tracked permanently.",
""
]
}
|
1906.10689
|
2951269809
|
This article describes the application of soft computing methods for solving the problem of locating garbage accumulation points in urban scenarios. This is a relevant problem in modern smart cities, in order to reduce negative environmental and social impacts in the waste management process, and also to optimize the available budget from the city administration to install waste bins. A specific problem model is presented, which accounts for reducing the investment costs, enhancing the number of citizens served by the installed bins, and the accessibility to the system. A family of single- and multi-objective heuristics based on the PageRank method and two multi-objective evolutionary algorithms are proposed. Experimental evaluation performed on real scenarios on the cities of Montevideo (Uruguay) and Bahia Blanca (Argentina) demonstrates the effectiveness of the proposed approaches. The methods allow computing plannings with different trade-off between the problem objectives. The computed results improve over the current planning in Montevideo and provide a reasonable budget cost and quality of service for Bahia Blanca.
|
Due to the aforementioned NP-hard nature of the problem, it is not surprising that the majority of the works addressing the GAP location problem have used heuristic or metaheuristic approaches. Although some works have applied exact methods, these approaches generally fail to properly handle large-scale real-world instances. For example, @cite_27 had to partition the original instance (which as a whole was solved heuristically) in order to apply an exact approach, and even then optimal solutions were not found after 3600 seconds of execution. In another example, the exact algorithm proposed by @cite_21 was highly time consuming even when applied to a single-objective version of the GAP location problem, being unable to find optimal solutions after 4200 seconds of execution.
|
{
"cite_N": [
"@cite_27",
"@cite_21"
],
"mid": [
"1994430056",
"2917309866"
],
"abstract": [
"Abstract Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources, and having a major environmental impact. The design of a waste management system consists in various activities, and one of these is related to the location of waste collection sites. In this paper, we propose an integer programming model that helps decision makers in choosing the sites where to locate the unsorted waste collection bins in a residential town, as well as the capacities of the bins to be located at each collection site. This model helps in assessing tactical decisions through constraints that force each collection area to be capacitated enough to fit the expected waste to be directed to that area, while taking into account Quality of Service constraints from the citizens’ point of view. Moreover, we propose an effective constructive heuristic approach whose aim is to provide a good solution quality in an extremely reduced computational time. Computational results on data related to the city of Nardo, in the south of Italy, show that both exact and heuristic approaches provide consistently better solutions than that currently implemented, resulting in a lower number of activated collection sites, and a lower number of bins to be used.",
"Residential garbage collection is an important urban issue to address in modern cities, being a key activity that explains a large proportion of budget expenses for local governments. Under the smart cities paradigm, specific solutions can be developed to plan a better garbage collection system, improving the quality of service provided to citizens and reducing costs. This article addresses the problem of selecting locations for community bins in a medium size Argentinian city, that stills uses a door-to-door system. An integer programming model is presented to locate community bins that minimize the installment cost while also maximize the days between two consecutive visit of the collection vehicle. Results demonstrate that the proposed model and the proposed resolution algorithm were able to provide a set of suitable solutions that can be used as a starting point for migrating from the current door-to-door system to a community bins system."
]
}
|
1906.10689
|
2951269809
|
This article describes the application of soft computing methods for solving the problem of locating garbage accumulation points in urban scenarios. This is a relevant problem in modern smart cities, in order to reduce negative environmental and social impacts in the waste management process, and also to optimize the available budget from the city administration to install waste bins. A specific problem model is presented, which accounts for reducing the investment costs, enhancing the number of citizens served by the installed bins, and the accessibility to the system. A family of single- and multi-objective heuristics based on the PageRank method and two multi-objective evolutionary algorithms are proposed. Experimental evaluation performed on real scenarios on the cities of Montevideo (Uruguay) and Bahia Blanca (Argentina) demonstrates the effectiveness of the proposed approaches. The methods allow computing plannings with different trade-off between the problem objectives. The computed results improve over the current planning in Montevideo and provide a reasonable budget cost and quality of service for Bahia Blanca.
|
Several articles have presented heuristics and soft computing methods for solving problems that are similar to the GAP location problem. Bautista and Pereira @cite_15 modeled the GAP location problem as a minimal set covering problem and as a maximum satisfiability (MAX-SAT) problem, and proposed a genetic algorithm and a GRASP metaheuristic for solving real instances in Barcelona, Spain. Other authors have applied integrated approaches to solve the bins location problem and the collection routing problem simultaneously. For example, Chang and Wei @cite_32 used a fuzzy evolutionary search to solve the problem for a scenario in Kaohsiung, Taiwan. The model considered as objectives the percentage of population served, the average walking distance between users and their assigned GAP, and the approximate length of the routes of the collecting vehicles; explicit costs were not taken into account. @cite_8 introduced the Waste Bin Allocation and Routing Problem, which was solved applying different methodologies that combine sequential and simultaneous strategies: the allocation was solved either with an exact or a heuristic method, while the routing was solved using Variable Neighborhood Search.
|
{
"cite_N": [
"@cite_15",
"@cite_32",
"@cite_8"
],
"mid": [
"2062863528",
"2004034304",
"2115083197"
],
"abstract": [
"Reverse logistics problems arising in municipal waste management are both wide-ranging and varied. The usual collection system in UE countries is composed of two phases. First, citizens leave their refuse at special collection areas where different types of waste (glass, paper, plastic, organic material) are stored in special refuse bins. Subsequently, each type of waste is collected separately and moved to its final destination (a recycling plant or refuse dump). The present study focuses on the problem of locating these collection areas. We establish the relationship between the problem, the set covering problem and the MAX-SAT problem and then go on to develop a genetic algorithm and a GRASP heuristic to, respectively, solve each formulation. Finally, the quality of the algorithms is tested in a computational experience with real instances from the metropolitan area of Barcelona, as well as a reduced set of set covering instances from the literature.",
"Due to the rapid depletion of landfill space and the time-consuming process for siting and building new municipal incinerators, solid waste management strategies have to be reorganized in light of the success of recycling, recovery and reuse of secondary materials. Effective planning of solid waste recycling programs, however, is currently a substantial challenge in many solid waste management systems. One of such efforts is how to effectively allocate the recycling drop-off stations with appropriate size in the solid waste collection network to maximize the recycling achievement with minimum expense. This paper illustrates a new approach with a view to optimizing siting and routing aspects using a fuzzy multiobjective nonlinear integer programming model as a means that is particularly solved by a genetic algorithm. The case study, based on one of the administrative districts in the city of Kaohsiung in Taiwan, presents the application potential of such a planning methodology.",
"The efficient organization of waste collection systems based on bins located along the streets involves the solution of several tactical optimization problems. In particular, the bin configuration and sizing at each collection site as well as the service frequency over a given planning horizon have to be decided. In this context, a higher service frequency leads to higher routing costs, but at the same time less or smaller bins are required, which leads to lower bin allocation investment costs. The bins used have different types and different costs and there is a limit on the space at each collection site as well as a limit on the total number of bins of each type that can be used. In this paper we consider the problem of designing a collection system consisting of the combination of a vehicle routing and a bin allocation problem in which the trade-off between the associated costs has to be considered. The solution approach combines an effective variable neighborhood search metaheuristic for the routing part with a mixed integer linear programming-based exact method for the solution of the bin allocation part. We propose hierarchical solution procedures where the two decision problems are solved in sequence, as well as an integrated approach where the two problems are considered simultaneously. Extensive computational testing on synthetic and real-world instances with hundreds of collection sites shows the benefit of the integrated approaches with respect to the hierarchical ones."
]
}
|
1906.10689
|
2951269809
|
This article describes the application of soft computing methods for solving the problem of locating garbage accumulation points in urban scenarios. This is a relevant problem in modern smart cities, in order to reduce negative environmental and social impacts in the waste management process, and also to optimize the available budget from the city administration to install waste bins. A specific problem model is presented, which accounts for reducing the investment costs, enhancing the number of citizens served by the installed bins, and the accessibility to the system. A family of single- and multi-objective heuristics based on the PageRank method and two multi-objective evolutionary algorithms are proposed. Experimental evaluation performed on real scenarios on the cities of Montevideo (Uruguay) and Bahia Blanca (Argentina) demonstrates the effectiveness of the proposed approaches. The methods allow computing plannings with different trade-off between the problem objectives. The computed results improve over the current planning in Montevideo and provide a reasonable budget cost and quality of service for Bahia Blanca.
|
A similar problem was addressed by @cite_27 , using a constructive heuristic for solving large instances in Nardò, Italy, that cannot be properly handled by exact methods implemented in CPLEX. Later, the heuristic was modified to bound posterior routing costs @cite_13 , e.g., by not allowing the installation at the same GAP of bins that require different types of collecting vehicles. Di Felice @cite_29 proposed a two-phase heuristic for the problem, for a real case in L'Aquila, Italy. The first phase solved the location of the GAPs through a constructive heuristic, while the second determined the quantity and size of the bins needed at each GAP, according to the number of waste generators served by that GAP. A similar heuristic was applied by Boskovic and Jovicic @cite_31 for Kragujevac, Serbia, using the ArcGIS Network Analyst. Since bin location is a problem that relies on spatial information, other authors have used Geographic Information Systems to gather and analyze data. For example, @cite_33 used a constructive heuristic to establish the GAPs sequentially, according to some priorities, in order to cover the study area in Dundas, Canada.
|
{
"cite_N": [
"@cite_33",
"@cite_29",
"@cite_27",
"@cite_31",
"@cite_13"
],
"mid": [
"2133184147",
"2039829031",
"1994430056",
"2179645433",
"2002902304"
],
"abstract": [
"A location-allocation model contained within a geographic informa tion systems (GIS) software package was used to design a recycling depot scheme for a community of 22,000 people. Considered a less expensive alternative to curb side recycling, the depot scheme would receive a variety of recyclable materials from the public on a voluntary basis. Depot sites were located using a model that maximized the coverage of a depot site, with constraints based on projected \"re cycler behavior.\" Shopping centers, municipal parking lots, and roadside sites were candidate locations for recycling depots in the two modeled cases. A GIS-based approach is shown to be useful for determining the number and location of material recycling depots for use within an integrated municipal solid waste management system.",
"Abstract The paper reports about a pilot study that gives a numerical solution to the solid waste accumulation problem (SWAP). The purpose is to show both a simple and effective way to implement the theory using the technology of the Spatial DataBase Management Systems (SDBMSs), and the versatility of the proposed solution from the point of view of those responsible for the MSW management who, in fact, are offered a dual-mode display of the results: one tabular (typical of relational databases) and the other based on geographical maps, the latter particularly useful to highlight the spatial component of the data of the SWAP.",
"Abstract Urban waste management is becoming an increasingly complex task, absorbing a huge amount of resources, and having a major environmental impact. The design of a waste management system consists in various activities, and one of these is related to the location of waste collection sites. In this paper, we propose an integer programming model that helps decision makers in choosing the sites where to locate the unsorted waste collection bins in a residential town, as well as the capacities of the bins to be located at each collection site. This model helps in assessing tactical decisions through constraints that force each collection area to be capacitated enough to fit the expected waste to be directed to that area, while taking into account Quality of Service constraints from the citizens’ point of view. Moreover, we propose an effective constructive heuristic approach whose aim is to provide a good solution quality in an extremely reduced computational time. Computational results on data related to the city of Nardo, in the south of Italy, show that both exact and heuristic approaches provide consistently better solutions than that currently implemented, resulting in a lower number of activated collection sites, and a lower number of bins to be used.",
"This paper concerns the development of a methodology aimed at determining the optimal number of waste bins as well optimizing the location of collection points. The methodology was based on a geographic information system, which handled different sets of information, such as street directions, spatial location of objects and number of inhabitants, location of waste bins, and radius of their coverage. The study was conducted in a district in the central area of the city of Kragujevac. Due to a lack of information about the existing situation, all necessary data was collected by fieldwork and by using GPS equipment. By using the developed methodology, the results indicated a reduction of 24% in the number of collection points and 33.5% in the number of waste bins, without reducing the quality of the provided services. It has led to cost and time savings for waste collection and environmental benefits. All users of the services were covered within a 75-m radius, and the usage of bins is more efficient. According to the reduction in the number of waste bins, a total saving of €26,000 may be achieved. In addition, the time for waste collection was reduced, resulting in a €1700 saving per year in fuel costs, as well as 4.5 tons of emitted CO2 into the atmosphere.",
"Abstract In this paper, we study two decisional problems arising when planning the collection of solid waste, namely the location of collection sites (together with bin allocation) and the zoning of the service territory, and we assess the potential impact that an efficient location has on the subsequent zoning phase. We first propose both an exact and a heuristic approach to locate the unsorted waste collection bins in a residential town, and to decide the capacities and characteristics of the bins to be located at each collection site. A peculiar aspect we consider is that of taking into account the compatibility between the different types of bins when allocating them to collection areas. Moreover, we propose a fast and effective heuristic approach to identify homogeneous zones that can be served by a single collection vehicle. Computational results on data related to a real-life instance show that an efficient location is fundamental in achieving consistent monetary savings, as well as a reduced environmental impact. These reductions are the result of one vehicle less needed to perform the waste collection operations, and an overall traveled distance reduced by about 25% on the average."
]
}
|
1906.10794
|
2949661012
|
We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy begs the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating @math goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in @math ) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents---where polytime BIC reductions are known---we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
|
As described above, BIC black-box reductions are known for a rich class of welfare maximization problems @cite_14 @cite_5 @cite_10 @cite_0 . In the prior-free setting, Babaioff, Lavi, and Pavlov also show that for a class of single-minded combinatorial auction problems, one can achieve DSIC in a black-box way by losing a factor that is logarithmic in the ratio between the largest and smallest possible agent values @cite_13 . Dughmi and Roughgarden show a black-box reduction for FPTAS algorithms that also applies to a broad range of multi-dimensional welfare maximization problems @cite_3 . There is also a significant line of work studying general methods for converting certain types of algorithms into IC mechanisms @cite_8 @cite_7 .
|
{
"cite_N": [
"@cite_13",
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_10"
],
"mid": [
"2142270691",
"2112139342",
"2103751307",
"1982492546",
"2139692878",
"",
"2079025269",
""
],
"abstract": [
"In this article, we are interested in general techniques for designing mechanisms that approximate the social welfare in the presence of selfish rational behavior. We demonstrate our results in the setting of Combinatorial Auctions (CA). Our first result is a general deterministic technique to decouple the algorithmic allocation problem from the strategic aspects, by a procedure that converts any algorithm to a dominant-strategy ascending mechanism. This technique works for any single value domain, in which each agent has the same value for each desired outcome, and this value is the only private information. In particular, for “single-value CAs”, where each player desires any one of several different bundles but has the same value for each of them, our technique converts any approximation algorithm to a dominant strategy mechanism that almost preserves the original approximation ratio. Our second result provides the first computationally efficient deterministic mechanism for the case of single-value multi-minded bidders (with private value and private desired bundles). The mechanism achieves an approximation to the social welfare which is close to the best possible in polynomial time (unless P=NP). This mechanism is an algorithmic implementation in undominated strategies, a notion that we define and justify, and is of independent interest.",
"The optimal allocation of resources in complex environments—like allocation of dynamic wireless spectrum, cloud computing services, and Internet advertising—is computationally challenging even given the true preferences of the participants. In the theory and practice of optimization in complex environments, a wide variety of special and general purpose algorithms have been developed; these algorithms produce outcomes that are satisfactory but not generally optimal or incentive compatible. This paper develops a very simple approach for converting any, potentially non-optimal, algorithm for optimization given the true participant preferences, into a Bayesian incentive compatible mechanism that weakly improves social welfare and revenue. (JEL D82, H82, L82)",
"We give a general technique to obtain approximation mechanisms that are truthful in expectation. We show that for packing domains, any α-approximation algorithm that also bounds the integrality gap of the LP relaxation of the problem by α can be used to construct an α-approximation mechanism that is truthful in expectation. This immediately yields a variety of new and significantly improved results for various problem domains and furthermore, yields truthful (in expectation) mechanisms with guarantees that match the best known approximation guarantees when truthfulness is not required. In particular, we obtain the first truthful mechanisms with approximation guarantees for a variety of multi-parameter domains. We obtain truthful (in expectation) mechanisms achieving approximation guarantees of O(√m) for combinatorial auctions (CAs), (1 + ε) for multi-unit CAs with B = Ω(log m) copies of each item, and 2 for multi-parameter knapsack problems (multi-unit auctions). Our construction is based on considering an LP relaxation of the problem and using the classic VCG mechanism by W. Vickrey (1961), E. Clarke (1971) and T. Groves (1973) to obtain a truthful mechanism in this fractional domain. We argue that the (fractional) optimal solution scaled down by α, where α is the integrality gap of the problem, can be represented as a convex combination of integer solutions, and by viewing this convex combination as specifying a probability distribution over integer solutions, we get a randomized, truthful-in-expectation mechanism. Our construction can be seen as a way of exploiting VCG in a computationally tractable way even when the underlying social-welfare maximization problem is NP-hard.",
"This paper deals with the design of efficiently computable incentive compatible, or truthful, mechanisms for combinatorial optimization problems with multi-parameter agents. We focus on approximation algorithms for NP-hard mechanism design problems. These algorithms need to satisfy certain monotonicity properties to ensure truthfulness. Since most of the known approximation techniques do not fulfill these properties, we study alternative techniques. Our first contribution is a quite general method to transform a pseudopolynomial algorithm into a monotone FPTAS. This can be applied to various problems like, e.g., knapsack, constrained shortest path, or job scheduling with deadlines. For example, the monotone FPTAS for the knapsack problem gives a very efficient, truthful mechanism for single-minded multi-unit auctions. The best previous result for such auctions was a 2-approximation. In addition, we present a monotone PTAS for the generalized assignment problem with any bounded number of parameters per agent. The most efficient way to solve packing integer programs (PIPs) is LP-based randomized rounding, which also is in general not monotone. We show that primal-dual greedy algorithms achieve almost the same approximation ratios for PIPs as randomized rounding. The advantage is that these algorithms are inherently monotone. This way, we can significantly improve the approximation ratios of truthful mechanisms for various fundamental mechanism design problems like single-minded combinatorial auctions (CAs), unsplittable flow routing and multicast routing. Our approximation algorithms can also be used for the winner determination in CAs with general bidders specifying their bids through an oracle.",
"We give the first black-box reduction from approximation algorithms to truthful approximation mechanisms for a non-trivial class of multi-parameter problems. Specifically, we prove that every welfare-maximization problem that admits a fully polynomial-time approximation scheme (FPTAS) and can be encoded as a packing problem also admits a truthful-in-expectation randomized mechanism that is an FPTAS. Our reduction makes novel use of smoothed analysis by employing small perturbations as a tool in algorithmic mechanism design. We develop a “duality” between linear perturbations of the objective function of an optimization problem and of its feasible set, and we use the “primal” and “dual” viewpoints to prove the running time bound and the truthfulness guarantee, respectively, for our mechanism.",
"",
"Optimally allocating cellphone spectrum, advertisements on the Internet, and landing slots at airports is computationally intractable. When the participants may strategize, not only must the optimizer deal with complex feasibility constraints but also with complex incentive constraints. We give a very simple method for constructing a Bayesian incentive compatible mechanism from any, potentially non-optimal, algorithm that maps agents' reports to an allocation. The expected welfare of the mechanism is, approximately, at least that of the algorithm on the agents' true preferences.",
""
]
}
|
1906.10794
|
2949661012
|
We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy begs the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating @math goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in @math ) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents---where polytime BIC reductions are known---we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
|
The first impossibility result for black-box reductions in mechanism design was due to Chawla, Immorlica, and Lucier @cite_12 , who showed that no black-box reduction that guarantees dominant strategy incentive compatibility (DSIC) can approximately preserve the worst-case approximation factor of a given algorithmic oracle. Relative to that result, we relax the performance evaluation from worst-case welfare approximation to expected welfare approximation, and strengthen the incentive compatibility constraint from DSIC to MIDR. In addition to this result, Chawla, Immorlica, and Lucier also showed that there is no black-box reduction for BIC mechanisms with the objective of minimizing the makespan for single-parameter agents @cite_12 . They left as an open question whether there exists a black-box reduction for DSIC mechanisms with the objective of maximizing expected welfare for single-parameter agents. We show that the answer is no for the stronger incentive property of MIDR.
|
{
"cite_N": [
"@cite_12"
],
"mid": [
"2113939008"
],
"abstract": [
"We consider the problem of converting an arbitrary approximation algorithm for a single-parameter optimization problem into a computationally efficient truthful mechanism. We ask for reductions that are black-box, meaning that they require only oracle access to the given algorithm and in particular do not require explicit knowledge of the problem constraints. Such a reduction is known to be possible, for example, for the social welfare objective when the goal is to achieve Bayesian truthfulness and preserve social welfare in expectation. We show that a black-box reduction for the social welfare objective is not possible if the resulting mechanism is required to be truthful in expectation and to preserve the worst-case approximation ratio of the algorithm to within a subpolynomial factor. Further, we prove that for other objectives such as makespan, no black-box reduction is possible even if we only require Bayesian truthfulness and an average-case performance guarantee."
]
}
|
1906.10794
|
2949661012
|
We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy begs the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating @math goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in @math ) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents---where polytime BIC reductions are known---we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
|
We focus primarily on black-box reductions with priors. For the setting without priors, Pass and Seth @cite_9 build upon the impossibility result of Chawla, Immorlica, and Lucier to show that even if the transformation is given access to the underlying problem's feasibility constraint, black-box transformations are still impossible under standard cryptographic assumptions. Suksompong @cite_11 studies prior-free black-box transformations in downward-closed single-parameter environments, showing that a constant fraction of the welfare can be preserved at every input when agents' valuations take on a constant number of well-separated values, but that this is impossible for general valuations.
|
{
"cite_N": [
"@cite_9",
"@cite_11"
],
"mid": [
"310430555",
"2963986735"
],
"abstract": [
"A fundamental question in algorithmic mechanism design is whether any approximation algorithm for a single-parameter social-welfare maximization problem can be turned into a dominant-strategy truthful mechanism for the same problem (while preserving the approximation ratio up to a constant factor). A particularly desirable type of transformations—called black-box transformations—achieve the above goal by only accessing the approximation algorithm as a black box.",
"Black-box transformations have been extensively studied in algorithmic mechanism design as a generic tool for converting algorithms into truthful mechanisms without degrading the approximation guarantees. While such transformations have been designed for a variety of settings, showed that no fully general black-box transformation exists for single-parameter environments. In this paper, we investigate the potentials and limits of black-box transformations in the prior-free (i.e., non-Bayesian) setting in downward-closed single-parameter environments, a large and important class of environments in mechanism design. On the positive side, we show that such a transformation can preserve a constant fraction of the welfare at every input if the private valuations of the agents take on a constant number of values that are far apart, while on the negative side, we show that this task is not possible for general private valuations."
]
}
|
1906.10794
|
2949661012
|
We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy begs the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating @math goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in @math ) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents---where polytime BIC reductions are known---we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
|
Our negative result for BIC black-box reductions for an additive bidder applies in a setting with a downward-closed feasibility constraint on the allocations. This is closely related to models of agent valuations that are additive subject to a downward-closed constraint, such as @math -additive valuations or additivity subject to a matroid constraint. These models have recently attracted interest in the literature on revenue maximization; for example, it is known that simple pricing methods can achieve a constant fraction of the optimal revenue in any such environment @cite_4 . Our impossibility result shows that even for the conceptually simpler goal of maximizing expected welfare, no general reduction is possible when the downward-closed constraint is not known to the transformation.
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2951767300"
],
"abstract": [
"We study the revenue maximization problem of a seller with n heterogeneous items for sale to a single buyer whose valuation function for sets of items is unknown and drawn from some distribution D. We show that if D is a distribution over subadditive valuations with independent items, then the better of pricing each item separately or pricing only the grand bundle achieves a constant-factor approximation to the revenue of the optimal mechanism. This includes buyers who are k-demand, additive up to a matroid constraint, or additive up to constraints of any downwards-closed set system (and whose values for the individual items are sampled independently), as well as buyers who are fractionally subadditive with item multipliers drawn independently. Our proof makes use of the core-tail decomposition framework developed in prior work showing similar results for the significantly simpler class of additive buyers [LY13, BILW14]. In the second part of the paper, we develop a connection between approximately optimal simple mechanisms and approximate revenue monotonicity with respect to buyers' valuations. Revenue non-monotonicity is the phenomenon that sometimes strictly increasing buyers' values for every set can strictly decrease the revenue of the optimal mechanism [HR12]. Using our main result, we derive a bound on how bad this degradation can be (and dub such a bound a proof of approximate revenue monotonicity); we further show that better bounds on approximate monotonicity imply a better analysis of our simple mechanisms."
]
}
|
1906.10725
|
2954148993
|
Manifold learning techniques for dynamical systems and time series have shown their utility for a broad spectrum of applications in recent years. While these methods are effective at learning a low-dimensional representation, they are often insufficient for visualizing the global and local structure of the data. In this paper, we present DIG (Dynamical Information Geometry), a visualization method for multivariate time series data that extracts an information geometry from a diffusion framework. Specifically, we implement a novel group of distances in the context of diffusion operators, which may be useful to reveal structure in the data that may not be accessible by the commonly used diffusion distances. Finally, we present a case study applying our visualization tool to EEG data to visualize sleep stages.
|
Many dimensionality reduction methods exist, some of which have been used for visualization @cite_4 @cite_15 @cite_13 @cite_3 @cite_19 @cite_20 @cite_26 . Principal components analysis (PCA) @cite_20 and t-distributed stochastic neighbor embedding (t-SNE) @cite_4 are two of the most commonly used methods for visualization. However, these and other methods fall short in many applications. First, they tend to favor one aspect of the data at the expense of others. For example, when used for visualization, PCA typically shows the large-scale global structure of the data while neglecting the finer, local structure. In contrast, t-SNE is explicitly designed to focus on the local structure and often distorts the global structure, potentially leading to misinterpretations @cite_1 . Second, PCA and t-SNE fail to explicitly denoise the data for visualization. Thus in noisy settings, the true structure of the data can be obscured. In addition, none of these methods is designed to exploit the structure present in dynamical systems.
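The global-vs-local trade-off above can be seen directly by embedding the same data both ways. A minimal sketch, assuming scikit-learn is available; the synthetic clusters, perplexity, and seed are illustrative choices, not taken from the paper:

```python
# Contrast PCA (linear, global variance) with t-SNE (nonlinear, local
# neighborhoods) on synthetic high-dimensional data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated Gaussian clusters in 50 dimensions.
X = np.vstack([rng.normal(0.0, 1.0, (100, 50)),
               rng.normal(5.0, 1.0, (100, 50))])

# PCA preserves the large-scale separation between the clusters.
X_pca = PCA(n_components=2).fit_transform(X)

# t-SNE preserves local neighborhoods; inter-cluster distances in the
# embedding are not meaningful at the global scale.
X_tsne = TSNE(n_components=2, perplexity=30,
              init="pca", random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)
```

In the t-SNE output, the apparent distance between the two clusters carries little information, which is exactly the kind of global distortion that can lead to misinterpretation.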
|
{
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_1",
"@cite_3",
"@cite_19",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"",
"2187089797",
"2542768043",
"2053186076",
"",
"2001141328",
"2902652978",
"1602659231"
],
"abstract": [
"",
"We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.",
"",
"Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in",
"",
"Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.",
"A benchmarking analysis on single-cell RNA-seq and mass cytometry data reveals the best-performing technique for dimensionality reduction.",
"I. INTRODUCTION AND FOUNDATIONS. 1. Introduction and Foundations. II. VECTOR SPACES AND LINEAR ALGEBRA. 2. Signal Spaces. 3. Representation and Approximation in Vector Spaces. 4. Linear Operators and Matrix Inverses. 5. Some Important Matrix Factorizations. 6. Eigenvalues and Eigenvectors. 7. The Singular Value Decomposition. 8. Some Special Matrices and Their Applications. 9. Kronecker Products and the Vec Operator. III. DETECTION, ESTIMATION, AND OPTIMAL FILTERING. 10. Introduction to Detection and Estimation, and Mathematical Notation. 11. Detection Theory. 12. Estimation Theory. 13. The Kalman Filter. IV. ITERATIVE AND RECURSIVE METHODS IN SIGNAL PROCESSING. 14. Basic Concepts and Methods of Iterative Algorithms. 15. Iteration by Composition of Mappings. 16. Other Iterative Algorithms. 17. The EM Algorithm in Signal Processing. V. METHODS OF OPTIMIZATION. 18. Theory of Constrained Optimization. 19. Shortest-Path Algorithms and Dynamic Programming. 20. Linear Programming. APPENDIXES. A. Basic Concepts and Definitions. B. Completing the Square. C. Basic Matrix Concepts. D. Random Processes. E. Derivatives and Gradients. F. Conditional Expectations of Multinomial and Poisson r.v.s."
]
}
|
1906.10725
|
2954148993
|
Manifold learning techniques for dynamical systems and time series have shown their utility for a broad spectrum of applications in recent years. While these methods are effective at learning a low-dimensional representation, they are often insufficient for visualizing the global and local structure of the data. In this paper, we present DIG (Dynamical Information Geometry), a visualization method for multivariate time series data that extracts an information geometry from a diffusion framework. Specifically, we implement a novel group of distances in the context of diffusion operators, which may be useful to reveal structure in the data that may not be accessible by the commonly used diffusion distances. Finally, we present a case study applying our visualization tool to EEG data to visualize sleep stages.
|
DM has been extended to dynamical systems previously @cite_30 @cite_6 @cite_0 @cite_33 . In particular, Talmon and Coifman @cite_6 @cite_0 introduced an approach called empirical intrinsic geometry (EIG) that builds a diffusion geometry using a noise-resilient distance. The resulting embedding learned from this geometry is thus noise-free and captures the true structure of the underlying process. However, EIG and other extensions of DM to dynamical systems are still not optimized for visualization, as the learned structure of the data is encoded in higher dimensions. In this work, we introduce a new visualization method, DIG, that is well-suited for visualizing high-dimensional dynamical processes by preserving an information distance between the diffusion probabilities constructed from a noise-resilient distance. This results in a visualization that represents the true structure of the underlying dynamical process.
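For context, the basic diffusion-maps construction that these methods build on can be sketched as follows. This uses plain Euclidean affinities (EIG would substitute its noise-resilient distance), and the kernel bandwidth, data, and function name are illustrative assumptions:

```python
# Sketch of the standard diffusion-maps embedding: Gaussian kernel,
# row-stochastic normalization, then spectral coordinates.
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    # Pairwise squared Euclidean distances.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)                  # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)   # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)         # sort eigenvalues descending
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1); scale each
    # remaining coordinate by its eigenvalue raised to the diffusion time t.
    return vecs[:, 1:n_coords + 1] * (vals[1:n_coords + 1] ** t)

# Noisy circle: a 1-D manifold embedded in 2-D.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 120)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.05, (120, 2))
Y = diffusion_map(X, eps=0.5)
print(Y.shape)
```

Euclidean distances between the rows of `Y` approximate diffusion distances on the data graph; the distances DIG preserves are information distances between the diffusion probabilities themselves, computed from a noise-resilient metric rather than the raw Euclidean one used here.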
|
{
"cite_N": [
"@cite_30",
"@cite_0",
"@cite_33",
"@cite_6"
],
"mid": [
"1985606345",
"1988147465",
"2888816943",
"2023717942"
],
"abstract": [
"Dimensionality reduction in multivariate time series analysis has broad applications, ranging from financial data analysis to biomedical research. However, high levels of ambient noise and various interferences result in nonstationary signals, which may lead to inefficient performance of conventional methods. In this paper, we propose a nonlinear dimensionality reduction framework using diffusion maps on a learned statistical manifold, which gives rise to the construction of a low-dimensional representation of the high-dimensional nonstationary time series. We show that diffusion maps, with affinity kernels based on the Kullback-Leibler divergence between the local statistics of samples, allow for efficient approximation of pairwise geodesic distances. To construct the statistical manifold, we estimate time-evolving parametric distributions by designing a family of Bayesian generative models. The proposed framework can be applied to problems in which the time-evolving distributions (of temporally localized data), rather than the samples themselves, are driven by a low-dimensional underlying process. We provide efficient parameter estimation and dimensionality reduction methodologies, and apply them to two applications: music analysis and epileptic-seizure prediction. Highlights: We build a class of Bayesian models to learn the evolving statistics of time series. We construct diffusion maps based on the time-evolving distributional information. The proposed method recovers the underlying process controlling the time series. The proposed framework is applied to the analysis of music and icEEG recordings.",
"In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, the statistical models are not required as priors; hence, EIG may be applied to a wide range of real signals without existing definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.",
"This paper presents a data-driven approach for analyzing multivariate time series. It relies on the hypothesis that highdimensional data often lie on a low-dimensional manifold whose geometry may be revealed using manifold learning techniques. We define a notion of distance between multivariate time series and use it to determine a low-dimensional embedding capable of describing the statistics of the signals at hand using just a few parameters. We illustrate our method on two simulated examples and two real datasets containing electroencephalographic recordings (EEG).",
"In a broad range of natural and real-world dynamical systems, measured signals are controlled by underlying processes or drivers. As a result, these signals exhibit highly redundant representations, while their temporal evolution can often be compactly described by dynamical processes on a low-dimensional manifold. In this paper, we propose a graph-based method for revealing the low-dimensional manifold and inferring the processes. This method provides intrinsic models for measured signals, which are noise resilient and invariant under different random measurements and instrumental modalities. Such intrinsic models may enable mathematical calibration of complex measurements and build an empirical geometry driven by the observations, which is especially suitable for applications without a priori knowledge of models and solutions. We exploit the temporal dynamics and natural small perturbations of the signals to explore the local tangent spaces of the low-dimensional manifold of empirical probability densities. This information is used to define an intrinsic Riemannian metric, which in turn gives rise to the construction of a graph that represents the desired low-dimensional manifold. Such a construction is equivalent to an inverse problem, which is formulated as a nonlinear differential equation and is solved empirically through eigenvectors of an appropriate Laplace operator. We examine our method on two nonlinear filtering applications: a nonlinear and non-Gaussian tracking problem as well as a non-stationary hidden Markov chain scheme. The experimental results demonstrate the power of our theory by extracting the underlying processes, which were measured through different nonlinear instrumental conditions, in an entirely data-driven nonparametric way."
]
}
|
1906.10725
|
2954148993
|
Manifold learning techniques for dynamical systems and time series have shown their utility for a broad spectrum of applications in recent years. While these methods are effective at learning a low-dimensional representation, they are often insufficient for visualizing the global and local structure of the data. In this paper, we present DIG (Dynamical Information Geometry), a visualization method for multivariate time series data that extracts an information geometry from a diffusion framework. Specifically, we implement a novel group of distances in the context of diffusion operators, which may be useful to reveal structure in the data that may not be accessible by the commonly used diffusion distances. Finally, we present a case study applying our visualization tool to EEG data to visualize sleep stages.
|
EEG signals have been embedded in low-dimensional representations for detecting emotional states @cite_29 , pre-seizure states @cite_17 @cite_22 , and sleep dynamics @cite_33 . In the latter, DM is implemented by building the affinity matrix using both the cross-spectrum distance and the covariance-matrix distance as similarity measures between multivariate time series. Empirical intrinsic geometry (EIG) has also been applied to data that include both respiratory and EEG signals @cite_27 .
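The diffusion-maps construction mentioned above can be sketched in a few lines: starting from a precomputed pairwise-distance matrix between signals, a Gaussian kernel gives an affinity matrix, which is row-normalized into a Markov matrix whose leading non-trivial eigenvectors provide the embedding. This is a minimal illustration only; the toy Euclidean distances stand in for the cross-spectrum or covariance-matrix distances of the cited works, and the bandwidth `eps` is an assumed parameter.

```python
import numpy as np

def diffusion_map(dist, eps=1.0, n_components=2):
    """Diffusion-map embedding from a pairwise distance matrix."""
    W = np.exp(-dist**2 / eps)              # Gaussian affinity matrix
    P = W / W.sum(axis=1, keepdims=True)    # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    # P is similar to a symmetric matrix, so its spectrum is real.
    vals, vecs = vals.real, vecs.real
    order = np.argsort(-vals)
    vals, vecs = vals[order], vecs[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1);
    # scale the next eigenvectors by their eigenvalues.
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]

# Toy data: 4 short multivariate "signals", Euclidean pairwise distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
emb = diffusion_map(D)
print(emb.shape)  # (4, 2)
```

In practice the Euclidean distance in `D` would be replaced by a signal-aware distance (e.g., between covariance matrices), which is the main design choice in the cited methods.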
|
{
"cite_N": [
"@cite_22",
"@cite_33",
"@cite_29",
"@cite_27",
"@cite_17"
],
"mid": [
"",
"2888816943",
"2139564752",
"2025020821",
"2403175639"
],
"abstract": [
"",
"This paper presents a data-driven approach for analyzing multivariate time series. It relies on the hypothesis that highdimensional data often lie on a low-dimensional manifold whose geometry may be revealed using manifold learning techniques. We define a notion of distance between multivariate time series and use it to determine a low-dimensional embedding capable of describing the statistics of the signals at hand using just a few parameters. We illustrate our method on two simulated examples and two real datasets containing electroencephalographic recordings (EEG).",
"Recently, emotion classification from EEG data has attracted much attention with the rapid development of dry electrode techniques, machine learning algorithms, and various real-world applications of brain-computer interface for normal people. Until now, however, researchers had little understanding of the details of relationship between different emotional states and various EEG features. To improve the accuracy of EEG-based emotion classification and visualize the changes of emotional states with time, this paper systematically compares three kinds of existing EEG features for emotion classification, introduces an efficient feature smoothing method for removing the noise unrelated to emotion task, and proposes a simple approach to tracking the trajectory of emotion changes with manifold learning. To examine the effectiveness of these methods introduced in this paper, we design a movie induction experiment that spontaneously leads subjects to real emotional states and collect an EEG data set of six subjects. From experimental results on our EEG data set, we found that (a) power spectrum feature is superior to other two kinds of features; (b) a linear dynamic system based feature smoothing method can significantly improve emotion classification accuracy; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning.",
"In this paper, two modern adaptive signal processing techniques, empirical intrinsic geometry and synchrosqueezing transform, are applied to quantify different dynamical features of the respiratory and electroencephalographic signals. We show that the proposed features are theoretically rigorously supported, as well as capture the sleep information hidden inside the signals. The features are used as input to multiclass support vector machines with the radial basis function to automatically classify sleep stages. The effectiveness of the classification based on the proposed features is shown to be comparable to human expert classification—the proposed classification of awake, REM, N1, N2, and N3 sleeping stages based on the respiratory signal (resp. respiratory and EEG signals) has the overall accuracy @math (resp. @math ) in the relatively normal subject group. In addition, by examining the combination of the respiratory signal with the electroencephalographic signal, we conclude that the respiratory signal consists of ample sleep information, which supplements to the information stored in the electroencephalographic signal.",
"We study the inference of latent intrinsic variables of dynamical systems from output signal measurements. The primary focus is the construction of an intrinsic distance between signal measurements, which is independent of the measurement device. This distance enables us to infer the latent intrinsic variables through the solution of an eigenvector problem with a Laplace operator based on a kernel. The signal geometry and its dynamics are represented with nonlinear observers. An analysis of the properties of the observers that allow for accurate recovery of the latent variables is given, and a way to test whether these properties are satisfied from the measurements is proposed. Scattering and window Fourier transform observers are compared. Applications are shown on simulated data, and on real intracranial Electroencephalography (EEG) signals of epileptic patients recorded prior to seizures."
]
}
|
1906.10897
|
2905551908
|
This paper introduces a novel deep learning based method, named bridge neural network (BNN) to dig the potential relationship between two given data sources task by task. The proposed approach employs two convolutional neural networks that project the two data sources into a feature space to learn the desired common representation required by the specific task. The training objective with artificial negative samples is introduced with the ability of mini-batch training and it's asymptotically equivalent to maximizing the total correlation of the two data sources, which is verified by the theoretical analysis. The experiments on the tasks, including pair matching, canonical correlation analysis, transfer learning, and reconstruction demonstrate the state-of-the-art performance of BNN, which may provide new insights into the aspect of common representation learning.
|
Canonical Correlation Analysis (CCA) @cite_14 is often used to build common representations. It is a general procedure for investigating the relationships between two sets of variables: it computes a linear projection of paired data into a common space that maximizes their linear correlation. It plays a significant role in many fields, including biology and neurology @cite_9 , natural language processing @cite_19 , speech processing @cite_4 , and computer vision tasks such as action recognition @cite_15 and linking text and images @cite_24 . CCA is also a basic method in multi-view learning; see @cite_18 @cite_23 for details.
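To make the linear-projection view of CCA concrete, here is a minimal classical-CCA sketch: whiten each view with a Cholesky factor of its covariance, then take the SVD of the whitened cross-covariance; the singular values are the canonical correlations. The toy data with a shared latent variable and the small regularizer `reg` are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def cca(X, Y, n_components=1, reg=1e-6):
    """Classical CCA: projections A, B maximizing corr(X @ A, Y @ B)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])   # view-1 covariance
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])   # view-2 covariance
    Cxy = X.T @ Y / n                              # cross-covariance
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    # Whitened cross-covariance: Lx^{-1} Cxy Ly^{-T}
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    A = np.linalg.solve(Lx.T, U[:, :n_components])
    B = np.linalg.solve(Ly.T, Vt.T[:, :n_components])
    return A, B, s[:n_components]

# Toy example: two views driven by one shared latent variable.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(200, 3))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(200, 4))
A, B, corr = cca(X, Y)
```

Because both views share the latent `z` with small noise, the leading canonical correlation comes out close to 1.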
|
{
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_9",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_15"
],
"mid": [
"1670132599",
"2025341678",
"2063036810",
"2166403493",
"2508827254",
"2129625650",
"",
"2111411921"
],
"abstract": [
"In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning.",
"Concepts of correlation and regression may be applied not only to ordinary one-dimensional variates but also to variates of two or more dimensions. Marksmen side by side firing simultaneous shots at targets, so that the deviations are in part due to independent individual errors and in part to common causes such as wind, provide a familiar introduction to the theory of correlation; but only the correlation of the horizontal components is ordinarily discussed, whereas the complex consisting of horizontal and vertical deviations may be even more interesting. The wind at two places may be compared, using both components of the velocity in each place. A fluctuating vector is thus matched at each moment with another fluctuating vector. The study of individual differences in mental and physical traits calls for a detailed study of the relations between sets of correlated variates. For example the scores on a number of mental tests may be compared with physical measurements on the same persons. The questions then arise of determining the number and nature of the independent relations of mind and body shown by these data to exist, and of extracting from the multiplicity of correlations in the system suitable characterizations of these independent relations. As another example, the inheritance of intelligence in rats might be studied by applying not one but s different mental tests to N mothers and to a daughter of each",
"Canonical correlation analysis (CCA) and kernel CCA can be used for unsupervised learning of acoustic features when a second view (e.g., articulatory measurements) is available for some training data, and such projections have been used to improve phonetic frame classification. Here we study the behavior of CCA-based acoustic features on the task of phonetic recognition, and investigate to what extent they are speaker-independent or domain-independent. The acoustic features are learned using data drawn from the University of Wisconsin X-ray Microbeam Database (XRMB). The features are evaluated within and across speakers on XRMB data, as well as on out-of-domain TIMIT and MOCHA-TIMIT data. Experimental results show consistent improvement with the learned acoustic features over baseline MFCCs and PCA projections. In both speaker-dependent and cross-speaker experiments, phonetic error rates are improved by 4-9 absolute (10-23 relative) using CCA-based features over baseline MFCCs. In cross-domain phonetic recognition (training on XRMB and testing on MOCHA or TIMIT), the learned projections provide smaller improvements.",
"Abstract We introduce a new unsupervised fMRI analysis method based on kernel canonical correlation analysis which differs from the class of supervised learning methods (e.g., the support vector machine) that are increasingly being employed in fMRI data analysis. Whereas SVM associates properties of the imaging data with simple specific categorical labels (e.g., − 1, 1 indicating experimental conditions 1 and 2), KCCA replaces these simple labels with a label vector for each stimulus containing details of the features of that stimulus. We have compared KCCA and SVM analyses of an fMRI data set involving responses to emotionally salient stimuli. This involved first training the algorithm (SVM, KCCA) on a subset of fMRI data and the corresponding labels label vectors (of pleasant and unpleasant), then testing the algorithms on data withheld from the original training phase. The classification accuracies of SVM and KCCA proved to be very similar. However, the most important result arising form this study is the KCCA is able to extract some regions that SVM also identifies as the most important in task discrimination and these are located manly in the visual cortex. The results of the KCCA were achieved blind to the categorical task labels. Instead, the stimulus category is effectively derived from the vector of image features.",
"Linking two data sources is a basic building block in numerous computer vision problems. Canonical Correlation Analysis (CCA) achieves this by utilizing a linear optimizer in order to maximize the correlation between the two views. Recent work makes use of non-linear models, including deep learning techniques, that optimize the CCA loss in some feature space. In this paper, we introduce a novel, bi-directional neural network architecture for the task of matching vectors from two data sources. Our approach employs two tied neural network channels that project the two views into a common, maximally correlated space using the Euclidean loss. We show a direct link between the correlation-based loss and Euclidean loss, enabling the use of Euclidean loss for correlation maximization. To overcome common Euclidean regression optimization problems, we modify well-known techniques to our problem, including batch normalization and dropout. We show state of the art results on a number of computer vision matching tasks including MNIST image matching and sentence-image matching on the Flickr8k, Flickr30k and COCO datasets.",
"Recently, there has been substantial interest in using large amounts of unlabeled data to learn word representations which can then be used as features in supervised classifiers for NLP tasks. However, most current approaches are slow to train, do not model the context of the word, and lack theoretical grounding. In this paper, we present a new learning method, Low Rank Multi-View Learning (LR-MVL) which uses a fast spectral method to estimate low dimensional context-specific word representations from unlabeled data. These representation features can then be used with any supervised learner. LR-MVL is extremely fast, gives guaranteed convergence to a global optimum, is theoretically elegant, and achieves state-of-the-art performance on named entity recognition (NER) and chunking problems.",
"",
"We introduce a new framework, namely tensor canonical correlation analysis (TCCA) which is an extension of classical canonical correlation analysis (CCA) to multidimensional data arrays (or tensors) and apply this for action gesture classification in videos. By tensor CCA, joint space-time linear relationships of two video volumes are inspected to yield flexible and descriptive similarity features of the two videos. The TCCA features are combined with a discriminative feature selection scheme and a nearest neighbor classifier for action classification. In addition, we propose a time-efficient action detection method based on dynamic learning of subspaces for tensor CCA for the case that actions are not aligned in the space-time domain. The proposed method delivered significantly better accuracy and comparable detection speed over state-of-the-art methods on the KTH action data set as well as self-recorded hand gesture data sets."
]
}
|
1906.10897
|
2905551908
|
This paper introduces a novel deep learning based method, named bridge neural network (BNN) to dig the potential relationship between two given data sources task by task. The proposed approach employs two convolutional neural networks that project the two data sources into a feature space to learn the desired common representation required by the specific task. The training objective with artificial negative samples is introduced with the ability of mini-batch training and it's asymptotically equivalent to maximizing the total correlation of the two data sources, which is verified by the theoretical analysis. The experiments on the tasks, including pair matching, canonical correlation analysis, transfer learning, and reconstruction demonstrate the state-of-the-art performance of BNN, which may provide new insights into the aspect of common representation learning.
|
To address these problems, this paper proposes the bridge neural network (BNN), which uses lightweight convolutional layers to learn common representations by mining the latent relationship between the given data sources for a specific task. The most closely related works are DCCA @cite_5 , 2WayNet @cite_24 , and the Siamese network @cite_7 , all of which similarly use two networks to learn the similarity between two inputs. The differences between BNN and each of them are described in the next section.
|
{
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_7"
],
"mid": [
"2508827254",
"1523385540",
"2157364932"
],
"abstract": [
"Linking two data sources is a basic building block in numerous computer vision problems. Canonical Correlation Analysis (CCA) achieves this by utilizing a linear optimizer in order to maximize the correlation between the two views. Recent work makes use of non-linear models, including deep learning techniques, that optimize the CCA loss in some feature space. In this paper, we introduce a novel, bi-directional neural network architecture for the task of matching vectors from two data sources. Our approach employs two tied neural network channels that project the two views into a common, maximally correlated space using the Euclidean loss. We show a direct link between the correlation-based loss and Euclidean loss, enabling the use of Euclidean loss for correlation maximization. To overcome common Euclidean regression optimization problems, we modify well-known techniques to our problem, including batch normalization and dropout. We show state of the art results on a number of computer vision matching tasks including MNIST image matching and sentence-image matching on the Flickr8k, Flickr30k and COCO datasets.",
"We introduce Deep Canonical Correlation Analysis (DCCA), a method to learn complex nonlinear transformations of two views of data such that the resulting representations are highly linearly correlated. Parameters of both transformations are jointly learned to maximize the (regularized) total correlation. It can be viewed as a nonlinear extension of the linear method canonical correlation analysis (CCA). It is an alternative to the nonparametric method kernel canonical correlation analysis (KCCA) for learning correlated nonlinear transformations. Unlike KCCA, DCCA does not require an inner product, and has the advantages of a parametric method: training time scales well with data size and the training data need not be referenced when computing the representations of unseen instances. In experiments on two real-world datasets, we find that DCCA learns representations with significantly higher correlation than those learned by CCA and KCCA. We also introduce a novel non-saturating sigmoid function based on the cube root that may be useful more generally in feedforward neural networks.",
"We present a method for training a similarity metric from data. The method can be used for recognition or verification applications where the number of categories is very large and not known during training, and where the number of training samples for a single category is very small. The idea is to learn a function that maps input patterns into a target space such that the L sub 1 norm in the target space approximates the \"semantic\" distance in the input space. The method is applied to a face verification task. The learning process minimizes a discriminative loss function that drives the similarity metric to be small for pairs of faces from the same person, and large for pairs from different persons. The mapping from raw to the target space is a convolutional network whose architecture is designed for robustness to geometric distortions. The system is tested on the Purdue AR face database which has a very high degree of variability in the pose, lighting, expression, position, and artificial occlusions such as dark glasses and obscuring scarves."
]
}
|
1811.05939
|
2951417335
|
We address the issue of domain gap when making use of synthetic data to train a scene-specific object detector and pose estimator. While previous works have shown that the constraints of learning a scene-specific model can be leveraged to create geometrically and photometrically consistent synthetic data, care must be taken to design synthetic content which is as close as possible to the real-world data distribution. In this work, we propose to solve domain gap through the use of appearance randomization to generate a wide range of synthetic objects to span the space of realistic images for training. An ablation study of our results is presented to delineate the individual contribution of different components in the randomization process. We evaluate our method on VIRAT, UA-DETRAC, EPFL-Car datasets, where we demonstrate that using scene specific domain randomized synthetic data is better than fine-tuning off-the-shelf models on limited real data.
|
Synthetic data has been used for many computer vision tasks. Dhome used synthetic models to recognize objects from a single image @cite_12 . For pedestrian detection, computer-generated pedestrian images have been used to train classifiers @cite_10 . 3D simulation has been used for multi-view car detection @cite_23 @cite_29 @cite_7 . Sun and Saenko @cite_22 trained a 2D object detector with synthetic data generated by 3D simulation.
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_23",
"@cite_10",
"@cite_12"
],
"mid": [
"2083544878",
"2059704894",
"2056083843",
"1964201035",
"2789309368",
"2064515151"
],
"abstract": [
"The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.",
"We introduce a new approach for recognizing and reconstructing 3D objects in images. Our approach is based on an analysis by synthesis strategy. A forward synthesis model constructs possible geometric interpretations of the world, and then selects the interpretation that best agrees with the measured visual evidence. The forward model synthesizes visual templates defined on invariant (HOG) features. These visual templates are discriminatively trained to be accurate for inverse estimation. We introduce an efficient \"brute-force\" approach to inference that searches through a large number of candidate reconstructions, returning the optimal one. One benefit of such an approach is that recognition is inherently (re)constructive. We show state of the art performance for detection and reconstruction on two challenging 3D object recognition datasets of cars and cuboids.",
"Estimating the precise pose of a 3D model in an image is challenging; explicitly identifying correspondences is difficult, particularly at smaller scales and in the presence of occlusion. Exemplar classifiers have demonstrated the potential of detection-based approaches to problems where precision is required. In particular, correlation filters explicitly suppress classifier response caused by slight shifts in the bounding box. This property makes them ideal exemplar classifiers for viewpoint discrimination, as small translational shifts can often be confounded with small rotational shifts. However, exemplar based pose-by-detection is not scalable because, as the desired precision of viewpoint estimation increases, the number of exemplars needed increases as well. We present a training framework to reduce an ensemble of exemplar correlation filters for viewpoint estimation by directly optimizing a discriminative objective. We show that the discriminatively reduced ensemble outperforms the state-of-the-art on three publicly available datasets and we introduce a new dataset for continuous car pose estimation in street scene images.",
"Current object class recognition systems typically target 2D bounding box localization, encouraged by benchmark data sets, such as Pascal VOC. While this seems suitable for the detection of individual objects, higher-level applications such as 3D scene understanding or 3D object tracking would benefit from more fine-grained object hypotheses incorporating 3D geometric information, such as viewpoints or the locations of individual parts. In this paper, we help narrowing the representational gap between the ideal input of a scene understanding system and object class detector output, by designing a detector particularly tailored towards 3D geometric reasoning. In particular, we extend the successful discriminatively trained deformable part models to include both estimates of viewpoint and 3D parts that are consistent across viewpoints. We experimentally verify that adding 3D geometric information comes at minimal performance loss w.r.t. 2D bounding box localization, but outperforms prior work in 3D viewpoint estimation and ultra-wide baseline matching.",
"We consider scenarios where we have zero instances of real pedestrian data (e.g., a newly installed surveillance system in a novel location in which no labeled real data or unsupervised real data exists yet) and a pedestrian detector must be developed prior to any observations of pedestrians. Given a single image and auxiliary scene information in the form of camera parameters and geometric layout of the scene, our approach infers and generates a large variety of geometrically and photometrically accurate potential images of synthetic pedestrians along with purely accurate ground-truth labels through the use of computer graphics rendering engine. We first present an efficient discriminative learning method that takes these synthetic renders and generates a unique spatially-varying and geometry-preserving pedestrian appearance classifier customized for every possible location in the scene. In order to extend our approach to multi-task learning for further analysis (i.e., estimating pose and segmentation of pedestrians besides detection), we build a more generalized model employing a fully convolutional neural network architecture for multi-task learning leveraging the “free\" ground-truth annotations that can be obtained from our pedestrian synthesizer. We demonstrate that when real human annotated data is scarce or non-existent, our data generation strategy can provide an excellent solution for an array of tasks for human activity analysis including detection, pose estimation and segmentation. Experimental results show that our approach (1) outperforms classical models and hybrid synthetic-real models, (2) outperforms various combinations of off-the-shelf state-of-the-art pedestrian detectors and pose estimators that are trained on real data, and (3) surprisingly, our method using purely synthetic data is able to outperform models trained on real scene-specific data when data is limited.",
"This paper presents a new method that permits to estimate, in the viewer coordinate system, the spatial attitude of an articulated object from a single perspective image. Its principle is based on the interpretation of some image lines as the perspective projection of linear ridges of the object model, and on an iterative search of the model attitude consistent with these projections. The presented method doesn't locate separately the different parts of the object by using for each of them a technics devoted to the localization of rigid object but computes a global attitude which respects the mechanical articulations of the objet. In fact, the geometrical transformations applied to the model to bring it into the correct attitude are obtained in two steps. The first one is devoted to the estimation of the attitude parameters corresponding to a rotation and involves an iterative process. The second step permits by the resolution of a linear system to estimate the translation parameters. The presented experiments correspond to the localization of robot arms from synthetical and real images. The former case permits to appreciate the accuracy of the method since the final result of the pose estimation can be compared with the attitude parameters used to create the synthetical image. The latter case presents an experiment made in an industrial environment and involves the estimation of twelve paramaters since the observed robot arm owns six inner degrees of freedom. The presented method can be useful in some operation driven by remote control."
]
}
|
1811.05939
|
2951417335
|
We address the issue of domain gap when making use of synthetic data to train a scene-specific object detector and pose estimator. While previous works have shown that the constraints of learning a scene-specific model can be leveraged to create geometrically and photometrically consistent synthetic data, care must be taken to design synthetic content which is as close as possible to the real-world data distribution. In this work, we propose to solve domain gap through the use of appearance randomization to generate a wide range of synthetic objects to span the space of realistic images for training. An ablation study of our results is presented to delineate the individual contribution of different components in the randomization process. We evaluate our method on VIRAT, UA-DETRAC, EPFL-Car datasets, where we demonstrate that using scene specific domain randomized synthetic data is better than fine-tuning off-the-shelf models on limited real data.
|
Ganin and Lempitsky proposed a domain adaptation method in which the learned features are invariant to the domain shift @cite_5 . @cite_20 suggested a multichannel autoencoder to reduce the domain gap between real and synthetic data. SimGAN @cite_15 used domain adaptation to train eye-gaze estimation systems on synthetic eye images: a GAN-based refiner converts synthetic images into refined images whose noise distribution resembles that of real eye images.
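The key mechanism behind domain-invariant features in @cite_5 is a gradient reversal layer: identity in the forward pass, negated gradient in the backward pass, so the feature extractor is updated to confuse the domain classifier. The toy linear model below is a hypothetical manual-backprop sketch of that idea, not the authors' implementation.

```python
import numpy as np

lam = 1.0  # reversal strength (a hyperparameter in the original method)

def grl_forward(x):
    return x               # identity in the forward pass

def grl_backward(grad):
    return -lam * grad     # negate (and scale) the gradient on the way back

# Toy check: a linear feature extractor followed by a linear domain head.
rng = np.random.default_rng(0)
w_feat = rng.normal(size=3)                # feature-extractor weights
w_dom = rng.normal(size=3)                 # domain-classifier weights
x = rng.normal(size=3)

feat = w_feat * x
d_score = w_dom @ grl_forward(feat)        # forward pass is unchanged
g_feat = grl_backward(w_dom)               # grad w.r.t. feat, reversed
g_wfeat = g_feat * x                       # chain rule into the extractor
print(np.allclose(g_wfeat, -(w_dom * x)))  # True: extractor gradient is flipped
```

Because the extractor receives the negated gradient, gradient descent on it ascends the domain loss, pushing the features toward domain invariance while the rest of the network trains normally.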
|
{
"cite_N": [
"@cite_5",
"@cite_15",
"@cite_20"
],
"mid": [
"2963826681",
"2567101557",
"2172248380"
],
"abstract": [
"Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.",
"With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors. We make several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training: (i) a 'self-regularization' term, (ii) a local adversarial loss, and (iii) updating the discriminator using a history of refined images. We show that this enables generation of highly realistic images, which we demonstrate both qualitatively and with a user study. We quantitatively evaluate the generated images by training models for gaze estimation and hand pose estimation. We show a significant improvement over using synthetic images, and achieve state-of-the-art results on the MPIIGaze dataset without any labeled real data.",
"We propose a method for using synthetic data to help learning classifiers. Synthetic data, even if generated based on real data, normally results in a shift from the distribution of real data in feature space. To bridge the gap between the real and synthetic data, and jointly learn from synthetic and real data, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed approach, we conduct experiments on two types of datasets. Experimental results on two datasets validate the efficiency of our MCAE model and our methodology of generating synthetic data."
]
}
|
1811.05939
|
2951417335
|
We address the issue of domain gap when making use of synthetic data to train a scene-specific object detector and pose estimator. While previous works have shown that the constraints of learning a scene-specific model can be leveraged to create geometrically and photometrically consistent synthetic data, care must be taken to design synthetic content which is as close as possible to the real-world data distribution. In this work, we propose to solve domain gap through the use of appearance randomization to generate a wide range of synthetic objects to span the space of realistic images for training. An ablation study of our results is presented to delineate the individual contribution of different components in the randomization process. We evaluate our method on VIRAT, UA-DETRAC, EPFL-Car datasets, where we demonstrate that using scene specific domain randomized synthetic data is better than fine-tuning off-the-shelf models on limited real data.
|
@cite_24 used domain randomization to fly a quadrotor through indoor environments. @cite_9 trained an agent to play Doom that generalizes to unseen game levels. @cite_16 , @cite_25 , @cite_26 , @cite_14 used domain randomization for grasping objects. Tremblay et al. performed car detection with domain randomization @cite_17 . @cite_18 proposed object orientation estimation for industrial part shapes that is trained solely on synthetic views rendered from a 3D model, using domain randomization to reduce the gap between synthetic and real data.
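The common thread in these works is that every synthetic training image is rendered with scene parameters drawn from deliberately wide ranges, so the real world appears to the model as just another variation. A hypothetical sketch (the parameter names and ranges below are illustrative assumptions, not taken from any of the cited papers):

```python
import random

# Sketch of domain randomization: draw scene parameters from wide,
# non-realistic ranges for each synthetic training image.

def sample_scene_params(rng):
    return {
        "light_intensity": rng.uniform(0.2, 3.0),   # over-wide lighting range
        "light_azimuth_deg": rng.uniform(0.0, 360.0),
        "texture_id": rng.randrange(1000),          # random, non-realistic textures
        "camera_height_m": rng.uniform(0.5, 3.0),
        "object_yaw_deg": rng.uniform(0.0, 360.0),
        "num_distractors": rng.randrange(0, 10),    # clutter / occluders
    }

if __name__ == "__main__":
    rng = random.Random(0)   # seeded for reproducibility
    for _ in range(3):
        print(sample_scene_params(rng))
```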
|
{
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_14",
"@cite_9",
"@cite_24",
"@cite_16",
"@cite_25",
"@cite_17"
],
"mid": [
"2952188290",
"2766614170",
"2767050701",
"2952578114",
"2565902248",
"2605102758",
"2725320964",
"2796981088"
],
"abstract": [
"We propose a real-time RGB-based pipeline for object detection and 6D pose estimation. Our novel 3D orientation estimation is based on a variant of the Denoising Autoencoder that is trained on simulated views of a 3D model using Domain Randomization. This so-called Augmented Autoencoder has several advantages over existing methods: It does not require real, pose-annotated training data, generalizes to various test sensors and inherently handles object and view symmetries. Instead of learning an explicit mapping from input images to object poses, it provides an implicit representation of object orientations defined by samples in a latent space. Experiments on the T-LESS and LineMOD datasets show that our method outperforms similar model-based approaches and competes with state-of-the art approaches that require real pose-annotated images.",
"Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data.",
"Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this \"reality gap.\" By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.",
"We present an approach to sensorimotor control in immersive environments. Our approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream. The cotemporal structure of these streams provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment. The model is trained using supervised learning techniques, but without extraneous supervision. It learns to act based on raw sensory input from a complex three-dimensional environment. The presented formulation enables learning without a fixed goal at training time, and pursuing dynamically changing goals at test time. We conduct extensive experiments in three-dimensional simulations based on the classical first-person game Doom. The results demonstrate that the presented approach outperforms sophisticated prior formulations, particularly on challenging tasks. The results also show that trained models successfully generalize across environments and goals. A model trained using the presented approach won the Full Deathmatch track of the Visual Doom AI Competition, which was held in previously unseen environments.",
"Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navigation policies entirely in simulation, and then transfer them into the real world to achieve real-world flight without a single real training image? We propose a learning method that we call CAD @math RL, which can be used to perform collision-free indoor flight in the real world while being trained entirely on 3D CAD models. Our method uses single RGB images from a monocular camera, without needing to explicitly reconstruct the 3D geometry of the environment or perform explicit motion planning. Our learned collision avoidance policy is represented by a deep convolutional neural network that directly processes raw monocular images and outputs velocity commands. This policy is trained entirely on simulated images, with a Monte Carlo policy evaluation algorithm that directly optimizes the network's ability to produce collision-free flight. By highly randomizing the rendering settings for our simulated training set, we show that we can train a policy that generalizes to the real world, without requiring the simulator to be particularly realistic or high-fidelity. We evaluate our method by flying a real quadrotor through indoor environments, and further evaluate the design choices in our simulator through a series of ablation studies on depth prediction. For supplementary video see: this https URL",
"Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"End-to-end control for robot manipulation and grasping is emerging as an attractive alternative to traditional pipelined approaches. However, end-to-end methods tend to either be slow to train, exhibit little or no generalisability, or lack the ability to accomplish long-horizon or multi-stage tasks. In this paper, we show how two simple techniques can lead to end-to-end (image to velocity) execution of a multi-stage task, which is analogous to a simple tidying routine, without having seen a single real image. This involves locating, reaching for, and grasping a cube, then locating a basket and dropping the cube inside. To achieve this, robot trajectories are computed in a simulator, to collect a series of control velocities which accomplish the task. Then, a CNN is trained to map observed images to velocities, using domain randomisation to enable generalisation to real world images. Results show that we are able to successfully accomplish the task in the real world with the ability to generalise to novel environments, including those with dynamic lighting conditions, distractor objects, and moving objects, including the basket itself. We believe our approach to be simple, highly scalable, and capable of learning long-horizon tasks that have until now not been shown with the state-of-the-art in end-to-end robot control.",
"We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator @math such as lighting, pose, object textures, etc. @math are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds @math both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset."
]
}
|
1811.05945
|
2901787515
|
Writing desktop applications in JavaScript offers developers the opportunity to create cross-platform applications with cutting-edge capabilities. However, in doing so, they are potentially submitting their code to a number of unsanctioned modifications from malicious actors. Electron is one such JavaScript application framework which facilitates this multi-platform out-the-box paradigm and is based upon the Node.js JavaScript runtime—an increasingly popular server-side technology. By bringing this technology to the client-side environment, previously unrealized risks are exposed to users due to the powerful system programming interface that Node.js exposes. In a concerted effort to highlight previously unexposed risks in these rapidly expanding frameworks, this paper presents the Mayall Framework, an extensible toolkit aimed at JavaScript security auditing and post-exploitation analysis. This paper also exposes fifteen highly popular Electron applications and demonstrates that two-thirds of applications were found to be using known vulnerable elements with high CVSS (Common Vulnerability Scoring System) scores. Moreover, this paper discloses a wide-reaching and overlooked vulnerability within the Electron Framework which is a direct byproduct of shipping the runtime unaltered with each application, allowing malicious actors to modify source code and inject covert malware inside verified and signed applications without restriction. Finally, a number of injection vectors are explored and appropriate remediations are proposed.
|
While eval() is arguably the most relevant security risk when applied to modern cross-platform JavaScript applications due to its alarming prevalence in Node.js, there remain a large number of additional JavaScript vulnerabilities that apply to both web and desktop JavaScript, the most common of which is cross-site scripting (XSS) @cite_31 . Ranked third in the OWASP Top 10 vulnerabilities list, XSS works similarly to injection vulnerabilities (such as those described above) in that it arises from the mishandling of user input and can result in arbitrary JavaScript execution within the web application @cite_27 . Furthermore, JavaScript is used in the development of browser add-ons, e.g. WebExtensions in Mozilla Firefox. Such vulnerabilities have the potential to allow for the development of malicious extensions, posing a security risk @cite_17 .
|
{
"cite_N": [
"@cite_27",
"@cite_31",
"@cite_17"
],
"mid": [
"2551973953",
"",
"1972700774"
],
"abstract": [
"In 2014 over 70 of people in Great Britain accessed the Internet every day. This resource is an optimal vector for malicious attackers to penetrate home computers and as such compromised pages have been increasing in both number and complexity. This paper presents X-Secure, a novel browser plug-in designed to present and raise the awareness of inexperienced users by analysing web-pages before malicious scripts are executed by the host computer. X-Secure was able to detect over 90 of the tested attacks and provides a danger level based on cumulative analysis of the source code, the URL, and the remote server, by using a set of heuristics, hence increasing the situational awareness of users browsing the internet.",
"",
"Despite the number of tools created to help end-users reduce risky security behaviours, users are still falling victim to online attacks. This paper proposes a browser extension utilising affective feedback to provide warnings on detection of risky behaviour. The paper provides an overview of behaviour considered to be risky, explaining potential threats users may face online. Existing tools developed to reduce risky security behaviours in end-users have been compared, discussing the success rates of various methodologies. Ongoing research is described which attempts to educate users regarding the risks and consequences of poor security behaviour by providing the appropriate feedback on the automatic recognition of risky behaviour. The paper concludes that a solution utilising a browser extension is a suitable method of monitoring potentially risky security behaviour. Ultimately, future work seeks to implement an affective feedback mechanism within the browser extension with the aim of improving security awareness."
]
}
|
1811.05894
|
2969890494
|
In this work, we outline the set of problems, which any neural network for object detection faces when its development comes to the deployment stage and propose methods to deal with such difficulties. We show that these practices allow one to get neural network for object detection, which can recognize two classes: vehicles and pedestrians and achieves more than 60 frames per second inference speed on Core (^ TM ) i5-6500 CPU. The proposed model is built on top of the popular Single Shot MultiBox Object Detection framework but with substantial improvements, which were inspired by the discovered problems. The network has just 1.96 GMAC (GMAC – billions of multiply-accumulate operations) complexity and less than 7 MB model size. It is publicly available as a part of Intel® OpenVINO (^ TM ) Toolkit.
|
There are two major groups of deep-learning object detectors (OD): one-stage and two-stage methods. Among two-stage methods, Faster R-CNN @cite_25 provides the best quality but is the slowest. R-FCN @cite_31 aims to improve speed by sharing all computation through position-sensitive score maps, at the cost of accuracy. One-stage methods such as SSD @cite_10 are the fastest; however, their speed degrades on high-resolution input.
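The speed of one-stage detectors like SSD comes from predicting directly over a fixed grid of default boxes instead of proposing regions. The sketch below is a simplified version of SSD-style default box layout on a single feature map (the scale and aspect-ratio set are illustrative):

```python
# Sketch of SSD-style default (anchor) box generation for one feature map.
# Each cell of an f x f feature map gets one box per aspect ratio,
# centered on the cell, in normalized [0, 1] image coordinates.

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx = (j + 0.5) / fmap_size
            cy = (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                w = scale * ar ** 0.5   # wider box for ar > 1
                h = scale / ar ** 0.5   # taller box for ar < 1
                boxes.append((cx, cy, w, h))
    return boxes

if __name__ == "__main__":
    boxes = default_boxes(fmap_size=4, scale=0.2)
    print(len(boxes))   # 4 * 4 * 3 = 48 boxes on this feature map
```

The full detector repeats this over several feature maps of different resolutions, which is how SSD covers multiple object sizes in a single pass.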
|
{
"cite_N": [
"@cite_31",
"@cite_10",
"@cite_25"
],
"mid": [
"2950800384",
"2193145675",
"2613718673"
],
"abstract": [
"We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL",
"We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn."
]
}
|
1811.05894
|
2969890494
|
In this work, we outline the set of problems, which any neural network for object detection faces when its development comes to the deployment stage and propose methods to deal with such difficulties. We show that these practices allow one to get neural network for object detection, which can recognize two classes: vehicles and pedestrians and achieves more than 60 frames per second inference speed on Core (^ TM ) i5-6500 CPU. The proposed model is built on top of the popular Single Shot MultiBox Object Detection framework but with substantial improvements, which were inspired by the discovered problems. The network has just 1.96 GMAC (GMAC – billions of multiply-accumulate operations) complexity and less than 7 MB model size. It is publicly available as a part of Intel® OpenVINO (^ TM ) Toolkit.
|
An important part of research is devoted to the design of lightweight backbones that can perform on par with the top classification networks. CNNs that utilize depth-wise convolutions @cite_0 , @cite_27 achieve a dramatic parameter reduction and faster inference. The authors of @cite_30 show that only SSD-like ODs can adopt lightweight backbones without a large drop in accuracy.
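The parameter reduction is easy to quantify: a standard k×k convolution costs k·k·Cin·Cout weights, while the depth-wise + point-wise factorization costs k·k·Cin + Cin·Cout. A small sketch (biases omitted):

```python
# Compare parameter counts of a standard convolution vs. its depthwise
# separable factorization (depthwise k x k + pointwise 1 x 1), no bias.

def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 128, 128
    std = standard_conv_params(k, c_in, c_out)    # 147456
    sep = separable_conv_params(k, c_in, c_out)   # 17536
    print(std, sep, round(std / sep, 1))          # roughly an 8.4x reduction
```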
|
{
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_30"
],
"mid": [
"2612445135",
"2531409750",
"2557728737"
],
"abstract": [
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.",
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-toapples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [30], R-FCN [6] and SSD [25] systems, which we view as meta-architectures and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task."
]
}
|
1811.05894
|
2969890494
|
In this work, we outline the set of problems, which any neural network for object detection faces when its development comes to the deployment stage and propose methods to deal with such difficulties. We show that these practices allow one to get neural network for object detection, which can recognize two classes: vehicles and pedestrians and achieves more than 60 frames per second inference speed on Core (^ TM ) i5-6500 CPU. The proposed model is built on top of the popular Single Shot MultiBox Object Detection framework but with substantial improvements, which were inspired by the discovered problems. The network has just 1.96 GMAC (GMAC – billions of multiply-accumulate operations) complexity and less than 7 MB model size. It is publicly available as a part of Intel® OpenVINO (^ TM ) Toolkit.
|
Most modern ODs are based on backbones pre-trained on ImageNet. In many cases, pre-training is a separate task that usually requires a lot of time. Recent works @cite_22 , @cite_5 suggest ways to specifically design a CNN that can be trained from scratch directly for OD. Here we propose steps to train a lightweight OD directly, without specifically designed CNN blocks or many hours of backbone pre-training on additional data.
|
{
"cite_N": [
"@cite_5",
"@cite_22"
],
"mid": [
"2772989637",
"2963813458"
],
"abstract": [
"In this paper, we propose gated recurrent feature pyramid for the problem of learning object detection from scratch. Our approach is motivated by the recent work of deeply supervised object detector (DSOD), but explores new network architecture that dynamically adjusts the supervision intensities of intermediate layers for various scales in object detection. The benefits of the proposed method are two-fold: First, we propose a recurrent feature-pyramid structure to squeeze rich spatial and semantic features into a single prediction layer that further reduces the number of parameters to learn (DSOD need learn 1 2, but our method need only 1 3). Thus our new model is more fit for learning from scratch, and can converge faster than DSOD (using only 50 of iterations). Second, we introduce a novel gate-controlled prediction strategy to adaptively enhance or attenuate supervision at different scales based on the input object size. As a result, our model is more suitable for detecting small objects. To the best of our knowledge, our study is the best performed model of learning object detection from scratch. Our method in the PASCAL VOC 2012 comp3 leaderboard (which compares object detectors that are trained only with PASCAL VOC data) demonstrates a significant performance jump, from previous 64 to our 77 (VOC 07++12) and 72.5 (VOC 12). We also evaluate the performance of our method on PASCAL VOC 2007, 2012 and MS COCO datasets, and find that the accuracy of our learning from scratch method can even beat a lot of the state-of-the-art detection methods which use pre-trained models from ImageNet. Code is available at: this https URL .",
"We present Deeply Supervised Object Detector (DSOD), a framework that can learn object detectors from scratch. State-of-the-art object objectors rely heavily on the off the-shelf networks pre-trained on large-scale classification datasets like Image Net, which incurs learning bias due to the difference on both the loss functions and the category distributions between classification and detection tasks. Model fine-tuning for the detection task could alleviate this bias to some extent but not fundamentally. Besides, transferring pre-trained models from classification to detection between discrepant domains is even more difficult (e.g. RGB to depth images). A better solution to tackle these two critical problems is to train object detectors from scratch, which motivates our proposed DSOD. Previous efforts in this direction mostly failed due to much more complicated loss functions and limited training data in object detection. In DSOD, we contribute a set of design principles for training object detectors from scratch. One of the key findings is that deep supervision, enabled by dense layer-wise connections, plays a critical role in learning a good detector. Combining with several other principles, we develop DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better results than the state-of-the-art solutions with much more compact models. For instance, DSOD outperforms SSD on all three benchmarks with real-time detection speed, while requires only 1 2 parameters to SSD and 1 10 parameters to Faster RCNN."
]
}
|
1811.05894
|
2969890494
|
In this work, we outline the set of problems, which any neural network for object detection faces when its development comes to the deployment stage and propose methods to deal with such difficulties. We show that these practices allow one to get neural network for object detection, which can recognize two classes: vehicles and pedestrians and achieves more than 60 frames per second inference speed on Core (^ TM ) i5-6500 CPU. The proposed model is built on top of the popular Single Shot MultiBox Object Detection framework but with substantial improvements, which were inspired by the discovered problems. The network has just 1.96 GMAC (GMAC – billions of multiply-accumulate operations) complexity and less than 7 MB model size. It is publicly available as a part of Intel® OpenVINO (^ TM ) Toolkit.
|
Usually, before deploying any object detector (OD), one must select a confidence threshold for the detector: detections with confidence above this value are treated as positives, while those below it are discarded as false positives. Consequently, when running a good OD, the box around an object is visible most of the time, but it occasionally blinks. This happens when the confidence of the detected object drops below the threshold, so the detection is filtered out, meaning the object is missed in some frames. Such situations can be compensated for with trackers @cite_6 . However, a tracker is a separate algorithm that can be computationally expensive. We outline an extremely cheap tracking strategy, based on re-detection, which exploits the nature of the OD.
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2291627510"
],
"abstract": [
"Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for reseach. Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes beside pedestrians and the level of visibility for every single object of interest."
]
}
|
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Deep learning based methods for segmentation in medical imaging have been extensively explored in recent years @cite_40 and vary in how they handle the task. Most of the initial work focused on patch-based segmentation @cite_50 , predating the pioneering deep learning models. With the growing interest in deep learning for several computer vision tasks, the first attempts at using Convolutional Neural Networks (CNNs) for image segmentation processed image patches through a sliding window, yielding segmented patches. These independently segmented patches were then concatenated to create the final segmented image @cite_23 . The main drawbacks of this approach are its computational cost, since several forward passes are needed to generate the final result, and its inconsistent predictions, which can be mitigated by overlapping the sliding windows.
|
{
"cite_N": [
"@cite_40",
"@cite_23",
"@cite_50"
],
"mid": [
"2592929672",
"274818618",
"2010587020"
],
"abstract": [
"Abstract Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskelet al. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research.",
"This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3-dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer's Disease. We found that a slightly unconventional \"stacked 2D\" approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular \"tri-planar\" approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement.",
"Quantitative magnetic resonance analysis often requires accurate, robust, and reliable automatic extraction of anatomical structures. Recently, template-warping methods incorporating a label fusion strategy have demonstrated high accuracy in segmenting cerebral structures. In this study, we propose a novel patch-based method using expert manual segmentations as priors to achieve this task. Inspired by recent work in image denoising, the proposed nonlocal patch-based label fusion produces accurate and robust segmentation. Validation with two different datasets is presented. In our experiments, the hippocampi of 80 healthy subjects and the lateral ventricles of 80 patients with Alzheimer's disease were segmented. The influence on segmentation accuracy of different parameters such as patch size and number of training subjects was also studied. A comparison with an appearance-based method and a template-based method was also carried out. The highest median kappa index values obtained with the proposed method were 0.884 for hippocampus segmentation and 0.959 for lateral ventricle segmentation."
]
}
|
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Even though patch-wise methods are still being researched @cite_47 and have led to several advances in segmentation @cite_23 , the most common deep architecture for segmentation nowadays is the so-called Fully Convolutional Network (FCN) @cite_38 . This architecture is based solely on convolutional layers, with the final result not depending on any fully-connected layers. FCNs can provide a fully-segmented image in a single forward pass, with an output size that varies with the input tensor size. One of the most well-known FCNs for medical imaging is U-Net @cite_31 , which combines convolutional, downsampling, and upsampling operations with non-residual skip connections. We make use of U-Net throughout this work, aiming for generalizable conclusions. This architecture is further discussed in .
|
{
"cite_N": [
"@cite_38",
"@cite_47",
"@cite_31",
"@cite_23"
],
"mid": [
"2952632681",
"2949362008",
"2952232639",
"274818618"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .",
"This report provides an overview of the current state of the art deep learning architectures and optimisation techniques, and uses the ADNI hippocampus MRI dataset as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3-dimensional hippocampal segmentation, which is important in the diagnosis of Alzheimer's Disease. We found that a slightly unconventional \"stacked 2D\" approach provides much better classification performance than simple 2D patches without requiring significantly more computational power. We also examined the popular \"tri-planar\" approach used in some recently published studies, and found that it provides much better results than the 2D approaches, but also with a moderate increase in computational power requirement. Finally, we evaluated a full 3D convolutional architecture, and found that it provides marginally better results than the tri-planar approach, but at the cost of a very significant increase in computational power requirement."
]
}
|
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Deep Domain Adaptation (DDA), a field not intrinsically tied to medical imaging, has been widely studied in recent years @cite_35 . We can divide the DDA literature as follows: (i) methods that build domain-invariant feature spaces through auto-encoders @cite_19 , adversarial training @cite_29 , GANs @cite_53 @cite_43 , or disentanglement strategies @cite_16 @cite_0 ; (ii) methods based on the analysis of higher-order statistics @cite_26 @cite_48 ; (iii) methods based on explicit discrepancy between source and target domains @cite_20 ; and (iv) methods based on implicit discrepancy between domains, also known as self-ensembling @cite_34 @cite_28 .
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_28",
"@cite_48",
"@cite_29",
"@cite_53",
"@cite_34",
"@cite_0",
"@cite_19",
"@cite_43",
"@cite_16",
"@cite_20"
],
"mid": [
"",
"2299668505",
"2592691248",
"2467286621",
"1731081199",
"2767657961",
"2767722847",
"",
"2950790587",
"2605488490",
"2951380757",
"1565327149"
],
"abstract": [
"",
"Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study ( 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance.",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .",
"Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a \"frustratingly easy\" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.",
"We introduce a new representation learning approach for domain adaptation, in which data at training and test time come from similar but different distributions. Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains. The approach implements this idea in the context of neural network architectures that are trained on labeled data from the source domain and unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of features that are (i) discriminative for the main learning task on the source domain and (ii) indiscriminate with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a new gradient reversal layer. The resulting augmented architecture can be trained using standard backpropagation and stochastic gradient descent, and can thus be implemented with little effort using any of the deep learning packages. We demonstrate the success of our approach for two distinct classification problems (document sentiment analysis and image classification), where state-of-the-art domain adaptation performance on standard benchmarks is achieved. We also validate the approach for descriptor learning task in the context of person re-identification application.",
"Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.",
"This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (, 2017) of temporal ensembling (;, 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion.",
"",
"In this paper, we propose a novel unsupervised domain adaptation algorithm based on deep learning for visual object recognition. Specifically, we design a new model called Deep Reconstruction-Classification Network (DRCN), which jointly learns a shared encoding representation for two tasks: i) supervised classification of labeled source data, and ii) unsupervised reconstruction of unlabeled target data.In this way, the learnt representation not only preserves discriminability, but also encodes useful information from the target domain. Our new DRCN model can be optimized by using backpropagation similarly as the standard neural networks. We evaluate the performance of DRCN on a series of cross-domain object recognition tasks, where DRCN provides a considerable improvement (up to 8 in accuracy) over the prior state-of-the-art algorithms. Interestingly, we also observe that the reconstruction pipeline of DRCN transforms images from the source domain into images whose appearance resembles the target dataset. This suggests that DRCN's performance is due to constructing a single composite representation that encodes information about both the structure of target images and the classification of source images. Finally, we provide a formal analysis to justify the algorithm's objective in domain adaptation context.",
"Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain adaptation from synthetic to real data. Our method achieves state-of-the art performance in most experimental settings and by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.",
"While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further results in such features so that particular image attributes can be identified and manipulated. However, one cannot easily address this task without observing ground truth annotation for the training data. To address this problem, we propose a novel deep learning model of Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly. Thus, cross-domain joint feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we further confirm that our model can be applied for solving classification tasks of unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods.",
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."
]
}
|
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
In @cite_53 , the authors train GANs with cycle-consistent loss functions @cite_44 to remap the distribution from the source to the target dataset, thus creating target-domain-specific features for completing the task. In @cite_43 , GANs are employed as a means of learning aligned embeddings for both domains. Similarly, disentangled representations for each domain have been proposed @cite_16 @cite_5 with the goal of generating a feature space capable of separating domain-dependent and domain-invariant information.
|
{
"cite_N": [
"@cite_53",
"@cite_44",
"@cite_43",
"@cite_5",
"@cite_16"
],
"mid": [
"2767657961",
"2962793481",
"2605488490",
"2804532943",
"2951380757"
],
"abstract": [
"Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes demonstrating transfer from synthetic to real world domains.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.",
"Domain Adaptation is an actively researched problem in Computer Vision. In this work, we propose an approach that leverages unsupervised data to bring the source and target distributions closer in a learned joint feature space. We accomplish this by inducing a symbiotic relationship between the learned embedding and a generative adversarial network. This is in contrast to methods which use the adversarial framework for realistic data generation and retraining deep models with such data. We demonstrate the strength and generality of our approach by performing experiments on three different tasks with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain adaptation from synthetic to real data. Our method achieves state-of-the art performance in most experimental settings and by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.",
"Unsupervised domain adaptation aims at learning a shared model for two related, but not identical, domains by leveraging supervision from a source domain to an unsupervised target domain. A number of effective domain adaptation approaches rely on the ability to extract discriminative, yet domain-invariant, latent factors which are common to both domains. Extracting latent commonality is also useful for disentanglement analysis, enabling separation between the common and the domain-specific features of both domains. In this paper, we present a method for boosting domain adaptation performance by leveraging disentanglement analysis. The key idea is that by learning to separately extract both the common and the domain-specific features, one can synthesize more target domain data with supervision, thereby boosting the domain adaptation performance. Better common feature extraction, in turn, helps further improve the disentanglement analysis and disentangled synthesis. We show that iterating between domain adaptation and disentanglement analysis can consistently improve each other on several unsupervised domain adaptation tasks, for various domain adaptation backbone models.",
"While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further results in such features so that particular image attributes can be identified and manipulated. However, one cannot easily address this task without observing ground truth annotation for the training data. To address this problem, we propose a novel deep learning model of Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly. Thus, cross-domain joint feature disentanglement and adaptation can be jointly performed. In the experiments, we provide qualitative results to verify our disentanglement capability. Moreover, we further confirm that our model can be applied for solving classification tasks of unsupervised domain adaptation, and performs favorably against state-of-the-art image disentanglement and translation methods."
]
}
|
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
In @cite_26 , the authors propose changing parameters of the neural network layers for adapting domains by directly computing or optimizing higher-order statistics. More specifically, they propose an alternative to batch normalization called Adaptive Batch Normalization (AdaBN) that computes different statistics for the source and target domains, hence creating domain-invariant features that are normalized according to the respective domain. In a similar fashion, Deep CORAL @cite_48 provides a loss function for minimizing the distance between the covariances of target- and source-domain features.
|
{
"cite_N": [
"@cite_48",
"@cite_26"
],
"mid": [
"2467286621",
"2299668505"
],
"abstract": [
"Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL is a \"frustratingly easy\" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance.",
"Deep neural networks (DNN) have shown unprecedented success in various computer vision applications such as image classification and object detection. However, it is still a common annoyance during the training phase, that one has to prepare at least thousands of labeled images to fine-tune a network to a specific domain. Recent study ( 2015) shows that a DNN has strong dependency towards the training dataset, and the learned features cannot be easily transferred to a different but relevant task without fine-tuning. In this paper, we propose a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN. By modulating the statistics in all Batch Normalization layers across the network, our approach achieves deep adaptation effect for domain adaptation tasks. In contrary to other deep learning domain adaptation methods, our method does not require additional components, and is parameter-free. It archives state-of-the-art performance despite its surprising simplicity. Furthermore, we demonstrate that our method is complementary with other existing methods. Combining AdaBN with existing domain adaptation treatments may further improve model performance."
]
}
|
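The CORAL objective described in the record above aligns the second-order statistics of source and target features. A minimal NumPy sketch of that loss, assuming batch-of-features inputs (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def coral_loss(source, target):
    """Squared Frobenius distance between the feature covariance
    matrices of a source batch and a target batch.

    source, target: (n_samples, d) activation matrices.
    """
    d = source.shape[1]
    # Covariance of each batch (rows are samples, columns are features).
    cov_s = np.cov(source, rowvar=False)
    cov_t = np.cov(target, rowvar=False)
    # 1 / (4 d^2) normalization as in the Deep CORAL formulation.
    return float(np.sum((cov_s - cov_t) ** 2) / (4.0 * d ** 2))

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(256, 8))
tgt = rng.normal(0.0, 2.0, size=(256, 8))  # different scale, so covariances differ
print(coral_loss(src, src) == 0.0)  # identical batches give zero loss
print(coral_loss(src, tgt) > 0.0)
```

In a deep model this term would be added to the task loss on a chosen layer's activations, pulling the two domains' feature covariances together during training.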
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Discrepancy-based methods take a different approach to DDA. By directly minimizing the discrepancy between activations from the source and target domains, the network learns to generate reasonable predictions while incorporating information from the target domain. The seminal work of @cite_20 directly minimizes the discrepancy of a specific layer's activations between labeled samples from the source set and unlabeled samples from the target set.
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1565327149"
],
"abstract": [
"Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task."
]
}
|
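The layer-wise discrepancy described in the record above is commonly measured with Maximum Mean Discrepancy (MMD). A linear-kernel MMD, the squared distance between the two domains' mean activations, is one simple instance; this sketch is illustrative and not the cited paper's exact implementation:

```python
import numpy as np

def linear_mmd(source, target):
    """Linear-kernel MMD: squared Euclidean distance between the mean
    activations of the source and target batches at a given layer."""
    diff = source.mean(axis=0) - target.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(0)
src = rng.normal(size=(128, 16))
tgt = rng.normal(size=(128, 16)) + 1.0  # mean-shifted target domain
print(linear_mmd(src, src) == 0.0)      # no discrepancy with itself
print(linear_mmd(src, tgt) > linear_mmd(src, src))
```

During adaptation this discrepancy would be minimized jointly with the supervised source-domain loss, using only unlabeled target samples for the MMD term.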
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Implicit discrepancy-based methods such as self-ensembling @cite_34 have become widely used for unsupervised domain adaptation. Self-ensembling is based on the Mean Teacher network @cite_28 , which was first introduced for semi-supervised learning tasks. Due to the similarity between unsupervised domain adaptation and semi-supervised learning, there are very few adjustments that need to be made to employ the method for the purposes of DDA. Mean Teacher optimizes a task loss and a consistency loss, the latter minimizing the discrepancy between predictions on the source and target dataset. We further detail how Mean Teacher works in .
|
{
"cite_N": [
"@cite_28",
"@cite_34"
],
"mid": [
"2592691248",
"2767722847"
],
"abstract": [
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .",
"This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (, 2017) of temporal ensembling (;, 2017), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion."
]
}
|
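The two ingredients of Mean Teacher named in the record above, the weight-averaged teacher and the consistency loss, can be sketched in a few lines. A minimal NumPy sketch with illustrative names (real implementations apply this per-parameter in a deep network):

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean Teacher: the teacher's weights are an exponential moving
    average of the student's weights, updated after every training step."""
    return alpha * teacher_w + (1.0 - alpha) * student_w

def consistency_loss(student_probs, teacher_probs):
    """Mean squared error between student and teacher predictions on the
    same (differently augmented) unlabeled inputs."""
    return float(np.mean((student_probs - teacher_probs) ** 2))

teacher = np.zeros(4)
student = np.ones(4)  # stands in for the student's current weights
for _ in range(200):
    teacher = ema_update(teacher, student)
print(np.allclose(teacher, student, atol=0.2))   # teacher tracks the student
print(consistency_loss(teacher, teacher) == 0.0)  # agreement costs nothing
```

For unsupervised domain adaptation, the consistency term is computed on unlabeled target-domain inputs while the task loss uses labeled source-domain inputs, which is the small adjustment relative to semi-supervised learning.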
1811.06042
|
2964184998
|
Abstract Recent advances in deep learning methods have redefined the state-of-the-art for many medical imaging applications, surpassing previous approaches and sometimes even competing with human judgment in several tasks. Those models, however, when trained to reduce the empirical risk on a single domain, fail to generalize when applied to other domains, a very common scenario in medical imaging due to the variability of images and anatomical structures, even across the same imaging modality. In this work, we extend the method of unsupervised domain adaptation using self-ensembling for the semantic segmentation task and explore multiple facets of the method on a small and realistic publicly-available magnetic resonance (MRI) dataset. Through an extensive evaluation, we show that self-ensembling can indeed improve the generalization of the models even when using a small amount of unlabeled data.
|
Few studies report on the consequences of domain discrepancy in medical imaging by drawing on the unsupervised domain adaptation literature. The work in @cite_51 discusses the impact of deep learning models across different institutions, showing a statistically significant performance decrease in cross-institutional train-and-test protocols. A few studies directly approach domain adaptation in medical imaging through adversarial training @cite_11 @cite_41 @cite_14 @cite_21 @cite_12 @cite_42 , some of them generating artificial images to augment training data @cite_32 @cite_1 . Nevertheless, to the best of our knowledge, we are the first to address the problem of domain shift in medical imaging segmentation by extending the unsupervised domain adaptation self-ensembling method to the semantic segmentation task.
|
{
"cite_N": [
"@cite_14",
"@cite_41",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_1",
"@cite_51",
"@cite_12",
"@cite_11"
],
"mid": [
"2809397050",
"2807687356",
"2964152645",
"2798785261",
"2770363598",
"2806321514",
"2791655542",
"2805899143",
"2562469482"
],
"abstract": [
"Automatic parsing of anatomical objects in X-ray images is critical to many clinical applications in particular towards image-guided invention and workflow automation. Existing deep network models require a large amount of labeled data. However, obtaining accurate pixel-wise labeling in X-ray images relies heavily on skilled clinicians due to the large overlaps of anatomy and the complex texture patterns. On the other hand, organs in 3D CT scans preserve clearer structures as well as sharper boundaries and thus can be easily delineated. In this paper, we propose a novel model framework for learning automatic X-ray image parsing from labeled CT scans. Specifically, a Dense Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. Then we introduce a Task Driven Generative Adversarial Network (TD-GAN) architecture to achieve simultaneous style transfer and parsing for unseen real X-ray images. TD-GAN consists of a modified cycle-GAN substructure for pixel-to-pixel translation between DRRs and X-ray images and an added module leveraging the pre-trained DI2I to enforce segmentation consistency. The TD-GAN framework is general and can be easily adapted to other learning tasks. In the numerical experiments, we validate the proposed model on 815 DRRs and 153 topograms. While the vanilla DI2I without any adaptation fails completely on segmenting the topograms, the proposed model does not require any topogram labels and is able to provide a promising average dice of (85 ) which achieves the same level accuracy of supervised training (88 ).",
"In spite of the compelling achievements that deep neural networks (DNNs) have made in medical image computing, these deep models often suffer from degraded performance when being applied to new test datasets with domain shift. In this paper, we present a novel unsupervised domain adaptation approach for segmentation tasks by designing semantic-aware generative adversarial networks (GANs). Specifically, we transform the test image into the appearance of source domain, with the semantic structural information being well preserved, which is achieved by imposing a nested adversarial learning in semantic label space. In this way, the segmentation DNN learned from the source domain is able to be directly generalized to the transformed test image, eliminating the need of training a new model for every new target dataset. Our domain adaptation procedure is unsupervised, without using any target domain labels. The adversarial learning of our network is guided by a GAN loss for mapping data distributions, a cycle-consistency loss for retaining pixel-level content, and a semantic-aware loss for enhancing structural information. We validated our method on two different chest X-ray public datasets for left right lung segmentation. Experimental results show that the segmentation performance of our unsupervised approach is highly competitive with the upper bound of supervised transfer learning.",
"Preparing and scanning histopathology slides consists of several steps, each with a multitude of parameters. The parameters can vary between pathology labs and within the same lab over time, resulting in significant variability of the tissue appearance that hampers the generalization of automatic image analysis methods. Typically, this is addressed with ad-hoc approaches such as staining normalization that aim to reduce the appearance variability. In this paper, we propose a systematic solution based on domain-adversarial neural networks. We hypothesize that removing the domain information from the model representation leads to better generalization. We tested our hypothesis for the problem of mitosis detection in breast cancer histopathology images and made a comparative analysis with two other approaches. We show that combining color augmentation with domain-adversarial training is a better alternative than standard approaches to improve the generalization of deep learning methods.",
"Convolutional networks (ConvNets) have achieved great successes in various challenging vision tasks. However, the performance of ConvNets would degrade when encountering the domain shift. The domain adaptation is more significant while challenging in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating the medical data is especially expensive, the supervised transfer learning approaches are not quite optimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentations. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features which are aligned with source domain feature space. A domain critic module (DCM) is set up for discriminating the feature space of both domains. We optimize the DAM and DCM via an adversarial loss without using any target domain label. Our proposed method is validated by adapting a ConvNet trained with MRI images to unpaired CT data for cardiac structures segmentations, and achieved very promising results.",
"To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions, and poor standardization. The lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose a novel framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and clinically-relevant features are preserved via self-regularization. These domain-adapted synthetic-like images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We implement this approach on the notoriously difficult task of depth-estimation from monocular endoscopy which has a variety of applications in colonoscopy, robotic surgery, and invasive endoscopic procedures. We train a depth estimator on a large data set of synthetic images generated using an accurate forward model of an endoscope and an anatomically-realistic colon. Our analysis demonstrates that the structural similarity of endoscopy depth estimation in a real pig colon predicted from a network trained solely on synthetic data improved by 78.7 by using reverse domain adaptation.",
"Deep learning algorithms require large amounts of labeled data which is difficult to attain for medical imaging. Even if a particular dataset is accessible, a learned classifier struggles to maintain the same level of performance on a different medical imaging dataset from a new or never-seen data source domain. Utilizing generative adversarial networks in a semi-supervised learning architecture, we address both problems of labeled data scarcity and data domain overfitting. For cardiac abnormality classification in chest X-rays, we demonstrate that an order of magnitude less data is required with semi-supervised learning generative adversarial networks than with conventional supervised learning convolutional neural networks. In addition, we demonstrate its robustness across different datasets for similar classification tasks.",
"",
"Many biomedical image analysis applications require segmentation. Convolutional neural networks (CNN) have become a promising approach to segment biomedical images; however, the accuracy of these methods is highly dependent on the training data. We focus on biomedical image segmentation in the context where there is variation between source and target datasets and ground truth for the target dataset is very limited or non-existent. We use an adversarial based training approach to train CNNs to achieve good accuracy on the target domain. We use the DRIVE and STARE eye vasculture segmentation datasets and show that our approach can significantly improve results where we only use labels of one domain in training and test on the other domain. We also show improvements on membrane detection between MIC-CAI 2016 CREMI challenge and ISBI2013 EM segmentation challenge datasets.",
"Significant advances have been made towards building accurate automatic segmentation systems for a variety of biomedical applications using machine learning. However, the performance of these systems often degrades when they are applied on new data that differ from the training data, for example, due to variations in imaging protocols. Manually annotating new data for each test domain is not a feasible solution. In this work we investigate unsupervised domain adaptation using adversarial neural networks to train a segmentation method which is more robust to differences in the input data, and which does not require any annotations on the test domain. Specifically, we derive domain-invariant features by learning to counter an adversarial network, which attempts to classify the domain of the input data by observing the activations of the segmentation network. Furthermore, we propose a multi-connected domain discriminator for improved adversarial training. Our system is evaluated using two MR databases of subjects with traumatic brain injuries, acquired using different scanners and imaging protocols. Using our unsupervised approach, we obtain segmentation accuracies which are close to the upper bound of supervised domain adaptation."
]
}
|
1811.05850
|
2900627125
|
Overfitting frequently occurs in deep learning. In this paper, we propose a novel regularization method called Drop-Activation to reduce overfitting and improve generalization. The key idea is to drop nonlinear activation functions by setting them to be identity functions randomly during training time. During testing, we use a deterministic network with a new activation function to encode the average effect of dropping activations randomly. Experimental results on CIFAR-10, CIFAR-100, SVHN, and EMNIST show that Drop-Activation generally improves the performance of popular neural network architectures. Furthermore, unlike dropout, as a regularizer Drop-Activation can be used in harmony with standard training and regularization techniques such as Batch Normalization and AutoAug. Our theoretical analyses support the regularization effect of Drop-Activation as implicit parameter reduction and its capability to be used together with Batch Normalization.
|
Various regularization methods have been proposed to reduce the risk of overfitting. Data augmentation achieves regularization by directly enlarging the original training dataset via randomly transforming the input images @cite_14 @cite_0 @cite_8 @cite_29 or output labels @cite_3 @cite_31 . Another class of methods regularizes the network by adding randomness to various neural network structures such as nodes @cite_25 , connections @cite_4 , pooling layers @cite_23 , activations @cite_19 and residual blocks @cite_10 @cite_1 @cite_28 . In particular, @cite_25 @cite_8 @cite_1 @cite_28 @cite_9 add randomness by dropping some structures of neural networks during training. We focus on reviewing this class of methods, as they are most relevant to our method, where the nonlinear activation functions are discarded randomly.
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_31",
"@cite_10",
"@cite_25"
],
"mid": [
"",
"4919037",
"",
"",
"",
"",
"",
"2765407302",
"",
"1921523184",
"1907282891",
"",
"",
"2095705004"
],
"abstract": [
"",
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"",
"",
"",
"",
"",
"Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.",
"",
"In this paper we investigate the performance of different types of rectified activation functions in convolutional neural network: standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear units (RReLU). We evaluate these activation function on standard image classification task. Our experiments suggest that incorporating a non-zero slope for negative part in rectified activation units could consistently improve the results. Thus our findings are negative on the common belief that sparsity is the key of good performance in ReLU. Moreover, on small scale dataset, using deterministic negative slope or learning it are both prone to overfitting. They are not as effective as using their randomized counterpart. By using RReLU, we achieved 75.68 accuracy on CIFAR-100 test set without multiple test or ensemble.",
"We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.",
"",
"",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets."
]
}
|
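The dropping-based regularizers reviewed in the record above all share the mechanics of classic dropout: zero out random units during training, then use a single deterministic network at test time. A minimal inverted-dropout sketch in NumPy (function name illustrative):

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, each unit is zeroed with
    probability p and survivors are scaled by 1/(1-p), so the expected
    activation matches test time; at test time x passes through
    unchanged (the single 'unthinned' network from the abstract above)."""
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
x = np.ones(10000)
y = dropout(x, p=0.5, rng=rng)
print(abs(y.mean() - 1.0) < 0.05)  # expectation is preserved under dropping
print(np.array_equal(dropout(x, training=False), x))  # identity at test time
```

Variants such as DropConnect, stochastic depth, and Drop-Activation apply the same drop-then-rescale-or-average idea to weights, whole residual blocks, and activation functions respectively.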
1811.05850
|
2900627125
|
Overfitting frequently occurs in deep learning. In this paper, we propose a novel regularization method called Drop-Activation to reduce overfitting and improve generalization. The key idea is to drop nonlinear activation functions by setting them to be identity functions randomly during training time. During testing, we use a deterministic network with a new activation function to encode the average effect of dropping activations randomly. Experimental results on CIFAR-10, CIFAR-100, SVHN, and EMNIST show that Drop-Activation generally improves the performance of popular neural network architectures. Furthermore, unlike dropout, as a regularizer Drop-Activation can be used in harmony with standard training and regularization techniques such as Batch Normalization and AutoAug. Our theoretical analyses support the regularization effect of Drop-Activation as implicit parameter reduction and its capability to be used together with Batch Normalization.
|
Dropout @cite_25 drops nodes along with their connections with some fixed probability during training. DropConnect @cite_4 has a similar idea but masks out some weights randomly. @cite_11 improves the performance of ResNet @cite_21 by dropping entire residual blocks at random during training and passing inputs through the skip connections (identity mappings). The randomness of dropping entire blocks allows training a shallower network in expectation. This idea is also used in @cite_32 when training the ResNeXt @cite_24 type 2-residual-branch network. The idea of dropping also arises in data augmentation. Cutout @cite_20 randomly cuts out a square region of training images. In other words, it drops the input nodes in a patch-wise fashion, which prevents the neural network model from putting too much emphasis on a specific region of features.
|
{
"cite_N": [
"@cite_4",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"4919037",
"2949650786",
"",
"2549139847",
"2095705004",
"2746314669",
"2331143823"
],
"abstract": [
"We introduce DropConnect, a generalization of Dropout (, 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations are set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call cardinality (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at this https URL",
"Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10)."
]
}
|
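As a concrete illustration of the Cutout technique described in the abstracts above (randomly masking out a square region of the input during training), here is a minimal NumPy sketch; the function and parameter names are illustrative, not taken from the paper's released code:

```python
import numpy as np

def cutout(image, mask_size, rng=None):
    """Apply Cutout: zero out one randomly placed square region.

    `image` is an H x W (or H x W x C) array; `mask_size` is the side
    length of the square mask. The mask centre is sampled uniformly over
    the image, so the masked region is clipped when it overlaps a border
    (as in the original method).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    cy = int(rng.integers(0, h))  # mask centre row
    cx = int(rng.integers(0, w))  # mask centre column
    y0, y1 = max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)
    x0, x1 = max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)
    out = image.copy()
    out[y0:y1, x0:x1] = 0.0  # zero-mask the square region
    return out
```

In practice this would be applied per sample inside a data-augmentation pipeline, after any normalization, so the masked pixels are exactly zero in the network's input space.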
1811.05905
|
2901957411
|
Recently, Autonomous Vehicles (AVs) have gained extensive attention from both academia and industry. AVs are complex systems composed of many subsystems, making them a typical target for attackers. Therefore, the firmware of the different subsystems needs to be updated to the latest version by the manufacturer to fix bugs and introduce new features, e.g., using security patches. In this paper, we propose a distributed firmware update scheme for the AVs' subsystems, leveraging blockchain and smart contract technology. A consortium blockchain made of different AV manufacturers is used to ensure the authenticity and integrity of firmware updates. Instead of depending on centralized third parties to distribute the new updates, we enable AVs, namely distributors, to participate in the distribution process, and we take advantage of their mobility to guarantee high availability and fast delivery of the updates. To incentivize AVs to distribute the updates, a reward system is established that maintains a credit reputation for each distributor account in the blockchain. A zero-knowledge proof protocol is used to exchange the update in return for a proof of distribution in a trust-less environment. Moreover, we use an attribute-based encryption (ABE) scheme to ensure that only authorized AVs will be able to download and use a new update. Our analysis indicates that the additional cryptography primitives and exchanged transactions do not affect the operation of the AV network. Also, our security analysis demonstrates that our scheme is efficient and secure against different attacks.
|
In the literature, the security of firmware updates has been discussed in several contexts, including wireless sensor networks @cite_4 @cite_1 , IoT @cite_2 @cite_17 , and vehicular networks @cite_7 . The existing works can be classified as either centralized (client-server model) or decentralized. In the following, we review some of the existing solutions in both classes.
|
{
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_2",
"@cite_17"
],
"mid": [
"2563665707",
"",
"2129951290",
"2519293460",
"2949771573"
],
"abstract": [
"Code dissemination is a main component of reprogramming which enables over-the-air software update in wireless sensor networks (WSNs). In this paper, we present an adaptive code dissemination based on link quality (ACODI), which aims to minimize energy consumption. Compared to prior works on code dissemination, ACODI has a variety of notable features. First, it dynamically adapts the payload size in terms of energy efficiency. Second, it provides very low-overhead link estimation method. Finally, it is gracefully integrated into Deluge, which is the de facto standard code dissemination protocol in WSNs, and implemented on the TinyOS platform with very small overhead in terms of computation and memory. Our experiments using TelosB motes in the indoor testbed show that ACODI outperforms Deluge-22 and Deluge-108, which are Deluge with a fixed payload size of 22 and 108 bytes, respectively, in terms of energy efficiency and completion time.",
"",
"A number of multi-hop, wireless, network programming systems have emerged for sensor network retasking but none of these systems support a cryptographically-strong, public-key-based system for source authentication and integrity verification. The traditional technique for authenticating a program binary, namely a digital signature of the program hash, is poorly suited to resource-constrained sensor nodes. Our solution to the secure programming problem leverages authenticated streams, is consistent with the limited resources of a typical sensor node, and can be used to secure existing network programming systems. Under our scheme, a program binary consists of several code and data segments that are mapped to a series of messages for transmission over the network. An advertisement, consisting of the program name, version number, and a hash of the very first message, is digitally signed and transmitted first. The advertisement authenticates the first message, which in turn contains a hash of the second message. Similarly, the second message contains a hash of the third message, and so on, binding each message to the one logically preceding it in the series through the hash chain. We augmented the Deluge network programming system with our protocol and evaluated the resulting system performance.",
"Embedded devices are going to be used extensively in Internet of Things (IoT) environments. The small and tiny IoT devices will operate and communicate with each other without involvement of users, while their operations must be correct and protected against various attacks. In this paper, we focus on a secure firmware update issue, which is a fundamental security challenge for the embedded devices in an IoT environment. A new firmware update scheme that utilizes a blockchain technology is proposed to securely check a firmware version, validate the correctness of firmware, and download the latest firmware for the embedded devices. In the proposed scheme, an embedded device requests its firmware update to nodes in a blockchain network and gets a response to determine whether its firmware is up-to-date or not. If not latest, the embedded device downloads the latest firmware from a peer-to-peer firmware sharing network of the nodes. Even in the case that the version of the firmware is up-to-date, its integrity, i.e., correctness of firmware, is checked. The proposed scheme guarantees that the embedded device's firmware is up-to-date while not tampered. Attacks targeting known vulnerabilities on firmware of embedded devices are thus mitigated.",
"The prevalence of IoT devices makes them an ideal target for attackers. To reduce the risk of attacks vendors routinely deliver security updates (patches) for their devices. The delivery of security updates becomes challenging due to the issue of scalability as the number of devices may grow much quicker than vendors' distribution systems. Previous studies have suggested a permissionless and decentralized blockchain-based network in which nodes can host and deliver security updates, thus the addition of new nodes scales out the network. However, these studies do not provide an incentive for nodes to join the network, making it unlikely for nodes to freely contribute their hosting space, bandwidth, and computation resources. In this paper, we propose a novel decentralized IoT software update delivery network in which participating nodes (referred to as distributors) are compensated by vendors with digital currency for delivering updates to devices. Upon the release of a new security update, a vendor will make a commitment to provide digital currency to distributors that deliver the update; the commitment will be made with the use of smart contracts, and hence will be public, binding, and irreversible. The smart contract promises compensation to any distributor that provides proof-of-distribution, which is unforgeable proof that a single update was delivered to a single device. A distributor acquires the proof-of-distribution by exchanging a security update for a device signature using the Zero-Knowledge Contingent Payment (ZKCP) trustless data exchange protocol. Eliminating the need for trust between the security update distributor and the security consumer (IoT device) by providing fair compensation, can significantly increase the number of distributors, thus facilitating rapid scale out."
]
}
|
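The @cite_1 abstract above describes a reverse hash chain: each message carries the hash of the next one, so a single signed advertisement (the hash of the first message) transitively authenticates the whole firmware stream. A minimal Python sketch of that idea (names are illustrative, not from the Deluge extension itself):

```python
import hashlib

def build_chain(segments):
    """Link firmware segments into a reverse hash chain.

    Each transmitted message is `segment || H(next message)`, so
    verifying the first message authenticates every later one.
    Returns the messages and the hash of the first message (the
    value a digitally signed advertisement would carry).
    """
    next_hash = b""  # the last message links to nothing
    messages = []
    for seg in reversed(segments):
        msg = seg + next_hash
        next_hash = hashlib.sha256(msg).digest()
        messages.append(msg)
    messages.reverse()
    return messages, next_hash  # next_hash == H(first message)

def verify_stream(messages, advertised_hash):
    """Check each message against the hash carried by its predecessor."""
    expected = advertised_hash
    for msg in messages:
        if hashlib.sha256(msg).digest() != expected:
            return False
        expected = msg[-32:]  # trailing 32 bytes = H(next message)
    return True
```

The appeal of this construction for constrained nodes is that only the advertisement needs an expensive public-key signature check; every subsequent message is authenticated by one cheap hash comparison.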
1811.05905
|
2901957411
|
Recently, Autonomous Vehicles (AVs) have gained extensive attention from both academia and industry. AVs are complex systems composed of many subsystems, making them a typical target for attackers. Therefore, the firmware of the different subsystems needs to be updated to the latest version by the manufacturer to fix bugs and introduce new features, e.g., using security patches. In this paper, we propose a distributed firmware update scheme for the AVs' subsystems, leveraging blockchain and smart contract technology. A consortium blockchain made of different AV manufacturers is used to ensure the authenticity and integrity of firmware updates. Instead of depending on centralized third parties to distribute the new updates, we enable AVs, namely distributors, to participate in the distribution process, and we take advantage of their mobility to guarantee high availability and fast delivery of the updates. To incentivize AVs to distribute the updates, a reward system is established that maintains a credit reputation for each distributor account in the blockchain. A zero-knowledge proof protocol is used to exchange the update in return for a proof of distribution in a trust-less environment. Moreover, we use an attribute-based encryption (ABE) scheme to ensure that only authorized AVs will be able to download and use a new update. Our analysis indicates that the additional cryptography primitives and exchanged transactions do not affect the operation of the AV network. Also, our security analysis demonstrates that our scheme is efficient and secure against different attacks.
|
In sensor networks, several schemes, such as @cite_4 and @cite_1 , have been proposed to improve the reliability of delivering new updates and security patches by ensuring their integrity. However, these schemes depend on a single entity to manage the distribution of firmware updates and do not scale to large networks.
|
{
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2129951290",
"2563665707"
],
"abstract": [
"A number of multi-hop, wireless, network programming systems have emerged for sensor network retasking but none of these systems support a cryptographically-strong, public-key-based system for source authentication and integrity verification. The traditional technique for authenticating a program binary, namely a digital signature of the program hash, is poorly suited to resource-constrained sensor nodes. Our solution to the secure programming problem leverages authenticated streams, is consistent with the limited resources of a typical sensor node, and can be used to secure existing network programming systems. Under our scheme, a program binary consists of several code and data segments that are mapped to a series of messages for transmission over the network. An advertisement, consisting of the program name, version number, and a hash of the very first message, is digitally signed and transmitted first. The advertisement authenticates the first message, which in turn contains a hash of the second message. Similarly, the second message contains a hash of the third message, and so on, binding each message to the one logically preceding it in the series through the hash chain. We augmented the Deluge network programming system with our protocol and evaluated the resulting system performance.",
"Code dissemination is a main component of reprogramming which enables over-the-air software update in wireless sensor networks (WSNs). In this paper, we present an adaptive code dissemination based on link quality (ACODI), which aims to minimize energy consumption. Compared to prior works on code dissemination, ACODI has a variety of notable features. First, it dynamically adapts the payload size in terms of energy efficiency. Second, it provides very low-overhead link estimation method. Finally, it is gracefully integrated into Deluge, which is the de facto standard code dissemination protocol in WSNs, and implemented on the TinyOS platform with very small overhead in terms of computation and memory. Our experiments using TelosB motes in the indoor testbed show that ACODI outperforms Deluge-22 and Deluge-108, which are Deluge with a fixed payload size of 22 and 108 bytes, respectively, in terms of energy efficiency and completion time."
]
}
|
1811.05905
|
2901957411
|
Recently, Autonomous Vehicles (AVs) have gained extensive attention from both academia and industry. AVs are complex systems composed of many subsystems, making them a typical target for attackers. Therefore, the firmware of the different subsystems needs to be updated to the latest version by the manufacturer to fix bugs and introduce new features, e.g., using security patches. In this paper, we propose a distributed firmware update scheme for the AVs' subsystems, leveraging blockchain and smart contract technology. A consortium blockchain made of different AV manufacturers is used to ensure the authenticity and integrity of firmware updates. Instead of depending on centralized third parties to distribute the new updates, we enable AVs, namely distributors, to participate in the distribution process, and we take advantage of their mobility to guarantee high availability and fast delivery of the updates. To incentivize AVs to distribute the updates, a reward system is established that maintains a credit reputation for each distributor account in the blockchain. A zero-knowledge proof protocol is used to exchange the update in return for a proof of distribution in a trust-less environment. Moreover, we use an attribute-based encryption (ABE) scheme to ensure that only authorized AVs will be able to download and use a new update. Our analysis indicates that the additional cryptography primitives and exchanged transactions do not affect the operation of the AV network. Also, our security analysis demonstrates that our scheme is efficient and secure against different attacks.
|
In @cite_2 , the authors proposed a decentralized solution based on a permissionless blockchain to ensure the integrity of updates by having multiple verification nodes instead of depending on a private, centralized vendor network. For the distribution of updates, a peer-to-peer file-sharing network such as BitTorrent is proposed to ensure the integrity and version traceability of updates. However, the scheme does not provide any incentive for devices to participate and distribute firmware updates to others.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2519293460"
],
"abstract": [
"Embedded devices are going to be used extensively in Internet of Things (IoT) environments. The small and tiny IoT devices will operate and communicate with each other without involvement of users, while their operations must be correct and protected against various attacks. In this paper, we focus on a secure firmware update issue, which is a fundamental security challenge for the embedded devices in an IoT environment. A new firmware update scheme that utilizes a blockchain technology is proposed to securely check a firmware version, validate the correctness of firmware, and download the latest firmware for the embedded devices. In the proposed scheme, an embedded device requests its firmware update to nodes in a blockchain network and gets a response to determine whether its firmware is up-to-date or not. If not latest, the embedded device downloads the latest firmware from a peer-to-peer firmware sharing network of the nodes. Even in the case that the version of the firmware is up-to-date, its integrity, i.e., correctness of firmware, is checked. The proposed scheme guarantees that the embedded device's firmware is up-to-date while not tampered. Attacks targeting known vulnerabilities on firmware of embedded devices are thus mitigated."
]
}
|
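The update-verification flow in the @cite_2 abstract above (check the installed version against the ledger, then validate the downloaded image's hash) can be sketched as follows; the ledger-entry fields here are illustrative assumptions, not the paper's actual data model:

```python
import hashlib

def needs_update(local_version, ledger_entry):
    """True when the ledger records a newer firmware version than the
    one currently installed (versions as (major, minor, patch) tuples)."""
    return tuple(local_version) < tuple(ledger_entry["version"])

def verify_download(blob, ledger_entry):
    """Validate a downloaded firmware image against the SHA-256 digest
    that the blockchain's verification nodes agreed on for that version."""
    return hashlib.sha256(blob).hexdigest() == ledger_entry["sha256"]
```

The key point of the decentralized design is that the digest in `ledger_entry` is confirmed by multiple verification nodes rather than a single vendor server, so a device downloading from an untrusted peer-to-peer swarm can still detect a tampered image locally.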
1811.05467
|
2901616036
|
Paper presented at the NIPS 2018 Workshop on Machine Learning for the Developing World, December 2018, Montreal, Canada
|
Kato used statistical phrase-based translation, based on Moses, to perform English-to-Setswana translation @cite_4 . They achieved a BLEU score of 32.71 on a dataset that is not publicly available and so was excluded from the comparison. Wilken used a similar technique to @cite_4 , but focused on linguistically motivated pre- and post-processing of the corpus to improve translation with phrase-based techniques @cite_8 . Wilken's system was trained on the same Autshumato dataset used in this paper, and also used an additional monolingual dataset for language modelling.
|
{
"cite_N": [
"@cite_4",
"@cite_8"
],
"mid": [
"1487541296",
"2763530858"
],
"abstract": [
"Statistical machine translation techniques offer great promise for the development of automatic translation systems. However, the realization of this potential requires the availability of significant amounts of parallel bilingual texts. This paper reports on an attempt to reduce the amount of text that is required to obtain an acceptable translation system, through the use of active and semi- supervised learning. Systems were built using resources collected from South African government websites and the results evaluated using a standard automatic evaluation metric (BLEU). We show that significant improvements in translation quality can be achieved with very limited parallel corpora, and that both active learning and semi-supervised learning are useful in this context.",
"In this paper, I draw on data collected from a research project carried out in four different regions of Kenya between 2006 and 2007, to provide and discuss results that examine the language perceptions that Kenyan youths in selected rural and urban areas exhibit. I also examine how such perceptions are reflected in their daily language practices and how they help in constructing their language identities. This paper utilizes both quantitative and qualitative sociolinguistic methods of data collection that include the use of questionnaires, informal interviews and participant observations to capture and characterize language practices of the target population. Using these strategies I examine the discrepancy between speakers’ stated perceptions and their actual language performances and how this mismatch helps explain the meaning of the different identities speakers perform, enact or project. In the discussion, I underscore the importance of underlying met alinguistic phenomena responsible for the ambivalence evident among the youths that were studied. This is made possible through an eclectic sociolinguistic approach that examines different language ideologies at play. This approach and the questions raised in this paper ultimately contribute to a better understanding of the larger question of the state of language maintenance and shift in Kenya and the ever dynamic identities of youth language, which could be replicated in other parts of Sub-Saharan Africa."
]
}
|