Dataset columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1604.04994
2341977053
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.
Until recently, most image retrieval approaches were based on local features (with SIFT being a typical example) and feature aggregation strategies on top of these local features. Vector of Locally Aggregated Descriptors (VLAD) @cite_14 and Fisher Vector (FV) @cite_16 are two typical feature aggregation strategies. After the success of CNNs @cite_42, image retrieval also embraced deep learning. Out-of-the-box features from pre-trained deep networks were shown to achieve state-of-the-art results in many vision-related tasks, including image retrieval @cite_2.
{ "cite_N": [ "@cite_14", "@cite_42", "@cite_16", "@cite_2" ], "mid": [ "2012592962", "", "1966385142", "2953391683" ], "abstract": [ "We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.", "", "A standard approach to describe an image for classification and retrieval purposes is to extract a set of local patch descriptors, encode them into a high dimensional vector and pool them into an image-level signature. The most common patch encoding strategy consists in quantizing the local descriptors into a finite set of prototypical elements. This leads to the popular Bag-of-Visual words representation. In this work, we propose to use the Fisher Kernel framework as an alternative patch encoding strategy: we describe patches by their deviation from an \"universal\" generative Gaussian mixture model. This representation, which we call Fisher vector has many advantages: it is efficient to compute, it leads to excellent results even with efficient linear classifiers, and it can be compressed with a minimal loss of accuracy using product quantization. We report experimental results on five standard datasets--PASCAL VOC 2007, Caltech 256, SUN 397, ILSVRC 2010 and ImageNet10K--with up to 9M images and 10K classes, showing that the FV framework is a state-of-the-art patch encoding technique.", "Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. 
The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks." ] }
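As a concrete illustration of the aggregation strategies named in the related-work passage above, the following is a minimal NumPy sketch of VLAD-style aggregation @cite_14. The function name, normalisation choices, and codebook handling are illustrative, not taken from any cited implementation; FV @cite_16 replaces the hard assignment and residual sum below with soft assignments under a Gaussian mixture model.

```python
import numpy as np

def vlad(descriptors, codebook):
    """Aggregate local descriptors (N x D) against a codebook (K x D)
    into a single K*D VLAD vector by summing residuals to the nearest
    codebook centre."""
    dists = np.linalg.norm(
        descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assignments = np.argmin(dists, axis=1)       # nearest centre per descriptor
    K, D = codebook.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = descriptors[assignments == k]
        if len(members):
            v[k] = (members - codebook[k]).sum(axis=0)   # residual sum
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))          # power normalisation
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```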
1604.04994
2341977053
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.
Additionally, several variants of image retrieval have been studied in the past few years, e.g., multi-label image retrieval @cite_34, sketch-based image retrieval @cite_28, and medical CT image retrieval @cite_39. In this paper, we focus on the novel and challenging fine-grained image retrieval task.
{ "cite_N": [ "@cite_28", "@cite_34", "@cite_39" ], "mid": [ "2183681795", "2296447001", "1961043134" ], "abstract": [ "A sketch-based image retrieval often needs to optimize the tradeoff between efficiency and precision. Index structures are typically applied to large-scale databases to realize efficient retrievals. However, the performance can be affected by quantization errors. Moreover, the ambiguousness of user-provided examples may also degrade the performance, when compared with traditional image retrieval methods. Sketch-based image retrieval systems that preserve the index structure are challenging. In this paper, we propose an effective sketch-based image retrieval approach with re-ranking and relevance feedback schemes. Our approach makes full use of the semantics in query sketches and the top ranked images of the initial results. We also apply relevance feedback to find more relevant images for the input query sketch. The integration of the two schemes results in mutual benefits and improves the performance of the sketch-based image retrieval.", "Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods.", "A new image feature description based on the local wavelet pattern (LWP) is proposed in this paper to characterize the medical computer tomography (CT) images for content-based CT image retrieval. In the proposed work, the LWP is derived for each pixel of the CT image by utilizing the relationship of center pixel with the local neighboring information. In contrast to the local binary pattern that only considers the relationship between a center pixel and its neighboring pixels, the presented approach first utilizes the relationship among the neighboring pixels using local wavelet decomposition, and finally considers its relationship with the center pixel. A center pixel transformation scheme is introduced to match the range of center value with the range of local wavelet decomposed values. Moreover, the introduced local wavelet decomposition scheme is centrally symmetric and suitable for CT images. 
The novelty of this paper lies in the following two ways: 1) encoding local neighboring information with local wavelet decomposition and 2) computing LWP using local wavelet decomposed values and transformed center pixel values. We tested the performance of our method over three CT image databases in terms of the precision and recall. We also compared the proposed LWP descriptor with the other state-of-the-art local image descriptors, and the experimental results suggest that the proposed method outperforms other methods for CT image retrieval." ] }
1604.04994
2341977053
Deep convolutional neural network models pre-trained for the ImageNet classification task have been successfully adopted to tasks in other domains, such as texture description and object proposal generation, but these tasks require annotations for images in the new domain. In this paper, we focus on a novel and challenging task in the pure unsupervised setting: fine-grained image retrieval. Even with image labels, fine-grained images are difficult to classify, let alone the unsupervised retrieval task. We propose the Selective Convolutional Descriptor Aggregation (SCDA) method. SCDA firstly localizes the main object in fine-grained images, a step that discards the noisy background and keeps useful deep descriptors. The selected descriptors are then aggregated and dimensionality reduced into a short feature vector using the best practices we found. SCDA is unsupervised, using no image label or bounding box annotation. Experiments on six fine-grained datasets confirm the effectiveness of SCDA for fine-grained image retrieval. Besides, visualization of the SCDA features shows that they correspond to visual attributes (even subtle ones), which might explain SCDA's high mean average precision in fine-grained retrieval. Moreover, on general image retrieval datasets, SCDA achieves comparable retrieval results with state-of-the-art general image retrieval approaches.
All previous fine-grained methods needed image-level labels (some even needed part annotations) to train their deep networks. Few works have touched on the retrieval of fine-grained images. @cite_5 proposed Deep Ranking to learn the similarity between fine-grained images. However, it requires image-level labels to build a set of triplets, so it is not unsupervised and cannot scale well to large-scale image retrieval tasks.
{ "cite_N": [ "@cite_5" ], "mid": [ "1975517671" ], "abstract": [ "Learning fine-grained image similarity is a challenging task. It needs to capture between-class and within-class image differences. This paper proposes a deep ranking model that employs deep learning techniques to learn similarity metric directly from images. It has higher learning capability than models based on hand-crafted features. A novel multiscale network structure has been developed to describe the images effectively. An efficient triplet sampling algorithm is also proposed to learn the model with distributed asynchronized stochastic gradient. Extensive experiments show that the proposed algorithm outperforms models based on hand-crafted visual features and deep classification models." ] }
1604.05107
2338293612
In current Data Center Networks (DCNs), Equal-Cost MultiPath (ECMP) is used as the de facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows depending on their duration (lifetime). This issue causes hot-spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution which detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas flows of small duration are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently help to load-balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual usage of either RPS or ECMP, while the overall throughput achieved is maintained at a level comparable to RPS.
The general classification of different requirements and solutions for short and long flows in DCNs has been discussed in @cite_3. Here, we provide an overview of solutions related specifically to multipath forwarding schemes in DCNs.
{ "cite_N": [ "@cite_3" ], "mid": [ "2064941967" ], "abstract": [ "In this paper, we survey different existing schemes for the transmission of flows in Data Center Networks (DCNs). The transport of flows in DCNs must cope with the bandwidth demands of the traffic that a large number of data center applications generates and achieve high utilization of the data center infrastructure to make the data center financially viable. Traffic in DCNs roughly comprises short flows, which are generated by the Partition Aggregate model adopted by several applications and have sizes of a few kilobytes, and long flows, which are data for the operation and maintenance of the data center and have sizes on the order of megabytes. Short flows must be transmitted (or completed) as soon as possible or within a deadline, and long flows must be serviced with a minimum acceptable throughput. The coexistence of short and long flows may jeopardize achieving both performance objectives simultaneously. This challenge has motivated growing research on schemes for managing the transmission of flows in DCNs. We describe several recent schemes aimed at reducing the flow completion time in DCNs. We also present a summary of existing solutions for the incast traffic phenomenon. We provide a comparison and classification of the surveyed schemes, describe their advantages and disadvantages, and show the different trends for scheme design. For completeness, we describe some DCN architectures, discuss the traffic patterns of DCNs, and discuss why some existing versions of transport protocols may not be usable in DCNs. At the end, we discuss some of the identified research challenges." ] }
1604.05107
2338293612
In current Data Center Networks (DCNs), Equal-Cost MultiPath (ECMP) is used as the de facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows depending on their duration (lifetime). This issue causes hot-spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution which detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas flows of small duration are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently help to load-balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual usage of either RPS or ECMP, while the overall throughput achieved is maintained at a level comparable to RPS.
One of the first solutions proposed to address the long-flow problem was Hedera @cite_5. It uses centralized flow scheduling, with OpenFlow switches, to relocate long flows for load balancing. Because of its reliance on a central controller, the algorithm takes some time to reallocate flows, which was shown to make Hedera slow to react to dynamic changes in traffic patterns.
{ "cite_N": [ "@cite_5" ], "mid": [ "1698388015" ], "abstract": [ "Today's data centers offer tremendous aggregate bandwidth to clusters of tens of thousands of machines. However, because of limited port densities in even the highest-end switches, data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing IP multipathing protocols usually rely on per-flow static hashing and can cause substantial bandwidth losses due to long-term collisions. In this paper, we present Hedera, a scalable, dynamic flow scheduling system that adaptively schedules a multi-stage switching fabric to efficiently utilize aggregate network resources. We describe our implementation using commodity switches and unmodified hosts, and show that for a simulated 8,192 host data center, Hedera delivers bisection bandwidth that is 96 of optimal and up to 113 better than static load-balancing methods." ] }
1604.05107
2338293612
In current Data Center Networks (DCNs), Equal-Cost MultiPath (ECMP) is used as the de facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows depending on their duration (lifetime). This issue causes hot-spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution which detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas flows of small duration are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently help to load-balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual usage of either RPS or ECMP, while the overall throughput achieved is maintained at a level comparable to RPS.
Our solution was motivated by TinyFlow @cite_2 and RPS @cite_10 @cite_1. The former (TinyFlow) splits long flows into short flows and forwards them randomly by making use of ECMP. In order to detect long flows, OpenFlow switches perform sampling periodically. When two samples of the same flow are detected, the dynamic random re-routing algorithm changes the egress port of the switch. The latter method, RPS, uses a random packet-spraying technique to forward packets through multiple shortest paths. Unlike ECMP, RPS forwards each packet individually ("spraying") to the egress ports of the DCN switches. (Although this feature is not enabled by default, commodity switches can perform it.) The main drawback of this method is the out-of-order problem for TCP. As the authors show, however, under the assumption of symmetric fat-tree topologies and traffic patterns, this problem can be minimized and even neglected. Also, a custom queue-management scheme can be applied to minimize the differences in path latencies.
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_2" ], "mid": [ "", "2132320636", "1990837243" ], "abstract": [ "", "Modern data center networks are commonly organized in multi-rooted tree topologies. They typically rely on equal-cost multipath to split flows across multiple paths, which can lead to significant load imbalance. Splitting individual flows can provide better load balance, but is not preferred because of potential packet reordering that conventional wisdom suggests may negatively interact with TCP congestion control. In this paper, we revisit this “myth” in the context of data center networks which have regular topologies such as multi-rooted trees. We argue that due to symmetry, the multiple equal-cost paths between two hosts are composed of links that exhibit similar queuing properties. As a result, TCP is able to tolerate the induced packet reordering and maintain a single estimate of RTT. We validate the efficacy of random packet spraying (RPS) using a data center testbed comprising real hardware switches. We also reveal the adverse impact on the performance of RPS when the symmetry is disturbed (e.g., during link failures) and suggest solutions to mitigate this effect.", "In this paper, we evaluate the performance of per-packet load balancing in data center networks (DCNs). Throughput and flow completion time are considered among the main metrics to evaluate the performance of the transport of flows over the presence of long flows in a DCN. Load balancing in a DCN may benefit those performance metrics but also it may generate out-of-order packet delivery. We investigate the impact of out-of-order packet delivery on the throughput and flow completion time of long and short flows, respectively, in a DCN. Our simulations confirm the presence of out-of-order packet delivery in a DCN using per-packet load balancing. Simulation results also reveal that per-packet load balancing may yield smaller average flow completion time for short flows and larger average throughput for long flows than the single-path transport model used by TCP despite the presence of out-of-order packet delivery. As the delay difference between alternative paths decreases, the occurrence of out-of-order packet delivery in per-packet load balancing also decreases. Therefore, under the studied scenarios, the benefits of the per-packet load balancing prevail." ] }
1604.05107
2338293612
In current Data Center Networks (DCNs), Equal-Cost MultiPath (ECMP) is used as the de facto routing protocol. However, ECMP does not differentiate between short and long flows, the two main categories of flows depending on their duration (lifetime). This issue causes hot-spots in the network, negatively affecting the Flow Completion Time (FCT) and the throughput, the two key performance metrics in data center networks. Previous work on load balancing proposed solutions such as splitting long flows into short flows, using per-packet forwarding approaches, and isolating the paths of short and long flows. We propose DiffFlow, a new load balancing solution which detects long flows and forwards their packets using Random Packet Spraying (RPS) with the help of SDN, whereas flows of small duration are forwarded with ECMP by default. The use of ECMP for short flows is reasonable, as it does not create the out-of-order problem; at the same time, RPS for long flows can efficiently help to load-balance the entire network, given that long flows represent most of the traffic in DCNs. The results show that our DiffFlow solution outperforms the individual usage of either RPS or ECMP, while the overall throughput achieved is maintained at a level comparable to RPS.
In spite of the ideas presented in these works, current data centers are slow to adopt new schemes, especially when they require modifying legacy network elements or transport protocols. For that reason, our focus is on a solution deploying SDN, which is well adopted in data centers. In our proposal, OpenFlow switches are essential to differentiate the flows. In our model, we use the packet-sampling technique, also widely adopted in practice, for example in sFlow @cite_0. The main advantage of this method compared to others is that packet sampling is realized only locally in the ToR OpenFlow switches. Therefore, the SDN controller is not responsible for this task, ensuring high scalability for larger networks and addressing an important concern about centralized SDN controllers. Packet sampling is, however, not the only detection technique that can be used, and our solution is open to other methods, for instance per-flow statistics as in Hedera @cite_5, or end-host-based monitoring as in Mahout @cite_7. On the other hand, it is important to underline that DiffFlow can coexist with the traditional TCP protocol or newer versions, since its procedure is transparent to upper-layer protocols.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_7" ], "mid": [ "2159262496", "1698388015", "" ], "abstract": [ "This memo defines InMon Coporation's sFlow system. sFlow is a technology for monitoring traffic in data networks containing switches and routers. In particular, it defines the sampling mechanisms implemented in an sFlow Agent for monitoring traffic, the sFlow MIB for controlling the sFlow Agent, and the format of sample data used by the sFlow Agent when forwarding data to a central data collector.", "Today's data centers offer tremendous aggregate bandwidth to clusters of tens of thousands of machines. However, because of limited port densities in even the highest-end switches, data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing IP multipathing protocols usually rely on per-flow static hashing and can cause substantial bandwidth losses due to long-term collisions. In this paper, we present Hedera, a scalable, dynamic flow scheduling system that adaptively schedules a multi-stage switching fabric to efficiently utilize aggregate network resources. We describe our implementation using commodity switches and unmodified hosts, and show that for a simulated 8,192 host data center, Hedera delivers bisection bandwidth that is 96 of optimal and up to 113 better than static load-balancing methods.", "" ] }
1604.05086
2342260026
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives. In this paper, we propose a dynamic normative system to enable reasoning about the changes of norms under different circumstances, which cannot be done in existing static normative systems. We study two important problems (norm synthesis and norm recognition) related to the autonomy of the entire system and the agents, and characterise the computational complexities of solving these problems.
Normative multiagent systems have attracted much research interest in recent years; see, e.g., @cite_16 @cite_9 for comprehensive reviews of the area. Here we can only review some closely related work.
{ "cite_N": [ "@cite_9", "@cite_16" ], "mid": [ "1541844269", "1912172167" ], "abstract": [ "A challenging problem currently addressed in the multi-agent systems area is the development of open systems; which are characterized by the heterogeneity of their participants and the dynamic features of both their participants and their environment. The main feature of agents in these systems is autonomy. It is this autonomy that requires regulation, and norms are a solution for this. Norms represent a tool for achieving coordination and cooperation among the members of a society. They have been employed in the field of Artificial Intelligence as a formal specification of deontic statements aimed at regulating the actions of software agents and the interactions among them. This article gives an overview of the most relevant works on norms for multi-agent systems. This review considers open multi-agent systems challenges and points out the main open questions that remain in norm representation, reasoning, creation, and implementation.", "This article introduces the research issues related to and definition of normative multiagent systems." ] }
1604.05086
2342260026
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives. In this paper, we propose a dynamic normative system to enable reasoning about the changes of norms under different circumstances, which cannot be done in existing static normative systems. We study two important problems (norm synthesis and norm recognition) related to the autonomy of the entire system and the agents, and characterise the computational complexities of solving these problems.
Norm synthesis for static normative systems. As stated, most current formalisms of normative systems are static. @cite_7 shows that this norm synthesis problem is NP-complete. @cite_3 proposes a norm synthesis algorithm in declarative planning domains for reachability objectives, and @cite_8 considers the on-line synthesis of norms. @cite_10 considers the norm synthesis problem by conditioning over agents' preferences, expressed as pairs of an LTL formula and a utility, together with a normative behaviour function.
{ "cite_N": [ "@cite_3", "@cite_10", "@cite_7", "@cite_8" ], "mid": [ "81435041", "2204084130", "1986112535", "1703978911" ], "abstract": [ "Norms and social laws are one of the key mechanisms used to facilitate coordination in multiagent systems. In existing approaches the process of designing useful norms has to either be performed by a human expert, or requires a full enumeration of the state space which is bound to cause tractability problems in non-trivial domains. In this paper we propose a novel automated synthesis procedure for prohibitive norms in planning-based domains that disallow access to a set of predefined undesirable states. Our method performs local search around declarative specifications of states using AI planning methods. Using this approach, norms can be synthesised in a generalised way over incomplete state specifications to improve the efficiency of the process in many practical cases, while producing concise, generalised, social norms that are applicable to entire sets of system states. We present an algorithm that utilises traditional planning techniques to ensure continued accessibility under the prohibitions introduced by norms. An analysis of the computational properties of our algorithm is presented together with a discussion of possible heuristic improvements.", "The environment is an essential component of multi-agent systems and is often used to coordinate the behaviour of individual agents. Recently many languages have been proposed to specify and implement multi-agent environments in terms of social and normative concepts. In this paper, we first introduce a formal setting of multi-agent environment which abstracts from concrete specification languages. We extend this formal setting with norms and sanctions and show how concepts from mechanism design can be used to formally analyse and verify whether specific normative behaviours can be enforced (or implemented) if agents follow their subjective preferences. We also consider complexity issues of associated problems.", "Abstract We are concerned with the utility of social laws in a computational environment, laws which guarantee the successful coexistence of multiple programs and programmers. In this paper we are interested in the off-line design of social laws, where we as designers must decide ahead of time on useful social laws. In the first part of this paper we suggest the use of social laws in the domain of mobile robots, and prove analytic results about the usefulness of this approach in that setting. In the second part of this paper we present a general model of social law in a computational system, and investigate some of its properties. This includes a definition of the basic computational problem involved with the design of multi-agent systems, and an investigation of the automatic synthesis of useful social laws in the framework of a model which refers explicitly to social laws.", "Normative systems (norms) have been widely proposed as a technique for coordinating multi-agent systems. The automated synthesis of norms for coordination remains an open and complex problem, which we tackle in this paper. We propose a novel mechanism called IRON (Intelligent Robust On-line Norm synthesis mechanism), for the on-line synthesis of norms. IRON aims to synthesise conflict-free norms without lapsing into over-regulation. Thus, IRON produces norms that characterise necessary conditions for coordination, without over-regulation. 
In addition to defining the norm synthesis problem formally, we empirically show that IRON is capable of synthesising norms that are effective even in the presence of non-compliance behaviours in a system." ] }
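To illustrate the norm synthesis problem discussed above, here is a brute-force sketch in which a normative system is a set of prohibited transitions and the objective is to make a set of bad states unreachable. The encoding is illustrative; the exponential enumeration reflects (but does not prove) the hardness result of @cite_7, and the example output shows how a synthesised norm can over-regulate, the concern addressed by @cite_8.

```python
from itertools import combinations

def synthesise_norm(transitions, init, bad, max_size=2):
    """Find a set of prohibited transitions (a norm) of size at most
    `max_size` that makes every state in `bad` unreachable from `init`."""

    def reachable(banned):
        seen, stack = {init}, [init]
        while stack:
            s = stack.pop()
            for a, b in transitions:
                if a == s and (a, b) not in banned and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return seen

    for k in range(max_size + 1):
        for banned in combinations(transitions, k):
            if not (reachable(set(banned)) & bad):
                return set(banned)       # the synthesised prohibition norm
    return None                          # no norm of this size exists

# Prohibiting ("start", "go") makes "crash" unreachable, but it also
# blocks "safe" -- the first norm found may over-regulate.
ts = [("start", "go"), ("go", "crash"), ("go", "safe")]
print(synthesise_norm(ts, "start", {"crash"}))
```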
1604.05086
2342260026
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives. In this paper, we propose a dynamic normative system to enable reasoning about the changes of norms under different circumstances, which cannot be done in existing static normative systems. We study two important problems (norm synthesis and norm recognition) related to the autonomy of the entire system and the agents, and characterise the computational complexities of solving these problems.
Changes of normative system. @cite_0 represents the norms as a set of atomic propositions and then employs a language to specify the update of norms. Although the updates are parameterised over actions, no consideration is given to investigating, by either verification or norm synthesis, whether the normative system can be imposed to coordinate agents' behaviour so as to secure the objectives of the system.
{ "cite_N": [ "@cite_0" ], "mid": [ "321389052" ], "abstract": [ "The use of normative systems is widely accepted as an effective approach to control and regulate the behaviour of agents in multi-agent systems. When norms are added to a normative system, the behaviour of such a system changes. As of yet, there is no clear formal methodology to model the dynamics of a normative system under addition of various types of norms. In this paper we view the addition of a norm as an update of a normative system, and we provide update semantics to model this process." ] }
1604.05086
2342260026
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives. In this paper, we propose a dynamic normative system to enable reasoning about the changes of norms under different circumstances, which cannot be done in existing static normative systems. We study two important problems (norm synthesis and norm recognition) related to the autonomy of the entire system and the agents, and characterise the computational complexities of solving these problems.
Norm recognition. Norm recognition is related to the norm learning problem, which employs various approaches, such as data mining @cite_18 and sampling and parsing @cite_5 @cite_15, for the agent to learn social norms by observing other agents' behaviour. On the other hand, our norm recognition problems are based on formal verification, aiming to decide whether the agents are designed well enough to recognise the current normative system from a set of possible ones. We also study the complexity of these problems.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_15" ], "mid": [ "2890487255", "2078761429", "2188532381" ], "abstract": [ "", "In normative multi-agent systems, the question of \"how an agent identifies norms in an open agent society\" has not received much attention. This paper aims at addressing this question. To this end, this paper proposes an architecture for norm identification for an agent. The architecture is based on observation of interactions between agents. This architecture enables an autonomous agent to identify prohibition norms in a society using the prohibition norm identification (PNI) algorithm. The PNI algorithm uses association rule mining, a data mining approach to identify sequences of events as candidate norms. When a norm changes, an agent using our architecture will be able to modify the norm and also remove a norm if it does not hold in the society. Using simulations of a park scenario we demonstrate how an agent makes use of the norm identification framework to identify prohibition norms.", "When entering a system, an agent should be aware of the obligations and prohibitions (collectively norms) that will aect it. Several solutions to this norm identication problem have been proposed, which make use of observations of either other’s norm compliant, or norm violating, behaviour. These solutions fail in situations where norms are typically violated, or complied with, respectively. In this paper we propose a Bayesian approach to norm identication which operates by learning from both norm compliant and norm violating behaviour. By utilising both types of behaviour, our work not only overcomes a major limitation of existing approaches, but also yields improved performance over the state-of-the-art. We evaluate its eectiveness empirically, showing, under certain conditions, high accuracy scores." ] }
1604.05086
2342260026
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives. In this paper, we propose a dynamic normative system to enable reasoning about the changes of norms under different circumstances, which cannot be done in existing static normative systems. We study two important problems (norm synthesis and norm recognition) related to the autonomy of the entire system and the agents, and characterise the computational complexities of solving these problems.
Application of social norms. Social norms regulate the behaviour of the stakeholders in a system, including sociotechnical systems @cite_17, which have both humans and computers. They are used to represent the commitments (established by, e.g., business contracts) between humans and organisations. The dynamic norms of this paper can be useful for modelling more realistic scenarios in which commitments may change as the environment changes.
{ "cite_N": [ "@cite_17" ], "mid": [ "2335915048" ], "abstract": [ "The overarching vision of social machines is to facilitate social processes by having computers provide administrative support. We conceive of a social machine as a sociotechnical system (STS): a software-supported system in which autonomous principals such as humans and organizations interact to exchange information and services. Existing approaches for social machines emphasize the technical aspects and inadequately support the meanings of social processes, leaving them informally realized in human interactions. We posit that a fundamental rethinking is needed to incorporate accountability, essential for addressing the openness of the Web and the autonomy of its principals. We introduce Interaction-Oriented Software Engineering (IOSE) as a paradigm expressly suited to capturing the social basis of STSs. Motivated by promoting openness and autonomy, IOSE focuses not on implementation but on social protocols, specifying how social relationships, characterizing the accountability of the concerned parties, progress as they interact. Motivated by providing computational support, IOSE adopts the accountability representation to capture the meaning of a social machine's states and transitions. We demonstrate IOSE via examples drawn from healthcare. We reinterpret the classical software engineering (SE) principles for the STS setting and show how IOSE is better suited than traditional software engineering for supporting social processes. The contribution of this paper is a new paradigm for STSs, evaluated via conceptual analysis." ] }
1604.04804
2338690345
Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing between the scalability of Software-as-a-Service and the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints, (ii) accurate resource prediction, (iii) efficient control of the number of cloud instances servicing workloads, in order to optimize between completing workloads in a timely fashion and reducing resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints, (ii) Kalman-filter estimates for resource prediction, and (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (famous as the resource management mechanism of the Transmission Control Protocol) for the control of the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide for more than 27% reduction in Amazon EC2 spot instance cost against methods based on reactive resource prediction and 38% to 60% reduction of the billing cost against the current state-of-the-art in CaaS platforms (Amazon Lambda and Autoscale).
In order to provide a viable service, a CaaS provider must be able to minimize the monetary cost incurred by the use of cloud CUs and schedule workloads in the most effective manner. To this end, there have been numerous recent proposals for cloud resource management. Gandhi et al. propose their own version of Autoscale, which terminates servers that have been idle for more than a specified time, while consolidating jobs on fewer CUs to lower cost @cite_12. Paya et al. propose a system which expands on this by using multiple sleep states to improve performance @cite_30. Song et al. propose optimal allocation of CUs according to pricing and demand distributions @cite_5. Ranjan et al. investigate architectural elements of content-delivery networks with cloud-computing support @cite_26. Finally, Jung et al. propose using genetic algorithms for multi-user workload scheduling on various CUs @cite_1.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_1", "@cite_5", "@cite_12" ], "mid": [ "2326506621", "2121651311", "762663386", "2099393515", "2075233755" ], "abstract": [ "In this paper, we introduce an energy-aware operation model used for load balancing and application scaling on a cloud. The basic philosophy of our approach is defining an energy-optimal operation regime and attempting to maximize the number of servers operating in this regime. Idle and lightly-loaded servers are switched to one of the sleep states to save energy. The load balancing and scaling algorithms also exploit some of the most desirable features of server consolidation mechanisms discussed in the literature.", "The growing ubiquity of Internet and cloud computing is having significant impact on media-related industries. These industries are using the Internet and cloud as a medium to enable creation, search, management and consumption of their content. Primarily, Content Delivery Networks (CDNs) are deployed for distributing multimedia content to the end-users. However, existing approaches to architecting CDNs have several limitations. Firstly, they do not harness multiple public cloud services for optimizing cost to performance ratio. Secondly, they lack support for dynamic and personalized content creation and distribution. Finally, they do not support end-to-end content lifecycle operations (production, deployment, consumption, personalization, and distribution). To overcome these limitations, in this paper, we propose, develop and validate a novel system called MediaWise Cloud Content Orchestrator (MCCO). MCCO expands the scope of existing CDNs with novel multi-cloud deployment. It enables content personalization and collaboration capabilities. Further, it facilitates do-it-yourself creation, search, management, and consumption of multimedia content. It inherits the pay-as-you-go models and elasticity that are offered by commercially available cloud services. In this paper, we discuss our vision, the challenges and the research objectives pertaining to MCCO for supporting next generation streamed, interactive, and collaborative high resolution multimedia content. We validated our system thorugh MCCO prototype implementation. Further, we conducted a set of experiments to demonstrate the functionality of MCCO. Finally, we compare the content orchestration features supported by MCCO to existing CDNs against the envisioned objectives of MCCO.", "Cloud computing is a computing paradigm in which users can rent computing resources from service providers according to their requirements. Cloud computing based on the spot market helps a user to obtain resources at a lower cost. However, these resources may be unreliable. In this paper, we propose an estimation-based distributed task workflow scheduling scheme that reduces the estimated generation compared to Genetic Algorithm (GA). Moreover, our scheme executes a user’s job within selected instances and stretches the user’s cost. The simulation results, based on a before-and-after estimation comparison, reveal that the task size is determined based on the performance of each instance and the task is distributed among the different instances. Therefore, our proposed estimation-based task load balancing scheduling technique achieves the task load balancing according to the performance of instances.", "Amazon introduced Spot Instance Market to utilize the idle resources of Amazon Elastic Compute Cloud (EC2) more efficiently. 
The price of a spot instance changes dynamically according to the current supply and demand for cloud resources. Users can bid for a spot instance and the job request will be granted if the current spot price falls below the bid, whereas the job will be terminated if the spot price exceeds the bid. In this paper, we investigate the problem of designing a bidding strategy from a cloud service broker's perspective, where the cloud service broker accepts job requests from cloud users, and leverages the opportunistic yet less expensive spot instances for computation in order to maximize its own profit. In this context, we propose a profit aware dynamic bidding (PADB) algorithm, which observes the current spot price and selects the bid adaptively to maximize the time average profit of the cloud service broker. We show that our bidding strategy achieves a near-optimal solution, i.e., (1−ε) of the optimal solution to the profit maximization problem, where ε can be arbitrarily small. The proposed dynamic bidding algorithm is self-adaptive and requires no a priori statistical knowledge on the distribution of random job sizes from cloud users.", "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10-30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state. We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency. We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness." ] }
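The AIMD control named in the abstract above can be pictured with a one-step sketch. The mapping of "increase" to TTC pressure, the function name, and the parameter values are assumptions for illustration, not the paper's actual controller.

```python
def aimd_step(units, ttc_at_risk, add=1, beta=0.5, min_units=1):
    """One control step for the number of active compute units (CUs),
    by analogy with TCP congestion control: grow additively while
    workloads risk missing their time-to-completion (TTC) deadlines,
    shrink multiplicatively once there is slack, trading timeliness
    against resource cost."""
    if ttc_at_risk:
        return units + add                       # additive increase
    return max(min_units, int(units * beta))     # multiplicative decrease
```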
1604.05091
2337890890
In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of its semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.
Deep learning approaches have been successful in a number of domains (see, for example, @cite_0 @cite_3 @cite_10), where they have benefited from large amounts of data in order to learn appropriate internal representations, leading to significant performance gains above and beyond those achievable by classical methods. In our case the neural network is trained end-to-end to predict space occupancy and semantic labels directly from the raw laser data. While doing so, it learns to perform a filtering operation in which the optimal internal representations of the hypotheses about moving objects, and the respective update procedures of classical tracking, are inferred directly from the data.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_3" ], "mid": [ "", "1607307044", "2147768505" ], "abstract": [ "", "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.", "We propose a novel context-dependent (CD) model for large-vocabulary speech recognition (LVSR) that leverages recent advances in using deep belief networks for phone recognition. We describe a pre-trained deep neural network hidden Markov model (DNN-HMM) hybrid architecture that trains the DNN to produce a distribution over senones (tied triphone states) as its output. The deep belief network pre-training algorithm is a robust and often helpful way to initialize deep neural networks generatively that can aid in optimization and reduce generalization error. We illustrate the key components of our model, describe the procedure for applying CD-DNN-HMMs to LVSR, and analyze the effects of various modeling choices on performance. Experiments on a challenging business search dataset demonstrate that CD-DNN-HMMs can significantly outperform the conventional context-dependent Gaussian mixture model (GMM)-HMMs, with an absolute sentence accuracy improvement of 5.8 and 9.2 (or relative error reduction of 16.0 and 23.2 ) over the CD-GMM-HMMs trained using the minimum phone error rate (MPE) and maximum-likelihood (ML) criteria, respectively." ] }
1604.05091
2337890890
In this work we present a novel end-to-end framework for tracking and classifying a robot's surroundings in complex, dynamic and only partially observable real-world environments. The approach deploys a recurrent neural network to filter an input stream of raw laser measurements in order to directly infer object locations, along with their identity in both visible and occluded areas. To achieve this we first train the network using unsupervised Deep Tracking, a recently proposed theoretical framework for end-to-end space occupancy prediction. We show that by learning to track on a large amount of unsupervised data, the network creates a rich internal representation of its environment which we in turn exploit through the principle of inductive transfer of knowledge to perform the task of its semantic classification. As a result, we show that only a small amount of labelled data suffices to steer the network towards mastering this additional task. Furthermore we propose a novel recurrent neural network architecture specifically tailored to tracking and semantic classification in real-world robotics applications. We demonstrate the tracking and classification performance of the method on real-world data collected at a busy road junction. Our evaluation shows that the proposed end-to-end framework compares favourably to a state-of-the-art, model-free tracking solution and that it outperforms a conventional one-shot training scheme for semantic classification.
To successfully apply deep learning, an appropriate neural network architecture for the task must be chosen. Abundant literature exists on finding suitable architectures for different tasks, such as convolutional networks for image processing @cite_4 and recurrent neural networks such as @cite_8 or @cite_6 for processing sequences. We propose a novel neural network architecture specifically tailored to real-world object tracking. The network shares similarities with architectures for semantic labelling of natural images @cite_15 in its ability to produce output of the same resolution as the input. In addition, we provide effective mechanisms to track objects of different sizes over time, to learn place-specific information, and recurrent mechanisms to remember information for long periods of time, in order to track objects effectively even through long occlusions.
{ "cite_N": [ "@cite_15", "@cite_4", "@cite_6", "@cite_8" ], "mid": [ "2286929393", "2156163116", "2950635152", "" ], "abstract": [ "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "Neural networks are a powerful technology forclassification of visual inputs arising from documents.However, there is a confusing plethora of different neuralnetwork methods that are used in the literature and inindustry. This paper describes a set of concrete bestpractices that document analysis researchers can use toget good results with neural networks. The mostimportant practice is getting a training set as large aspossible: we expand the training set by adding a newform of distorted data. The next most important practiceis that convolutional neural networks are better suited forvisual document tasks than fully connected networks. Wepropose that a simple \"do-it-yourself\" implementation ofconvolution with a flexible architecture is suitable formany visual document problems. This simpleconvolutional neural network does not require complexmethods, such as momentum, weight decay, structure-dependentlearning rates, averaging layers, tangent prop,or even finely-tuning the architecture. The end result is avery simple yet general architecture which can yieldstate-of-the-art performance for document analysis. Weillustrate our claims on the MNIST set of English digitimages.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "" ] }
1604.04961
2339531592
We investigate the role of a relay in multiple access channels (MACs) with bursty user traffic, where intermittent data traffic restricts the users to bursty transmissions. As our main result, we characterize the degrees of freedom (DoF) region of a K-user bursty multi-input multi-output (MIMO) Gaussian MAC with a relay, where Bernoulli random states are introduced to govern bursty user transmissions. To that end, we extend the noisy network coding scheme to achieve the cut-set bound. Our main contribution is in exploring the role of a relay from various perspectives. First, we show that a relay can provide a DoF gain in bursty channels, unlike in conventional non-bursty channels. Interestingly, we find that the relaying gain can scale with additional antennas at the relay to some extent. Moreover, observing that a relay can help achieve collision-free performance, we establish the necessary and sufficient condition for attaining collision-free DoF. Lastly, we consider scenarios in which some physical perturbation shared around the users may generate data traffic simultaneously, causing transmission patterns across them to be correlated. We demonstrate that for most cases in such scenarios, the relaying gain is greater when the users’ transmission patterns are more correlated, hence when more severe collisions take place. Our results have practical implications in various scenarios of wireless networks such as device-to-device systems and random medium access control protocols.
In terms of relaying operations, the aforementioned works mainly consider full-duplex and strictly causal relays. Most of the theory developed for full-duplex relays was shown to extend to half-duplex relays through the discussions in @cite_14, which model half-duplex relays by imposing constraints such as the fractions of time a relay is allowed to spend in reception or transmission mode. In addition, causal relaying strategies, including instantaneous relaying, as well as non-causal relaying strategies, have been explored in @cite_11.
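To make the time-fraction constraints concrete, here is one standard illustrative form from the textbook literature (an orthogonal half-duplex decode-and-forward scheme; this is not the exact expression of @cite_14): if the relay listens for a fraction $t$ of the time and transmits for the remaining $1-t$, the achievable rate is

\[
R_{\mathrm{DF}} \;=\; \max_{0 \le t \le 1} \; \min\Bigl\{\, t \log_2\!\left(1+\gamma_{sr}\right),\;\; t \log_2\!\left(1+\gamma_{sd}\right) + (1-t)\log_2\!\left(1+\gamma_{rd}\right) \Bigr\},
\]

where $\gamma_{sr}$, $\gamma_{sd}$, and $\gamma_{rd}$ denote the source-relay, source-destination, and relay-destination SNRs; the listening fraction $t$ plays exactly the role of the reception/transmission mode constraints described above.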
{ "cite_N": [ "@cite_14", "@cite_11" ], "mid": [ "2159965721", "2153829573" ], "abstract": [ "Relay channels where terminals cannot receive and transmit at the same time are modeled as being memoryless with cost constraints. Cost functions are considered that measure the power consumed in each of three sleep-listen-or-talk (SLoT) modes, as well as the fraction of time the modes are used. It is shown that strategies that have the SLoT modes known ahead of time by all terminals are generally suboptimal. It is further shown that Gaussian input distributions are generally suboptimal for Gaussian channels. For several types of models and SLoT constraints, it is shown that multi-hopping (or decode-andforward) achieves the information-theoretic capacity if the relay is geometrically near the source terminal, and if the fraction of time the relay listens to the source is lower bounded by a positive number. SLoT constraints for which the capacity claim might not be valid are discussed. Finally, it is pointed out that a lack of symbol synchronization between the relays has little or no effect on the capacity theorems if the signals are bandlimited and if independent input signals are optimal.", "The paper investigates the effect of link delays on the capacity of relay networks. The relay-with-delay is defined as a relay channel with relay encoding delay d isin Z of units, or equivalently, a delay of units on the link from the sender to the relay, zero delay on the links from the transmitter to the receiver and from the relay to the receiver, and zero relay encoding delay. Two special cases are studied. The first is the relay-with-unlimited look-ahead, where each relay transmission can depend on its entire received sequence, and the second is the relay-without-delay, where the relay transmission can depend only on current and past received symbols, i.e., d=0. Upper and lower bounds on capacity for these two channels that are tight in some cases are presented. It is shown that the cut-set bound for the classical relay channel, corresponding to the case where d=1, does not hold for the relay-without-delay. Further, it is shown that instantaneous relaying can be optimal and can achieve higher rates than the classical cut-set bound. Capacity for the classes of degraded and semi-deterministic relay-with-unlimited-look-ahead and relay-without-delay are established. These results are then extended to the additive white Gaussian noise (AWGN) relay-with-delay case, where it is shown that for any dles0, capacity is achieved using amplify-and-forward when the channel from the sender to the relay is sufficiently weaker than the other two channels. In addition, it is shown that a superposition of amplify-and-forward and decode-and-forward can achieve higher rates than the classical cut-set bound. The relay-with-delay model is then extended to feedforward relay networks. It is shown that capacity is determined only by the relative delays of paths from the sender to the receiver and not by their absolute delays. A new cut-set upper bound that generalizes both the classical cut-set bound for the classical relay and the upper bound for the relay-without-delay on capacity is established." ] }
1604.04961
2339531592
We investigate the role of a relay in multiple access channels (MACs) with bursty user traffic, where intermittent data traffic restricts the users to bursty transmissions. As our main result, we characterize the degrees of freedom (DoF) region of a K-user bursty multi-input multi-output (MIMO) Gaussian MAC with a relay, where Bernoulli random states are introduced to govern bursty user transmissions. To that end, we extend the noisy network coding scheme to achieve the cut-set bound. Our main contribution is in exploring the role of a relay from various perspectives. First, we show that a relay can provide a DoF gain in bursty channels, unlike in conventional non-bursty channels. Interestingly, we find that the relaying gain can scale with additional antennas at the relay to some extent. Moreover, observing that a relay can help achieve collision-free performance, we establish the necessary and sufficient condition for attaining collision-free DoF. Lastly, we consider scenarios in which some physical perturbation shared around the users may generate data traffic simultaneously, causing transmission patterns across them to be correlated. We demonstrate that for most cases in such scenarios, the relaying gain is greater when the users’ transmission patterns are more correlated, hence when more severe collisions take place. Our results have practical implications in various scenarios of wireless networks such as device-to-device systems and random medium access control protocols.
As we noted earlier, our results have practical implications in random access protocols such as the well-known ALOHA @cite_12 and CSMA @cite_9 protocols and their extensions. In all such protocols, users sharing a common communication medium must take part in avoiding and/or recovering from collisions. Our results imply that introducing relays can be an effective way of taking this burden off the users, hence simplifying random access protocols.
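For a rough sense of the collision penalty that such protocols pay (classical textbook throughput results, not derived in this paper), the short sketch below evaluates slotted and pure ALOHA under a Poisson offered load G:

import numpy as np

G = np.linspace(0.01, 4, 400)   # offered load, frames per frame-time
slotted = G * np.exp(-G)        # slotted ALOHA: S = G e^{-G}
pure = G * np.exp(-2 * G)       # pure ALOHA:    S = G e^{-2G}

# slotted ALOHA peaks at G = 1 with S = 1/e ~ 0.368;
# pure ALOHA peaks at G = 0.5 with S = 1/(2e) ~ 0.184
print(f"slotted max: {slotted.max():.3f} at G = {G[slotted.argmax()]:.2f}")
print(f"pure max:    {pure.max():.3f} at G = {G[pure.argmax()]:.2f}")

Even at the optimal load, most of the channel is lost to collisions and idle slots, which is the burden that relays, per the argument above, can help lift from the users.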
{ "cite_N": [ "@cite_9", "@cite_12" ], "mid": [ "2010359062", "2150847784" ], "abstract": [ "Radio communication is considered as a method for providing remote terminal access to computers. Digital byte streams from each terminal are partitioned into packets (blocks) and transmitted in a burst mode over a shared radio channel. When many terminals operate in this fashion, transmissions may conflict with and destroy each other. A means for controlling this is for the terminal to sense the presence of other transmissions; this leads to a new method for multiplexing in a packet radio environment: carrier sense multiple access (CSMA). Two protocols are described for CSMA and their throughput-delay characteristics are given. These results show the large advantage CSMA provides as compared to the random ALOHA access modes.", "In September 1968 the University of Hawaii began work on a research program to investigate the use of radio communications for computer-computer and console-computer links. In this report we describe a remote-access computer system---THE ALOHA SYSTEM---under development as part of that research program and discuss some advantages of radio communications over conventional wire communications for interactive users of a large computer system. Although THE ALOHA SYSTEM research program is composed of a large number of research projects, in this report we shall be concerned primarily with a novel form of random-access radio communications developed for use within THE ALOHA SYSTEM." ] }
1604.04835
2336384382
Knowledge representation is an important, long-standing topic in AI, and there has been a large amount of work on knowledge graph embedding, which projects symbolic entities and relations into a low-dimensional, real-valued vector space. However, most embedding methods merely concentrate on data fitting and ignore explicit semantic expression, leading to uninterpretable representations. Thus, traditional embedding methods have limited potential for many applications such as question answering and entity classification. To this end, this paper proposes a semantic representation method for knowledge graphs, which imposes a two-level hierarchical generative process that globally extracts many aspects and then locally assigns a specific category in each aspect for every triple. Since both aspects and categories are semantics-relevant, the collection of categories in each aspect is treated as the semantic representation of this triple. Extensive experiments show that our model outperforms other state-of-the-art baselines substantially.
"Text-Aware" embedding, which attempts to represent a knowledge graph together with textual information, generally dates back to NTN @cite_6. NTN makes use of entity names and embeds an entity as the average of the word embedding vectors of its name. @cite_25 attempts to align the knowledge graph with a corpus and then jointly conducts knowledge embedding and word embedding. However, the need for this alignment information limits the method both in performance and in practical applicability. Thus, @cite_13 proposes the "Jointly" method, which only aligns each Freebase entity to its corresponding wiki-page. DKRL @cite_7 extends the translation-based embedding methods from triple-specific models to "Text-Aware" ones. More importantly, DKRL adopts a CNN structure to represent words, which promotes the expressive ability of word semantics. Generally speaking, by jointly modeling knowledge and texts, DKRL obtains state-of-the-art performance.
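A minimal sketch of the NTN-style entity representation just described (the vocabulary and vectors are toy stand-ins for real pre-trained word embeddings):

import numpy as np

rng = np.random.default_rng(0)
# toy stand-in for a real pre-trained word-embedding table
word_vecs = {w: rng.standard_normal(50) for w in ["sumatran", "bengal", "tiger"]}

def entity_vec(name):
    """NTN-style entity vector: the average of the name's word vectors."""
    return np.mean([word_vecs[t] for t in name.lower().split()], axis=0)

# 'Sumatran tiger' and 'Bengal tiger' share statistical strength through
# the common word 'tiger', which is the motivation given in the NTN paper
v1, v2 = entity_vec("Sumatran tiger"), entity_vec("Bengal tiger")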
{ "cite_N": [ "@cite_7", "@cite_13", "@cite_25", "@cite_6" ], "mid": [ "2499696929", "2250807343", "2158028897", "2127426251" ], "abstract": [ "Representation learning (RL) of knowledge graphs aims to project both entities and relations into a continuous low-dimensional space. Most methods concentrate on learning representations with knowledge triples indicating relations between entities. In fact, in most knowledge graphs there are usually concise descriptions for entities, which cannot be well utilized by existing methods. In this paper, we propose a novel RL method for knowledge graphs taking advantages of entity descriptions. More specifically, we explore two encoders, including continuous bag-of-words and deep convolutional neural models to encode semantics of entity descriptions. We further learn knowledge representations with both triples and descriptions. We evaluate our method on two tasks, including knowledge graph completion and entity classification. Experimental results on real-world datasets show that, our method outperforms other baselines on the two tasks, especially under the zero-shot setting, which indicates that our method is capable of building representations for novel entities according to their descriptions. The source code of this paper can be obtained from https: github.com xrb92 DKRL.", "We study the problem of jointly embedding a knowledge base and a text corpus. The key issue is the alignment model making sure the vectors of entities, relations and words are in the same space. (2014a) rely on Wikipedia anchors, making the applicable scope quite limited. In this paper we propose a new alignment model based on text descriptions of entities, without dependency on anchors. We require the embedding vector of an entity not only to fit the structured constraints in KBs but also to be equal to the embedding vector computed from the text description. Extensive experiments show that, the proposed approach consistently performs comparably or even better than the method of (2014a), which is encouraging as we do not use any anchor information.", "We examine the embedding approach to reason new relational facts from a largescale knowledge graph and a text corpus. We propose a novel method of jointly embedding entities and words into the same continuous vector space. The embedding process attempts to preserve the relations between entities in the knowledge graph and the concurrences of words in the text corpus. Entity names and Wikipedia anchors are utilized to align the embeddings of entities and words in the same space. Large scale experiments on Freebase and a Wikipedia NY Times corpus show that jointly embedding brings promising improvement in the accuracy of predicting facts, compared to separately embedding knowledge graphs and text. Particularly, jointly embedding enables the prediction of facts containing entities out of the knowledge graph, which cannot be handled by previous embedding methods. At the same time, concerning the quality of the word embeddings, experiments on the analogical reasoning task show that jointly embedding is comparable to or slightly better than word2vec (Skip-Gram).", "Knowledge bases are an important resource for question answering and other tasks but often suffer from incompleteness and lack of ability to reason over their discrete entities and relationships. In this paper we introduce an expressive neural tensor network suitable for reasoning over relationships between two entities. 
Previous work represented entities as either discrete atomic units or with a single entity vector representation. We show that performance can be improved when entities are represented as an average of their constituting word vectors. This allows sharing of statistical strength between, for instance, facts involving the \"Sumatran tiger\" and \"Bengal tiger.\" Lastly, we demonstrate that all models improve when these word vectors are initialized with vectors learned from unsupervised large corpora. We assess the model by considering the problem of predicting additional true relations between entities given a subset of the knowledge base. Our model outperforms previous models and can classify unseen relationships in WordNet and FreeBase with an accuracy of 86.2% and 90.0%, respectively." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
Another class of segmentation algorithms considers the image, in the discrete domain, as a graph and tries to optimally segment the desired regions from it. One of the earliest graph-based methods is normalized cuts by Shi and Malik @cite_24. This algorithm laid the foundation for over-segmentation approaches, which opened up research on bottom-up segmentation through grouping @cite_0. Another instance of using the image as a graph for segmentation can be found in @cite_22. These approaches share the common objective that nodes of the graph which are similar must be grouped together.
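Concretely, @cite_24 scores a bipartition $(A, B)$ of the weighted graph $G = (V, E)$ by

\[
\mathrm{Ncut}(A,B) \;=\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} \;+\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B) \;=\; \sum_{u \in A,\, v \in B} w(u,v),
\]

so that both the dissimilarity between the two groups and the similarity within them enter the criterion; a near-optimal partition is then read off the second-smallest generalized eigenvector of $(D - W)\,y = \lambda D\,y$, where $W$ is the affinity matrix and $D$ is its diagonal degree matrix.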
{ "cite_N": [ "@cite_24", "@cite_0", "@cite_22" ], "mid": [ "2121947440", "", "1999478155" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "", "This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
There are also graph-based methods built on max-flow/min-cut algorithms, popularly known as graph cuts @cite_12. The energy function defined on a graph can be minimized by appropriately partitioning the graph through cuts on the edges linking dissimilar vertices @cite_5. An interactive approach using graph cuts, initialised with a bounding box drawn on the image, was proposed to obtain segmentation; this method is popularly known as 'GrabCut' @cite_3. A more advanced version of the algorithm, which makes use of Dijkstra's algorithm for the segmentation of thin structures, was proposed in @cite_21. An overview of many other undirected-graph (Markov random field) based image segmentation methods can be found in the book by Blake et al. @cite_34.
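A minimal sketch of the interactive GrabCut workflow just described, using OpenCV's implementation; the file name and rectangle coordinates here are assumptions for illustration:

import numpy as np
import cv2

img = cv2.imread("scene.jpg")                 # hypothetical input image
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)     # internal GMM state
fgd_model = np.zeros((1, 65), np.float64)
rect = (50, 50, 300, 400)                     # user-drawn box (x, y, w, h)

# 5 iterations of the iterative graph-cut optimisation, seeded by the box
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# keep pixels labelled definite or probable foreground
fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
segmented = img * fg[:, :, None]
cv2.imwrite("segmented.png", segmented)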
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_5", "@cite_34", "@cite_12" ], "mid": [ "2147899407", "2124351162", "2101309634", "", "2143516773" ], "abstract": [ "Graph cut is a popular technique for interactive image segmentation. However, it has certain shortcomings. In particular, graph cut has problems with segmenting thin elongated objects due to the ldquoshrinking biasrdquo. To overcome this problem, we propose to impose an additional connectivity prior, which is a very natural assumption about objects. We formulate several versions of the connectivity constraint and show that the corresponding optimization problems are all NP-hard. For some of these versions we propose two optimization algorithms: (i) a practical heuristic technique which we call DijkstraGC, and (ii) a slow method based on problem decomposition which provides a lower bound on the problem. We use the second technique to verify that for some practical examples DijkstraGC is able to find the global minimum.", "The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.", "In the last few years, several new algorithms based on graph cuts have been developed to solve energy minimization problems in computer vision. Each of these techniques constructs a graph such that the minimum cut on the graph also minimizes the energy. Yet, because these graph constructions are complex and highly specific to a particular energy function, graph cuts have seen limited application to date. In this paper, we give a characterization of the energy functions that can be minimized by graph cuts. Our results are restricted to functions of binary variables. However, our work generalizes many previous constructions and is easily applicable to vision problems that involve large numbers of labels, such as stereo, motion, image restoration, and scene reconstruction. We give a precise characterization of what energy functions can be minimized using graph cuts, among the energy functions that can be written as a sum of terms containing three or fewer binary variables. We also provide a general-purpose construction to minimize such an energy function. Finally, we give a necessary condition for any energy function of binary variables to be minimized by graph cuts. Researchers who are considering the use of graph cuts to optimize a particular energy function can use our results to determine if this is possible and then follow our construction to create the appropriate graph. A software implementation is freely available.", "", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. 
A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
Due to the recent proliferation of internet images, researchers have turned their attention to co-segmenting common objects present in multiple images. This objective can be addressed in a supervised learning framework @cite_26 or in a completely unsupervised framework @cite_19. A recent approach for motion segmentation from video sequences can be found in @cite_11. That work assumes the frames are captured within a fraction of a second of one another and does not apply to scenes captured with delays on the order of seconds. The recent work of @cite_30 tries to segment objects from two images of a scene captured seconds apart, and is the closest attempt to the problem we address. It proposes a new technique called global dimension reduction to achieve this objective. However, the method only estimates some points on the dynamic object and does not provide a complete segmentation of the object.
{ "cite_N": [ "@cite_30", "@cite_19", "@cite_26", "@cite_11" ], "mid": [ "2019075198", "1996140089", "1964884769", "2076756823" ], "abstract": [ "We present a new approach to rigid-body motion segmentation from two views. We use a previously developed nonlinear embedding of two-view point correspondences into a 9-dimensional space and identify the different motions by segmenting lower-dimensional subspaces. In order to overcome nonuniform distributions along the subspaces, whose dimensions are unknown, we suggest the novel concept of global dimension and its minimization for clustering subspaces with some theoretical motivation. We propose a fast projected gradient algorithm for minimizing global dimension and thus segmenting motions from 2-views. We develop an outlier detection framework around the proposed method, and we present state-of-the-art results on outlier-free and outlier-corrupted two-view data for segmenting motion.", "We present a new unsupervised algorithm to discover and segment out common objects from large and diverse image collections. In contrast to previous co-segmentation methods, our algorithm performs well even in the presence of significant amounts of noise images (images not containing a common object), as typical for datasets collected from Internet search. The key insight to our algorithm is that common object patterns should be salient within each image, while being sparse with respect to smooth transformations across other images. We propose to use dense correspondences between images to capture the sparsity and visual variability of the common object over the entire database, which enables us to ignore noise objects that may be salient within their own images but do not commonly occur in others. We performed extensive numerical evaluation on established co-segmentation datasets, as well as several new datasets generated using Internet search. Our approach is able to effectively segment out the common object for diverse object categories, while naturally identifying images where the common object is not present.", "This paper presents an algorithm for Interactive Co-segmentation of a foreground object from a group of related images. While previous approaches focus on unsupervised co-segmentation, we use successful ideas from the interactive object-cutout literature. We develop an algorithm that allows users to decide what foreground is, and then guide the output of the co-segmentation algorithm towards it via scribbles. Interestingly, keeping a user in the loop leads to simpler and highly parallelizable energy functions, allowing us to work with significantly more images per group. However, unlike the interactive single image counterpart, a user cannot be expected to exhaustively examine all cutouts (from tens of images) returned by the system to make corrections. Hence, we propose iCoseg, an automatic recommendation system that intelligently recommends where the user should scribble next. We introduce and make publicly available the largest co-segmentation datasetyet, the CMU-Cornell iCoseg Dataset, with 38 groups, 643 images, and pixelwise hand-annotated groundtruth. Through machine experiments and real user studies with our developed interface, we show that iCoseg can intelligently recommend regions to scribble on, and users following these recommendations can achieve good quality cutouts with significantly lower time and effort than exhaustively examining all cutouts.", "Motion is a strong cue for unsupervised object-level grouping. 
In this paper, we demonstrate that motion will be exploited most effectively, if it is regarded over larger time windows. Opposed to classical two-frame optical flow, point trajectories that span hundreds of frames are less susceptible to short-term variations that hinder separating different objects. As a positive side effect, the resulting groupings are temporally consistent over a whole video shot, a property that requires tedious post-processing in the vast majority of existing approaches. We suggest working with a paradigm that starts with semi-dense motion cues first and that fills up textureless areas afterwards based on color. This paper also contributes the Freiburg-Berkeley motion segmentation (FBMS) dataset, a large, heterogeneous benchmark with 59 sequences and pixel-accurate ground truth annotation of moving objects." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
We shall now review some of the work related to the estimation of dense correspondences, which are used to match two images of a scene that has undergone significant changes. Computing exact nearest neighbours for matching is computationally prohibitive. Approximate algorithms for estimating the nearest neighbour, and thereby matching two images, were first proposed in @cite_15. Considering an image as a collection of patches, a faster algorithm called PatchMatch was proposed in @cite_13 to compute one nearest-neighbour patch for a given patch. This approach was extended and generalized in follow-up work to obtain multiple nearest neighbours for a given patch @cite_9. An approach called coherency sensitive hashing improved on the idea of locality sensitive hashing to compute approximate nearest neighbours much faster than PatchMatch @cite_42.
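The core of PatchMatch is simple enough to sketch directly; the following is a heavily simplified, unoptimised illustration of its three steps (random initialisation of the nearest-neighbour field, propagation of good matches to image-plane neighbours, and a shrinking random search), not the authors' optimised implementation:

import numpy as np

def ssd(A, B, ay, ax, by, bx, p):
    # sum of squared differences between two p x p patches
    return float(np.sum((A[ay:ay+p, ax:ax+p] - B[by:by+p, bx:bx+p]) ** 2))

def patchmatch(A, B, p=3, iters=4, seed=0):
    A, B = np.asarray(A, float), np.asarray(B, float)
    rng = np.random.default_rng(seed)
    Ha, Wa = A.shape[0] - p + 1, A.shape[1] - p + 1   # valid patch anchors in A
    Hb, Wb = B.shape[0] - p + 1, B.shape[1] - p + 1   # valid patch anchors in B
    # step 1: random initialisation of the nearest-neighbour field (NNF)
    nnf = np.stack([rng.integers(0, Hb, (Ha, Wa)),
                    rng.integers(0, Wb, (Ha, Wa))], axis=-1)
    cost = np.array([[ssd(A, B, y, x, nnf[y, x][0], nnf[y, x][1], p)
                      for x in range(Wa)] for y in range(Ha)])

    def try_match(y, x, by, bx):
        by, bx = int(np.clip(by, 0, Hb - 1)), int(np.clip(bx, 0, Wb - 1))
        c = ssd(A, B, y, x, by, bx, p)
        if c < cost[y, x]:
            cost[y, x], nnf[y, x] = c, (by, bx)

    for it in range(iters):
        s = 1 if it % 2 == 0 else -1                  # alternate scan direction
        ys = range(Ha) if s == 1 else range(Ha - 1, -1, -1)
        xs = range(Wa) if s == 1 else range(Wa - 1, -1, -1)
        for y in ys:
            for x in xs:
                # step 2: propagation - adopt the shifted match of the
                # neighbour already visited in this scan order
                if 0 <= y - s < Ha:
                    try_match(y, x, nnf[y - s, x][0] + s, nnf[y - s, x][1])
                if 0 <= x - s < Wa:
                    try_match(y, x, nnf[y, x - s][0], nnf[y, x - s][1] + s)
                # step 3: random search in a window that halves each time
                r = max(Hb, Wb)
                while r >= 1:
                    try_match(y, x,
                              nnf[y, x][0] + rng.integers(-r, r + 1),
                              nnf[y, x][1] + rng.integers(-r, r + 1))
                    r //= 2
    return nnf   # nnf[y, x] = (by, bx): best match in B for patch (y, x) of A

Real implementations vectorise these loops and match colour patches, but the three steps above are the algorithmic core that the generalisations in @cite_9 build upon.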
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_42", "@cite_13" ], "mid": [ "1763426478", "2427881153", "2145940484", "1993120651" ], "abstract": [ "PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.", "Consider a set of S of n data points in real d -dimensional space, R d , where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R d , is the closest point of S to q can be reported quickly. Given any positive real e, data point p is a (1 +e)- approximate nearest neighbor of q if its distance from q is within a factor of (1 + e) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R d in O(dn log n ) time and O(dn) space, so that given a query point q ∈ R d , and e > 0, a (1 + e)-approximate nearest neighbor of q can be computed in O ( c d , e log n ) time, where c d,e ≤ d 1 + 6d e ; d is a factor depending only on dimension and e. In general, we show that given an integer k ≥ 1, (1 + e)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n ) time.", "Coherency Sensitive Hashing (CSH) extends Locality Sensitivity Hashing (LSH) and PatchMatch to quickly find matching patches between two images. LSH relies on hashing, which maps similar patches to the same bin, in order to find matching patches. PatchMatch, on the other hand, relies on the observation that images are coherent, to propagate good matches to their neighbors, in the image plane. It uses random patch assignment to seed the initial matching. CSH relies on hashing to seed the initial patch matching and on image coherence to propagate good matches. In addition, hashing lets it propagate information between patches with similar appearance (i.e., map to the same bin). This way, information is propagated much faster because it can use similarity in appearance space or neighborhood in the image plane. As a result, CSH is at least three to four times faster than PatchMatch and more accurate, especially in textured regions, where reconstruction artifacts are most noticeable to the human eye. We verified CSH on a new, large scale, data set of 133 image pairs.", "This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. 
Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
The above methods for dense correspondence cannot effectively handle non-rigid motion of the objects in the scene. An algorithm called SIFT-flow, based on matching scale-invariant feature transform (SIFT) features between two images, has been proposed to account for such complex motions @cite_32. Another method, non-rigid dense correspondence (NRDC), is specifically designed to match features between two images of a scene in which the objects have undergone non-rigid motion, and is shown to be more accurate than SIFT-flow @cite_2. A further approach for dense correspondence of deformable objects is proposed in @cite_6. Dense correspondences have a number of applications, such as image retargeting @cite_4, high dynamic range image reconstruction (@cite_28, @cite_25), and image melding @cite_36. A recently accepted work builds on these ideas and develops a framework to accurately match features between two images containing a common object in different orientations @cite_41.
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_36", "@cite_41", "@cite_32", "@cite_6", "@cite_2", "@cite_25" ], "mid": [ "2115273023", "1995813543", "1975049209", "", "", "2124861766", "2106505277", "2054927225" ], "abstract": [ "We propose a principled approach to summarization of visual data (images or video) based on optimization of a well-defined similarity measure. The problem we consider is re-targeting (or summarization) of image video data into smaller sizes. A good ldquovisual summaryrdquo should satisfy two properties: (1) it should contain as much as possible visual information from the input data; (2) it should introduce as few as possible new visual artifacts that were not in the input data (i.e., preserve visual coherence). We propose a bi-directional similarity measure which quantitatively captures these two requirements: Two signals S and T are considered visually similar if all patches of S (at multiple scales) are contained in T, and vice versa. The problem of summarization re-targeting is posed as an optimization problem of this bi-directional similarity measure. We show summarization results for image and video data. We further show that the same approach can be used to address a variety of other problems, including automatic cropping, completion and synthesis of visual data, image collage, object removal, photo reshuffling and more.", "High dynamic range (HDR) imaging from a set of sequential exposures is an easy way to capture high-quality images of static scenes, but suffers from artifacts for scenes with significant motion. In this paper, we propose a new approach to HDR reconstruction that draws information from all the exposures but is more robust to camera scene motion than previous techniques. Our algorithm is based on a novel patch-based energy-minimization formulation that integrates alignment and reconstruction in a joint optimization through an equation we call the HDR image synthesis equation. This allows us to produce an HDR result that is aligned to one of the exposures yet contains information from all of them. We present results that show considerable improvement over previous approaches.", "Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2 L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. 
In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.", "", "", "We introduce a fast deformable spatial pyramid (DSP) matching algorithm for computing dense pixel correspondences. Dense matching methods typically enforce both appearance agreement between matched pixels as well as geometric smoothness between neighboring pixels. Whereas the prevailing approaches operate at the pixel level, we propose a pyramid graph model that simultaneously regularizes match consistency at multiple spatial extents-ranging from an entire image, to coarse grid cells, to every single pixel. This novel regularization substantially improves pixel-level matching in the face of challenging image variations, while the \"deformable\" aspect of our model overcomes the strict rigidity of traditional spatial pyramids. Results on Label Me and Caltech show our approach outperforms state-of-the-art methods (SIFT Flow [15] and Patch-Match [2]), both in terms of accuracy and run time.", "This paper presents a new efficient method for recovering reliable local sets of dense correspondences between two images with some shared content. Our method is designed for pairs of images depicting similar regions acquired by different cameras and lenses, under non-rigid transformations, under different lighting, and over different backgrounds. We utilize a new coarse-to-fine scheme in which nearest-neighbor field computations using Generalized PatchMatch [ 2010] are interleaved with fitting a global non-linear parametric color model and aggregating consistent matching regions using locally adaptive constraints. Compared to previous correspondence approaches, our method combines the best of two worlds: It is dense, like optical flow and stereo reconstruction methods, and it is also robust to geometric and photometric variations, like sparse feature matching. We demonstrate the usefulness of our method using three applications for automatic example-based photograph enhancement: adjusting the tonal characteristics of a source image to match a reference, transferring a known mask to a new image, and kernel estimation for image deblurring.", "We present a novel method for aligning images in an HDR (high-dynamic-range) image stack to produce a new exposure stack where all the images are aligned and appear as if they were taken simultaneously, even in the case of highly dynamic scenes. Our method produces plausible results even where the image used as a reference is either too dark or bright to allow for an accurate registration." ] }
1604.04724
2339134033
Automatic segmentation of objects from a single image is a challenging problem which generally requires training on a large number of images. We consider the problem of automatically segmenting only the dynamic objects from a given pair of images of a scene captured from different positions. We exploit dense correspondences along with saliency measures in order to first localize the interest points on the dynamic objects from the two images. We propose a novel approach based on techniques from computational geometry in order to automatically segment the dynamic objects from both the images using a top-down segmentation strategy. We discuss how the proposed approach is novel compared to other state-of-the-art segmentation algorithms. We show that the proposed approach for segmentation is efficient in handling large motions and is able to achieve very good segmentation of the objects for different scenes. We analyse the results with respect to the manually marked ground truth segmentation masks created using our own dataset and provide key observations in order to improve the work in the future.
Visual saliency refers to the attention a human observer pays to particular objects in a scene. Visual attention was first modelled in the seminal work of Itti et al. @cite_17. Interest in estimating saliency from an image has since grown in the computer vision community. A recent approach exploits global contrast to produce a saliency map of a given image @cite_39, while another recent work measures saliency from the context of a given image @cite_27. These approaches operate at the pixel level. A more efficient and accurate approach estimates saliency at the patch level @cite_8: by processing the patch around a given pixel location, one can produce a good saliency map.
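As a minimal sketch of the global-contrast idea behind @cite_39 (reduced here to grey levels, whereas the original works with colour and region contrast), a pixel is salient when its intensity is rare, i.e. far from the intensities of all other pixels; a 256-bin histogram keeps the computation at O(N + 256^2) instead of O(N^2):

import numpy as np

def global_contrast_saliency(gray):
    """gray: 2-D uint8 image; returns a saliency map scaled to [0, 1]."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    # contrast of each grey level: histogram-weighted distance to every level
    dist = np.abs(levels[:, None] - levels[None, :])   # 256 x 256 table
    level_sal = dist @ hist                            # per-level global contrast
    sal = level_sal[gray]                              # look up per pixel
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)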
{ "cite_N": [ "@cite_8", "@cite_27", "@cite_39", "@cite_17" ], "mid": [ "2122076510", "", "2037954058", "2128272608" ], "abstract": [ "What makes an object salient? Most previous work assert that distinctness is the dominating factor. The difference between the various algorithms is in the way they compute distinctness. Some focus on the patterns, others on the colors, and several add high-level cues and priors. We propose a simple, yet powerful, algorithm that integrates these three factors. Our key contribution is a novel and fast approach to compute pattern distinctness. We rely on the inner statistics of the patches in the image for identifying unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly-used datasets.", "", "Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.", "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing saliency. The system breaks down the complex problem of scene understanding by rapidly selecting, in a computationally efficient manner, conspicuous locations to be analyzed in detail." ] }
1604.04570
2950985557
This is the first work investigating community structure and interaction dynamics through the lens of quotes in online discussion forums. We examine four forums of different size, language, and topic. Quote usage, which is surprisingly consistent over time and users, appears to have an important role in aiding intra-thread navigation, and uncovers a hidden “social” structure in communities otherwise lacking all trappings (from friends and followers to reputations) of today’s social networks.
This paper extends some of our preliminary research @cite_14, which examines the role of quotes in coagulating and organizing discussion and also suggests that they could reveal the social structure of the debating community. We divide the related literature into three areas: discussion organization, evolution, and interpretation; user identification and characterization; and the emergence of social structure from interaction.
{ "cite_N": [ "@cite_14" ], "mid": [ "2488888433" ], "abstract": [ "We analyse the usage of quotes in forum.rpg.net, the largest online forum on tabletop roleplaying games. Quote usage appears pervasive and surprisingly consistent over time and users; it seems to have a role in aiding intra-thread navigation; and it reveals an underlying \"social\" structure in a community that otherwise lacks all trappings (from friends and followers to reputations) of today's social networks. This is the first work to investigate community structure and interaction through the lens of quotes in an online forum." ] }
1604.04570
2950985557
This is the first work investigating community structure and interaction dynamics through the lens of quotes in online discussion forums. We examine four forums of different size, language, and topic. Quote usage, which is surprisingly consistent over time and users, appears to have an important role in aiding intra-thread navigation, and uncovers a hidden “social” structure in communities otherwise lacking all trappings (from friends and followers to reputations) of today’s social networks.
Considerable effort has been devoted to understanding how online discussion initiates, evolves, and is received by users. Conversation thread structure has been investigated mostly through patterns of post metadata rather than content @cite_5 @cite_19; interestingly, information on timing and user identity allegedly improves accuracy in reconstructing thread structure, which suggests online discussion is governed by social conventions richer than simple turn taking. An increasingly popular topic is predicting the propagation of a piece of content through retweets @cite_20, rumors @cite_8, and memes @cite_1; although these citation mechanisms resemble quotes in affording information sharing and source attribution, they are embedded within the frame of social and news media, platforms not designed for peer discussion. Looking at citation content instead of dissemination, recent research has built tools to interpret public dialog through quotes, exposing, e.g., the systematic bias in news media outlets @cite_17 or what influences credibility in social media text @cite_11.
{ "cite_N": [ "@cite_11", "@cite_8", "@cite_1", "@cite_19", "@cite_5", "@cite_20", "@cite_17" ], "mid": [ "2137449855", "", "2127492100", "138267615", "2128005952", "2101196063", "2166273866" ], "abstract": [ "How do journalists mark quoted content as certain or uncertain, and how do readers interpret these signals? Predicates such as thinks, claims, and admits offer a range of options for framing quoted content according to the author’s own perceptions of its credibility. We gather a new dataset of direct and indirect quotes from Twitter, and obtain annotations of the perceived certainty of the quoted statements. We then compare the ability of linguistic and extra-linguistic features to predict readers’ assessment of the certainty of quoted content. We see that readers are indeed influenced by such framing devices — and we find no evidence that they consider other factors, such as the source, journalist, or the content itself. In addition, we examine the impact of specific framing devices on perceptions of credibility.", "", "Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits.", "Online discussion boards, or Internet forums, are a significant part of the Internet. People use Internet forums to post questions, provide advice and participate in discussions. These online conversations are represented as threads, and the conversation trees within these threads are important in understanding the behaviour of online users. Unfortunately, the reply structures of these threads are generally not publicly accessible or not maintained. Hence, in this paper, we introduce an efficient and simple approach to reconstruct the reply structure in threaded conversations. We contrast its accuracy against three baseline algorithms, and show that our algorithm can accurately recreate the in and out degree distributions of forum reply graphs built from the reconstructed reply structures.", "How do online conversations build? 
Is there a common model that human communication follows? In this work we explore these questions in detail. We analyze the structure of conversations in three different social datasets, namely, Usenet groups, Yahoo! Groups, and Twitter. We propose a simple mathematical model for the generation of basic conversation structures and then refine this model to take into account the identities of each member of the conversation.", "Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85 ) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it.", "Given the extremely large pool of events and stories available, media outlets need to focus on a subset of issues and aspects to convey to their audience. Outlets are often accused of exhibiting a systematic bias in this selection process, with different outlets portraying different versions of reality. However, in the absence of objective measures and empirical evidence, the direction and extent of systematicity remains widely disputed. In this paper we propose a framework based on quoting patterns for quantifying and characterizing the degree to which media outlets exhibit systematic bias. We apply this framework to a massive dataset of news articles spanning the six years of Obama's presidency and all of his speeches, and reveal that a systematic pattern does indeed emerge from the outlet's quoting behavior. Moreover, we show that this pattern can be successfully exploited in an unsupervised prediction setting, to determine which new quotes an outlet will select to broadcast. By encoding bias patterns in a low-rank space we provide an analysis of the structure of political media coverage. This reveals a latent media bias space that aligns surprisingly well with political ideology and outlet type. 
A linguistic analysis exposes striking differences across these latent dimensions, showing how the different types of media outlets portray different realities even when reporting on the same events. For example, outlets mapped to the mainstream conservative side of the latent space focus on quotes that portray a presidential persona disproportionately characterized by negativity." ] }
1604.04570
2950985557
This is the first work investigating community structure and interaction dynamics through the lens of quotes in online discussion forums. We examine four forums of different size, language, and topic. Quote usage, which is surprisingly consistent over time and users, appears to have an important role in aiding intra-thread navigation, and uncovers a hidden “social” structure in communities otherwise lacking all trappings (from friends and followers to reputations) of today’s social networks.
A problem related to, but different from, identification is that of user characterization: for example, retweets have been used to infer the “Big Five” personality traits of users @cite_10. While we believe the analysis of quotes could be profitably applied to this task, it is a line of research beyond the scope of this paper.
{ "cite_N": [ "@cite_10" ], "mid": [ "2039779211" ], "abstract": [ "In this paper, we examine to which degree behavioral measures can be used to predict personality. Personality is one factor that dictates people's propensity to trust and their relationships with others. In previous work, we have shown that personality can be predicted relatively accurately by analyzing social media profiles. We demonstrated this using public data from facebook profiles and text from Twitter streams. As social situations are crucial in the formation of one's personality, one's social behavior could be a strong indicator of her personality. Given most users of social media sites typically have a large number of friends and followers, considering only these aspects may not provide an accurate picture of personality. To overcome this problem, we develop a set of measures based on one's behavior towards her friends and followers. We introduce a number of measures that are based on the intensity and number of social interactions one has with friends along a number of dimensions such as reciprocity and priority. We analyze these features along with a set of features based on the textual analysis of the messages sent by the users. We show that behavioral features are very useful in determining personality and perform as well as textual features." ] }
1604.04570
2950985557
This is the first work investigating community structure and interaction dynamics through the lens of quotes in online discussion forums. We examine four forums of different size, language, and topic. Quote usage, which is surprisingly consistent over time and users, appears to have an important role in aiding intra-thread navigation, and uncovers a hidden “social” structure in communities otherwise lacking all trappings (from friends and followers to reputations) of today’s social networks.
An extensive body of research has focused on understanding the social mechanisms triggering the creation of an edge in a social network -- both at the link level @cite_12 @cite_0, and at the entire network level @cite_18. However, it is an open question whether links in online social networks are reliable indicators of bonding.
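To make the link-level analysis concrete, the following small Python sketch ranks candidate edges with the common-neighbours proximity measure, one of the simple measures surveyed in @cite_12; the toy graph is invented purely for illustration.

```python
import networkx as nx

# Toy undirected graph; nodes and edges are invented for illustration.
g = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("b", "d")])

def common_neighbors_score(graph, u, v):
    """Number of shared neighbours: a basic proximity measure for link prediction."""
    return len(set(graph[u]) & set(graph[v]))

# Rank node pairs that are not yet linked; high scores suggest likely future edges.
ranked = sorted(nx.non_edges(g), key=lambda e: common_neighbors_score(g, *e), reverse=True)
print(ranked)  # the pair (a, d), which shares the neighbours b and c
```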
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_12" ], "mid": [ "", "2151078464", "2148847267" ], "abstract": [ "", "We present a detailed study of network evolution by analyzing four large online social networks with full temporal information about node and edge arrivals. For the first time at such a large scale, we study individual node arrival and edge creation processes that collectively lead to macroscopic properties of networks. Using a methodology based on the maximum-likelihood principle, we investigate a wide variety of network formation strategies, and show that edge locality plays a critical role in evolution of networks. Our findings supplement earlier network models based on the inherently non-local preferential attachment. Based on our observations, we develop a complete model of network evolution, where nodes arrive at a prespecified rate and select their lifetimes. Each node then independently initiates edges according to a \"gap\" process, selecting a destination for each edge according to a simple triangle-closing model free of any parameters. We show analytically that the combination of the gap distribution with the node lifetime leads to a power law out-degree distribution that accurately reflects the true network in all four cases. Finally, we give model parameter settings that allow automatic evolution and generation of realistic synthetic networks of arbitrary scale.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc." ] }
1604.04570
2950985557
This is the first work investigating community structure and interaction dynamics through the lens of quotes in online discussion forums. We examine four forums of different size, language, and topic. Quote usage, which is surprisingly consistent over time and users, appears to have an important role in aiding intra-thread navigation, and uncovers a hidden “social” structure in communities otherwise lacking all trappings (from friends and followers to reputations) of today’s social networks.
A related research line of considerable practical interest involves inferring social networks from the actual observed interactions (such as exchanged messages or co-presence at events) @cite_9. Platforms analyzed in the literature include academic citation networks @cite_7, online college communities @cite_15, email @cite_2, and phone call logs @cite_3; yet quotes in online forums have never been investigated to date.
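As a minimal illustration of inferring a social structure from observed interactions, the sketch below builds a weighted directed graph from hypothetical (quoter, quoted author) events, in the spirit of the quote networks studied in this paper; the events and usernames are made up.

```python
import networkx as nx

# Hypothetical quote events: (quoting user, author of the quoted post).
quote_events = [("alice", "bob"), ("alice", "bob"), ("carol", "alice"),
                ("bob", "carol"), ("carol", "alice")]

g = nx.DiGraph()
for quoter, quoted in quote_events:
    # Each repeated interaction increases the weight of the directed edge.
    w = g.get_edge_data(quoter, quoted, default={"weight": 0})["weight"]
    g.add_edge(quoter, quoted, weight=w + 1)

# Edge weights encode interaction intensity, a rough proxy for tie strength.
for u, v, d in g.edges(data=True):
    print(f"{u} -> {v}: {d['weight']}")
```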
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_2", "@cite_15" ], "mid": [ "2111708605", "2009919904", "2156491077", "2144009057", "2107838985" ], "abstract": [ "How do real graphs evolve over time? What are \"normal\" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time.Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing super-linearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log(log n)).Existing graph generation models do not exhibit these types of behavior, even at a qualitative level. We provide a new graph generator, based on a \"forest fire\" spreading process, that has a simple, intuitive justification, requires very few parameters (like the \"flammability\" of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study.", "A majority of social network research deals with explicitly formed social networks. Although only rarely acknowledged for its existence, we believe that implicit social networks play a significant role in the overall dynamics of social networks. We propose a framework to evaluate the dynamics and characteristics of a set of explicit and associated implicit social networks. Specifically, we propose a social network matrix to measure the implicit relationships among the entities in various social networks. We also derive several indicators to characterize the dynamics in online social networks. We proceed by incorporating implicit social networks in a traditional network flow context to evaluate key network performance indicators such as the lowest communication cost, maximum information flow, and the budgetary constraints. We consider (online) implicit social networks.We propose a social network matrix to measure the implicit relationships among the entities in various social networks.We derive several indicators to characterize the dynamics in online social networks.", "Given a set of people and a set of events attended by them, we address the problem of measuring connectedness or tie strength between each pair of persons. The underlying assumption is that attendance at mutual events gives an implicit social network between people. We take an axiomatic approach to this problem. Starting from a list of axioms, which a measure of tie strength must satisfy, we characterize functions that satisfy all the axioms. We then show that there is a range of tie-strength measures that satisfy this characterization. A measure of tie strength induces a ranking on the edges of the social network (and on the set of neighbors for every person). We show that for applications where the ranking, and not the absolute value of the tie strength, is the important thing about the measure, the axioms are equivalent to a natural partial order. 
To settle on a particular measure, we must make a non-obvious decision about extending this partial order to a total order. This decision is best left to particular applications. We also classify existing tie-strength measures according to the axioms that they satisfy; and observe that none of the \"self-referential\" tie-strength measures satisfy the axioms. In our experiments, we demonstrate the efficacy of our approach; show the completeness and soundness of our axioms, and present Kendall Tau Rank Correlation between various tie-strength measures.", "Although users of online communication tools rarely categorize their contacts into groups such as \"family\", \"co-workers\", or \"jogging buddies\", they nonetheless implicitly cluster contacts, by virtue of their interactions with them, forming implicit groups. In this paper, we describe the implicit social graph which is formed by users' interactions with contacts and groups of contacts, and which is distinct from explicit social graphs in which users explicitly add other individuals as their \"friends\". We introduce an interaction-based metric for estimating a user's affinity to his contacts and groups. We then describe a novel friend suggestion algorithm that uses a user's implicit social graph to generate a friend group, given a small seed set of contacts which the user has already labeled as friends. We show experimental results that demonstrate the importance of both implicit group relationships and interaction-based affinity ranking in suggesting friends. Finally, we discuss two applications of the Friend Suggest algorithm that have been released as Gmail Labs features.", "This research draws on longitudinal network data from an online community to examine patterns of users' behavior and social interaction, and infer the processes underpinning dynamics of system use. The online community represents a prototypical example of a complex evolving social network in which connections between users are established over time by online messages. We study the evolution of a variety of properties since the inception of the system, including how users create, reciprocate, and deepen relationships with one another, variations in users' gregariousness and popularity, reachability and typical distances among users, and the degree of local redundancy in the system. Results indicate that the system is a “small world” characterized by the emergence, in its early stages, of a hub-dominated structure with heterogeneity in users' behavior. We investigate whether hubs are responsible for holding the system together and facilitating information flow, examine first-mover advantages underpinning users' ability to rise to system prominence, and uncover gender differences in users' gregariousness, popularity, and local redundancy. We discuss the implications of the results for research on system use and evolving social networks, and for a host of applications, including information diffusion, communities of practice, and the security and robustness of information systems. © 2009 Wiley Periodicals, Inc." ] }
1604.04528
2338710835
The Kinect skeleton tracker is able to achieve considerable human body tracking performance in a convenient and low-cost manner. However, the tracker often captures unnatural human poses such as discontinuous and vibrating motions when self-occlusions occur. A majority of approaches tackle this problem by using multiple Kinect sensors in a workspace. The measurements from different sensors are then combined in a Kalman filter framework, or an optimization problem is formulated for sensor fusion. However, these methods usually require heuristics to measure the reliability of the measurements observed from each Kinect sensor. In this paper, we developed a method to improve the Kinect skeleton using a single Kinect sensor, in which a supervised learning technique was employed to correct unnatural tracking motions. Specifically, deep recurrent neural networks were used for improving joint positions and velocities of the Kinect skeleton, and three methods were proposed to integrate the refined positions and velocities for further enhancement. Moreover, we suggested a novel measure to evaluate the naturalness of captured motions. We evaluated the proposed approach by comparison with the ground truth obtained using a commercial optical marker-based motion capture system.
Skeleton tracking algorithms can be classified into single-view models @cite_9, @cite_8, @cite_10 and multi-view models @cite_11, @cite_18. Shotton et al. @cite_7 proposed a new method to predict the 3D positions of body joints from a single depth image. In their method, an intermediate representation of body parts is designed to map the pose estimation problem onto a per-pixel classification problem. An extensively large and highly varied training data set allows the random forest classifier to estimate body parts invariantly to pose, body shape, clothing, etc. Finally, confidence-scored 3D proposals of several body joints are generated by re-projecting the classification results to the 3D world and finding local modes. As a result, this approach can quickly and accurately predict the 3D positions of body joints. The skeleton trackers in both the first and second versions of the Kinect SDK are based on this algorithm. However, a 3D body pose estimated from a single view frequently suffers from incorrectly determined joint positions during self-occluding motions. Consequently, the Kinect skeleton tracker often captures discontinuous movements or unwanted vibration.
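As a rough illustration of the two-stage idea behind @cite_7 (per-pixel body-part classification, followed by re-projection to 3D and local-mode finding), consider the minimal Python sketch below. The features, labels, body-part count, and camera intrinsics are placeholders, not the depth-comparison features or training corpus of the original work.

```python
# A heavily simplified sketch: classify each depth pixel into a body part,
# back-project pixels to 3D, and take per-part density modes as joint proposals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import MeanShift

# Hypothetical inputs: a depth image (metres) and per-pixel features.
H, W = 240, 320
depth = np.random.uniform(1.0, 4.0, size=(H, W))
features = np.random.randn(H * W, 16)          # stand-in for depth-difference features
labels = np.random.randint(0, 3, size=H * W)   # stand-in training labels (3 body parts)

forest = RandomForestClassifier(n_estimators=10).fit(features, labels)
parts = forest.predict(features).reshape(H, W)

# Back-project pixels to 3D with a pinhole model (fx, fy, cx, cy assumed).
fx = fy = 285.0
cx, cy = W / 2.0, H / 2.0
v, u = np.mgrid[0:H, 0:W]
xyz = np.stack([(u - cx) * depth / fx, (v - cy) * depth / fy, depth], axis=-1)

# One joint proposal per part: a density mode of that part's 3D points.
for part in range(3):
    pts = xyz[parts == part]
    if len(pts):
        modes = MeanShift(bandwidth=0.3).fit(pts[:2000]).cluster_centers_
        print(f"part {part}: joint proposal at {modes[0]}")
```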
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_9", "@cite_10", "@cite_11" ], "mid": [ "2027087416", "2172156083", "2141827760", "2088946584", "2131417778", "2168415715" ], "abstract": [ "This paper presents a novel method to extract skeletons of complex articulated objects from 3D point cloud sequences collected by the Kinect. Our approach is more robust than the traditional video-based and stereo-based approaches, as the Kinect directly provides 3D information without any markers, 2D-to-3D-transition assumptions, and feature point extraction. We track all the raw 3D points on the object, and utilize the point trajectories to determine the object skeleton. The point tracking is achieved by the 3D non-rigid matching based on the Markov Random Field (MRF) Deformation Model. To reduce the large computational cost of the non-rigid matching, a coarse-to-fine procedure is proposed. To the best of our knowledge, this is the first to extract skeletons of highly deformable objects from 3D point cloud sequences by point tracking. Experiments prove our method's good performance, and the extracted skeletons are successfully applied to the motion capture.", "We propose a new method to quickly and accurately predict 3D positions of body joints from a single depth image, using no temporal information. We take an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler per-pixel classification problem. Our large and highly varied training dataset allows the classifier to estimate body parts invariant to pose, body shape, clothing, etc. Finally we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs at 200 frames per second on consumer hardware. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state of the art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.", "We propose a novel method for tracking an articulated model in a 3D-point cloud. The tracking problem is formulated as the registration of two point sets, one of them parameterised by the model’s state vector and the other acquired from a 3D-sensor system. Finding the correct parameter vector is posed as a linear estimation problem, which is solved by means of a scaled unscented Kalman filter. Our method draws on concepts from the widely used iterative closest point registration algorithm (ICP), basing the measurement model on point correspondences established between the synthesised model point cloud and the measured 3D-data. We apply the algorithm to kinematically track a model of the human upper body on a point cloud obtained through stereo image processing from one or more stereo cameras. We determine torso position and orientation as well as joint angles of shoulders and elbows. The algorithm has been successfully tested on thousands of frames of real image data. Challenging sequences of several minutes length where tracked correctly. Complete processing time remains below one second per frame.", "This paper presents a new two-stage multi-view framework for the analysis of human interactions and activities. The analysis is performed in a distributed multi-view vision system that synergistically integrates track- and body-level processing. 
The proposed framework is geared toward versatile and easily-deployable systems that do not require careful camera calibration. The main contributions of the paper are as follows; (1) context-dependent view switching for occlusion handling, (2) a method for switching the two-stage analysis between the track- and body-level processing, and (3) a hypothesis-verification paradigm for top-down feedback that exploits the spatio-temporal constraints inherent in human interaction. An experimental evaluation shows the efficacy of the proposed system for analyzing multi-person interactions.", "We introduce a framework for unconstrained 3D human upper body pose estimation from multiple camera views in complex environment. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model texture adaptation. Single-frame pose recovery consists of a hypothesis generation stage, in which candidate 3D poses are generated, based on probabilistic hierarchical shape matching in each camera view. In the subsequent hypothesis verification stage, the candidate 3D poses are re-projected into the other camera views and ranked according to a multi-view likelihood measure. Temporal integration consists of computing K-best trajectories combining a motion model and observations in a Viterbi-style maximum-likelihood approach. Poses that lie on the best trajectories are used to generate and adapt a texture model, which in turn enriches the shape likelihood measure used for pose recovery. The multiple trajectory hypotheses are used to generate pose predictions, augmenting the 3D pose candidates generated at the next time step. We demonstrate that our approach outperforms the state-of-the-art in experiments with large and challenging real-world data from an outdoor setting.", "In recent years, depth cameras have become a widely available sensor type that captures depth images at real-time frame rates. Even though recent approaches have shown that 3D pose estimation from monocular 2.5D depth images has become feasible, there are still challenging problems due to strong noise in the depth data and self-occlusions in the motions being captured. In this paper, we present an efficient and robust pose estimation framework for tracking full-body motions from a single depth image stream. Following a data-driven hybrid strategy that combines local optimization with global retrieval techniques, we contribute several technical improvements that lead to speed-ups of an order of magnitude compared to previous approaches. In particular, we introduce a variant of Dijkstra's algorithm to efficiently extract pose features from the depth data and describe a novel late-fusion scheme based on an efficiently computable sparse Hausdorff distance to combine local and global pose estimates. Our experiments show that the combination of these techniques facilitates real-time tracking with stable results even for fast and complex motions, making it applicable to a wide range of inter-active scenarios." ] }
1604.04528
2338710835
The Kinect skeleton tracker is able to achieve considerable human body tracking performance in a convenient and low-cost manner. However, the tracker often captures unnatural human poses such as discontinuous and vibrating motions when self-occlusions occur. A majority of approaches tackle this problem by using multiple Kinect sensors in a workspace. The measurements from different sensors are then combined in a Kalman filter framework, or an optimization problem is formulated for sensor fusion. However, these methods usually require heuristics to measure the reliability of the measurements observed from each Kinect sensor. In this paper, we developed a method to improve the Kinect skeleton using a single Kinect sensor, in which a supervised learning technique was employed to correct unnatural tracking motions. Specifically, deep recurrent neural networks were used for improving joint positions and velocities of the Kinect skeleton, and three methods were proposed to integrate the refined positions and velocities for further enhancement. Moreover, we suggested a novel measure to evaluate the naturalness of captured motions. We evaluated the proposed approach by comparison with the ground truth obtained using a commercial optical marker-based motion capture system.
Therefore, approaches that utilize multiple views have recently begun to receive significant attention. For example, Zhang et al. @cite_16 fused individual depth images into a joint point cloud and used an efficient particle filtering approach for pose estimation. Likewise, Liu et al. @cite_3 presented a markerless motion capture approach for multi-view video that reconstructs the skeletal motion and detailed surface geometries of two closely interacting people. The approach presented in this paper differs from the methods described above. Specifically, our goal was not to develop a method that estimates the 3D positions of body joints directly from raw depth or RGB images, but rather to investigate how to generate more human-like, natural motion by improving the estimated Kinect v2 skeleton.
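A toy sketch of the multi-view fusion idea in @cite_16 is given below: point clouds from several depth sensors are transformed into a common frame and stacked, and candidate pose particles are then weighted by their proximity to the fused cloud. The extrinsics, clouds, and single-joint "model" are illustrative assumptions, not the actual 22-DOF tracker.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse(clouds, extrinsics):
    """Transform each (N, 3) cloud by its 4x4 camera-to-world matrix and stack."""
    world = []
    for pts, T in zip(clouds, extrinsics):
        homog = np.c_[pts, np.ones(len(pts))]
        world.append((homog @ T.T)[:, :3])
    return np.vstack(world)

def particle_weights(particles, cloud, sigma=0.05):
    """Weight each candidate joint position by proximity to the fused cloud."""
    tree = cKDTree(cloud)
    dists, _ = tree.query(particles)           # nearest observed point per particle
    w = np.exp(-0.5 * (dists / sigma) ** 2)
    return w / w.sum()

# Made-up data: two sensors, identity extrinsics, one joint's pose particles.
clouds = [np.random.randn(500, 3), np.random.randn(500, 3) + [1, 0, 0]]
extrinsics = [np.eye(4), np.eye(4)]
fused = fuse(clouds, extrinsics)
particles = np.random.randn(100, 3)            # candidate 3D positions of one joint
print(particle_weights(particles, fused)[:5])
```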
{ "cite_N": [ "@cite_16", "@cite_3" ], "mid": [ "2047975213", "2089384364" ], "abstract": [ "In this paper, we consider the problem of tracking human motion with a 22-DOF kinematic model from depth images. In contrast to existing approaches, our system naturally scales to multiple sensors. The motivation behind our approach, termed Multiple Depth Camera Approach (MDCA), is that by using several cameras, we can significantly improve the tracking quality and reduce ambiguities as for example caused by occlusions. By fusing the depth images of all available cameras into one joint point cloud, we can seamlessly incorporate the available information from multiple sensors into the pose estimation. To track the high-dimensional human pose, we employ state-of-the-art annealed particle filtering and partition sampling. We compute the particle likelihood based on the truncated signed distance of each observed point to a parameterized human shape model. We apply a coarse-to-fine scheme to recognize a wide range of poses to initialize the tracker. In our experiments, we demonstrate that our approach can accurately track human motion in real-time (15Hz) on a GPGPU. In direct comparison to two existing trackers (OpenNI, Microsoft Kinect SDK), we found that our approach is significantly more robust for unconstrained motions and under (partial) occlusions.", "Capturing the skeleton motion and detailed time-varying surface geometry of multiple, closely interacting peoples is a very challenging task, even in a multicamera setup, due to frequent occlusions and ambiguities in feature-to-person assignments. To address this task, we propose a framework that exploits multiview image segmentation. To this end, a probabilistic shape and appearance model is employed to segment the input images and to assign each pixel uniquely to one person. Given the articulated template models of each person and the labeled pixels, a combined optimization scheme, which splits the skeleton pose optimization problem into a local one and a lower dimensional global one, is applied one by one to each individual, followed with surface estimation to capture detailed nonrigid deformations. We show on various sequences that our approach can capture the 3D motion of humans accurately even if they move rapidly, if they wear wide apparel, and if they are engaged in challenging multiperson motions, including dancing, wrestling, and hugging." ] }
1604.04528
2338710835
The Kinect skeleton tracker is able to achieve considerable human body tracking performance in a convenient and low-cost manner. However, the tracker often captures unnatural human poses such as discontinuous and vibrating motions when self-occlusions occur. A majority of approaches tackle this problem by using multiple Kinect sensors in a workspace. The measurements from different sensors are then combined in a Kalman filter framework, or an optimization problem is formulated for sensor fusion. However, these methods usually require heuristics to measure the reliability of the measurements observed from each Kinect sensor. In this paper, we developed a method to improve the Kinect skeleton using a single Kinect sensor, in which a supervised learning technique was employed to correct unnatural tracking motions. Specifically, deep recurrent neural networks were used for improving joint positions and velocities of the Kinect skeleton, and three methods were proposed to integrate the refined positions and velocities for further enhancement. Moreover, we suggested a novel measure to evaluate the naturalness of captured motions. We evaluated the proposed approach by comparison with the ground truth obtained using a commercial optical marker-based motion capture system.
Indeed, there have been relatively few studies that determine skeleton pose by enhancing Kinect skeleton tracking. Masse et al. @cite_5 presented a framework that obtains the 3D positions of body joints from multiple Kinect sensors and then feeds the measured skeletons into a gated Kalman filter. In their method, the gated Kalman filter rejects a skeleton pose when its measurement residual, referred to as the innovation, fails the gating test. This is done in order to discard faulty sensor readings and retain correct measurements. For quantitative evaluation, a commercial motion capture system is used to obtain the ground truth. However, the measurement rejection step is quite simple and relies entirely on the innovation, which can often lead to ineffective measurement fusion.
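For concreteness, the following minimal sketch shows the kind of gating test such a filter applies: a measured joint position is kept only if its normalized innovation passes a chi-square validation gate. The matrices and threshold are illustrative assumptions; @cite_5 does not publish this exact code.

```python
import numpy as np
from scipy.stats import chi2

def gate(z, z_pred, S, prob=0.99):
    """Return True if measurement z passes the gate for predicted measurement z_pred."""
    innovation = z - z_pred                           # measurement residual
    d2 = innovation @ np.linalg.solve(S, innovation)  # squared Mahalanobis distance
    return d2 <= chi2.ppf(prob, df=len(z))

z_pred = np.array([0.1, 1.2, 2.0])   # predicted joint position (made up)
S = np.eye(3) * 0.01                 # innovation covariance (made up)
print(gate(np.array([0.12, 1.18, 2.02]), z_pred, S))  # consistent reading -> True
print(gate(np.array([0.90, 0.10, 3.50]), z_pred, S))  # faulty reading -> False
```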
{ "cite_N": [ "@cite_5" ], "mid": [ "201697742" ], "abstract": [ "Joint advent of affordable color and depth sensors and super-realtime skeleton detection, has produced a surge of research on Human Motion Capture. They provide a very important key to communication between Man and Machine. But the design was willing and closed-loop interaction, which allowed approximations and mandates a particular sensor setup. In this paper, we present a multiple sensor-based approach, designed to augment the robustness and precision of human joint positioning, based on delayed logic and filtering, of skeleton detected on each sensor." ] }
1604.04728
2017435659
In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.
In the last few years, there has been growing interest in multiagent systems as a support for complex and distributed systems. Among these complex systems, there is special interest in scenarios where multiple agents, with possibly conflicting goals, cooperate with each other to reach their own goals. The benefits of cooperation and coordination are well known, and, as stated by Klein @cite_15, computer systems may help us to identify and apply the appropriate coordination mechanism. Due to the inherent conflict among agents, techniques that allow agents to resolve their conflicts and cooperate are needed. This need has given birth to a group of technologies recently referred to as agreement technologies @cite_25. Trust and reputation @cite_6, norms @cite_8, agent organizations @cite_16 @cite_4, argumentation @cite_17 @cite_2, and automated negotiation @cite_27 @cite_22 are part of the core that makes up this new family of technologies.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_8", "@cite_6", "@cite_27", "@cite_2", "@cite_15", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "", "2039710942", "1599773397", "2120312633", "2115852343", "", "1488037644", "2039048406", "", "3558650" ], "abstract": [ "", "Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed as one of the mechanisms that may provide a solution to this kind of problems. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented where it is assumed that agents have computational bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while maintaining the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process and of obtaining similar results to similarity heuristics which have access to all of the possible offers.", "In this paper we present some concepts and their relations that are necessary for modeling autonomous agents in an environment that is governed by some (social) norms. We divide the norms over three levels: the private level the contract level and the convention level. We show how deontic logic can be used to model the concepts and how the theory of speech acts can be used to model the generation of (some of) the norms. Finally we give some idea about an agent architecture incorporating the social norms based on a BDI framework.", "The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increase the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of the so called intelligent or autonomous agents and multi-agent systems (MAS) together with the spectacular emergence of the information society technologies (specially reflected by the popularization of electronic commerce) are responsible for the increasing interest on trust and reputation mechanisms applied to electronic societies. This review wants to offer a panoramic view on current computational trust and reputation models.", "", "", "Several distinct kinds of “coordination technology” have evolved to support effective coordination in cooperative work. This paper reviews some of the major weaknesses with current coordination technology and suggests several technical directions for addressing these weaknesses. 
These directions include developing semi-structured process representations that explicitly capture cooperative work inter-dependencies, exploiting advanced product and software design technologies for process design, and integrating coordination technologies to synergistically combine their strengths and avoid their individual weaknesses.", "Many researchers have demonstrated that the organizational design employed by an agent system can have a significant, quantitative effect on its performance characteristics. A range of organizational strategies have emerged from this line of research, each with different strengths and weaknesses. In this article we present a survey of the major organizational paradigms used in multi-agent systems. These include hierarchies, holarchies, coalitions, teams, congregations, societies, federations, markets, and matrix organizations. We will provide a description of each, discuss their advantages and disadvantages, and provide examples of how they may be instantiated and maintained. This summary will facilitate the comparative evaluation of organizational styles, allowing designers to first recognize the spectrum of possibilities, and then guiding the selection of an appropriate organizational design for a particular domain and environment.", "", "This contribution proposes a model for argumentation-based multi-agent planning, with a focus on cooperative scenarios. It consists in a multi-agent extension of DeLP-POP, partial order planning on top of argumentation-based defeasible logic programming. In DeLP-POP, actions and arguments (combinations of rules and facts) may be used to enforce some goal, if their conditions (are known to) apply and arguments are not defeated by other arguments applying. In a cooperative planning problem a team of agents share a set of goals but have diverse abilities and beliefs. In order to plan for these goals, agents start a stepwise dialogue consisting of exchanges of plan proposals, plus arguments against them. Since these dialogues instantiate an A* search algorithm, these agents will find a solution if some solution exists, and moreover, it will be provably optimal (according to their knowledge)." ] }
1604.04728
2017435659
In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.
Despite being part of agreement technologies, automated negotiation has been studied by scholars for several years. Automated negotiation consists of an automated search for an agreement between two or more parties in which participants exchange proposals. Two different research trends can be distinguished in automated negotiation models. The first type of model aims to calculate the optimum strategy given certain information about the opponent and the negotiation environment @cite_11 @cite_10. The second type comprises heuristic models that do not calculate the optimum strategy but aim to obtain results as close to the optimum as possible @cite_23 @cite_13 @cite_24 @cite_3. These models assume imperfect knowledge about the opponent and the environment, and aim to be computationally tractable while obtaining good results. The present work belongs to the latter type of models.
{ "cite_N": [ "@cite_10", "@cite_3", "@cite_24", "@cite_23", "@cite_13", "@cite_11" ], "mid": [ "2144187367", "1523459335", "2110872636", "2105440797", "1531563433", "2035409777" ], "abstract": [ "This paper studies bilateral multi-issue negotiation between self-interested autonomous agents. Now, there are a number of different procedures that can be used for this process; the three main ones being the package deal procedure in which all the issues are bundled and discussed together, the simultaneous procedure in which the issues are discussed simultaneously but independently of each other, and the sequential procedure in which the issues are discussed one after another. Since each of them yields a different outcome, a key problem is to decide which one to use in which circumstances. Specifically, we consider this question for a model in which the agents have time constraints (in the form of both deadlines and discount factors) and information uncertainty (in that the agents do not know the opponent's utility function). For this model, we consider issues that are both independent and those that are interdependent and determine equilibria for each case for each procedure. In so doing, we show that the package deal is in fact the optimal procedure for each party. We then go on to show that, although the package deal may be computationally more complex than the other two procedures, it generates Pareto optimal outcomes (unlike the other two), it has similar earliest and latest possible times of agreement to the simultaneous procedure (which is better than the sequential procedure), and that it (like the other two procedures) generates a unique outcome only under certain conditions (which we define).", "This paper presents a decentralized model that allows self-interested agents to reach \"win-win\" agreements in a multi-attribute negotiation. The model is based on an alternating-offer protocol. In each period, the proposing agent is allowed to make a limited number of offers. The responding agent can select the best out of these offers. In the case of rejection, agents exchange their roles and the negotiation proceeds to the next period. To make counteroffers, an agent first uses the heuristic of choosing the offer on an indifference (or \"iso-utility\") curve surface that is closest to the best offer made by the opponent in the previous period, and then taking this offer as the seed, chooses several other offers randomly in a specified neighborhood of this seed offer. Experimental analysis shows agents can reach near Pareto optimal agreements in quite general situations following the model where agents may have complex preferences on the attributes and incomplete information. This model does not require the presence of a mediator.", "Automated negotiation is a key form of interaction in systems that are composed of multiple autonomous agents. The aim of such interactions is to reach agreements through an iterative process of making offers. The content of such proposals are, however, a function of the strategy of the agents. Here we present a strategy called the trade-off strategy where multiple negotiation decision variables are traded-off against one another (e.g., paying a higher price in order to obtain an earlier delivery date or waiting longer in order to obtain a higher quality service). Such a strategy is commonly known to increase the social welfare of agents. 
Yet, to date, most computational work in this area has ignored the issue of trade-offs, instead aiming to increase social welfare through mechanism design. The aim of this paper is to develop a heuristic computational model of the trade-off strategy and show that it can lead to an increased social welfare of the system. A novel linear algorithm is presented that enables software agents to make trade-offs for multi-dimensional goods for the problem of distributed resource allocation. Our algorithm is motivated by a number of real-world negotiation applications that we have developed and can operate in the presence of varying degrees of uncertainty. Moreover, we show that on average the total time used by the algorithm is linearly proportional to the number of negotiation issues under consideration. This formal analysis is complemented by an empirical evaluation that highlights the operational effectiveness of the algorithm in a range of negotiation scenarios. The algorithm itself operates by using the notion of fuzzy similarity to approximate the preference structure of the other negotiator and then uses a hill-climbing technique to explore the space of possible trade-offs for the one that is most likely to be acceptable.  2002 Elsevier Science B.V. All rights reserved.", "Abstract We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated.", "A component-based generic agent architecture for multi-attribute (integrative) negotiation is introduced and its application is described in a prototype system for negotiation about cars, developed in co-operation with, among others, Dutch Telecom KPN. The approach can be characterised as co-operative one-to-one multi-criteria negotiation in which the privacy of both parties is protected as much as possible.", "We study a bilateral multi-issue bargaining procedure with complete information and endogenous unrestricted agenda, in which offers can be made in any subset of outstanding issues. We find necessary and sufficient conditions for this procedure to have a unique subgame perfect equilibrium agreement." ] }
1604.04728
2017435659
In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.
Jonker and Treur propose the Agent-Based Market Place (ABMP) model @cite_13, where agents engage in bilateral negotiations. ABMP is a negotiation model in which proposed bids are concessions with respect to previous bids. The amount of concession is regulated by the concession factor (i.e., the reservation utility), the negotiation speed, the acceptable utility gap (the maximal difference between the target utility and the utility of an offer that is acceptable), and the impatience factor (which governs the probability of the agent leaving the negotiation process).
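A rough sketch of this kind of concession behavior is given below; the exponential update rule and parameter values are our simplification for illustration, not Jonker and Treur's exact formulas.

```python
def target_utility(rnd, u_init=1.0, u_reserve=0.6, speed=0.1):
    """Concede from u_init toward the reservation utility at a rate set by the speed."""
    return u_reserve + (u_init - u_reserve) * (1.0 - speed) ** rnd

def acceptable(offer_utility, rnd, gap=0.05):
    """Accept an offer whose utility is within the acceptable utility gap of the target."""
    return offer_utility >= target_utility(rnd) - gap

# How the target decays over rounds, and whether a 0.75-utility offer is acceptable.
for rnd in range(0, 20, 5):
    print(rnd, round(target_utility(rnd), 3), acceptable(0.75, rnd))
```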
{ "cite_N": [ "@cite_13" ], "mid": [ "1531563433" ], "abstract": [ "A component-based generic agent architecture for multi-attribute (integrative) negotiation is introduced and its application is described in a prototype system for negotiation about cars, developed in co-operation with, among others, Dutch Telecom KPN. The approach can be characterised as co-operative one-to-one multi-criteria negotiation in which the privacy of both parties is protected as much as possible." ] }
1604.04728
2017435659
In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.
Lai et al. @cite_3 propose a bilateral negotiation model where agents are allowed to propose up to @math different offers at each negotiation round. Offers are proposed from the current iso-utility curve according to a similarity mechanism that selects the offer most similar to the last offer received from the opponent. The chosen similarity heuristic is the Euclidean distance, since it is general and requires neither domain-specific knowledge nor information regarding the opponent's utility function. Results showed that the strategy is capable of reaching agreements that are very close to the Pareto frontier. Sanchez-Anguix et al. @cite_22 proposed an enhancement of this strategy for environments where computational resources are very limited and utility functions are complex. It relies on genetic algorithms to sample offers that are interesting for the agent itself and creates new offers during the negotiation process that are interesting for both parties. Results showed that the model is capable of obtaining statistically equivalent results to similar models that had the full iso-utility curve sampled, while being computationally more tractable.
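The similarity mechanism of @cite_3 can be illustrated with the minimal sketch below, which samples candidate offers, keeps those on (approximately) the current iso-utility curve, and proposes the one with the smallest Euclidean distance to the opponent's last offer. The linear additive utility function and random sampling scheme are assumptions made for the example.

```python
import numpy as np

def own_utility(offers, weights):
    return offers @ weights                      # linear additive utility (assumed)

def isoutility_offers(candidates, weights, target, tol=0.02):
    """Keep candidate offers whose own utility is close to the current target."""
    u = own_utility(candidates, weights)
    return candidates[np.abs(u - target) <= tol]

def most_similar(iso_offers, opponent_last):
    """Euclidean similarity: propose the iso-utility offer nearest the opponent's."""
    d = np.linalg.norm(iso_offers - opponent_last, axis=1)
    return iso_offers[np.argmin(d)]

rng = np.random.default_rng(0)
candidates = rng.uniform(0, 1, size=(5000, 4))   # 4 normalized issues, made-up domain
weights = np.array([0.4, 0.3, 0.2, 0.1])
iso = isoutility_offers(candidates, weights, target=0.7)
print(most_similar(iso, opponent_last=np.array([0.9, 0.8, 0.2, 0.1])))
```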
{ "cite_N": [ "@cite_22", "@cite_3" ], "mid": [ "2039710942", "1523459335" ], "abstract": [ "Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed as one of the mechanisms that may provide a solution to this kind of problems. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented where it is assumed that agents have computational bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while maintaining the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process and of obtaining similar results to similarity heuristics which have access to all of the possible offers.", "This paper presents a decentralized model that allows self-interested agents to reach \"win-win\" agreements in a multi-attribute negotiation. The model is based on an alternating-offer protocol. In each period, the proposing agent is allowed to make a limited number of offers. The responding agent can select the best out of these offers. In the case of rejection, agents exchange their roles and the negotiation proceeds to the next period. To make counteroffers, an agent first uses the heuristic of choosing the offer on an indifference (or \"iso-utility\") curve surface that is closest to the best offer made by the opponent in the previous period, and then taking this offer as the seed, chooses several other offers randomly in a specified neighborhood of this seed offer. Experimental analysis shows agents can reach near Pareto optimal agreements in quite general situations following the model where agents may have complex preferences on the attributes and incomplete information. This model does not require the presence of a mediator." ] }
1604.04728
2017435659
In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.
Multi-agent teamwork is also a closely related research topic. Agent teams have been proposed for a variety of tasks such as RoboCup @cite_29 , rescue tasks @cite_32 , and transportation tasks @cite_0 . However, as far as we know, there is no published work that considers teams of agents negotiating with an opponent. Most works on agent teamwork consider fully cooperative agents that work to maximize shared goals. The team negotiation setting is different since, even though team members share a common interest related to the negotiation, there may be competition among team members to maximize their own preferences.
{ "cite_N": [ "@cite_0", "@cite_29", "@cite_32" ], "mid": [ "2076064414", "2107280071", "2736978630" ], "abstract": [ "One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparative experiments in the real-world domain of electricity transportation management. Finally, the success of the approach of building a system with an explicit and grounded representation of cooperative problem solving is used to outline a proposal for the next generation of multi-agent systems.", "Multi-agent domains consisting of teams of agents that need to collaborate in an adversarial environment offer challenging research opportunities. In this article, we introduce periodic team synchronization (PTS) domains as time-critical environments in which agents act autonomously with low communication, but in which they can periodically synchronize in a full-communication setting. The two main contributions of this article are a flexible team agent structure and a method for inter-agent communication. First, the team agent structure allows agents to capture and reason about team agreements. We achieve collaboration between agents through the introduction of formations. A formation decomposes the task space defining a set of roles. Homogeneous agents can flexibly switch roles within formations, and agents can change formations dynamically, according to pre-defined triggers to be evaluated at run-time. This flexibility increases the performance of the overall team. Our teamwork structure further includes pre-planning for frequently occurring situations. Second, the communication method is designed for use during the low-communication periods in PTS domains. It overcomes the obstacles to inter-agent communication in multi-agent environments with unreliable, single-channel, high-cost, low-bandwidth communication. We fully implemented both the flexible teamwork structure and the communication method in the domain of simulated robotic soccer, and conducted controlled empirical experiments to verify their effectiveness. In addition, our simulator team made it to the semi-finals of the RoboCup-97 competition, in which 29 teams participated. It achieved a total score of 67–9 over six different games, and successfully demonstrated its flexible teamwork structure and inter-agent communication.", "Disaster rescue is one of the most serious social issues that involves very large numbers of heterogeneous agents in the hostile environment. 
The intention of the RoboCup Rescue project is to promote research and development in this socially significant domain at various levels, involving multiagent teamwork coordination, physical agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision-support systems, evaluation benchmarks for rescue strategies, and robotic systems that are all integrated into a comprehensive system in the future. For this effort, which was built on the success of the RoboCup Soccer project, we will provide forums of technical discussions and competitive evaluations for researchers and practitioners. Although the rescue domain is intuitively appealing as a large-scale multiagent and intelligent system domain, analysis has not yet revealed its domain characteristics. The first research evaluation meeting will be held at RoboCup-2001, in conjunction with the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), as part of the RoboCup Rescue Simulation League and RoboCup AAAI Rescue Robot Competition. In this article, we present a detailed analysis of the task domain and elucidate characteristics necessary for multiagent and intelligent systems for this domain. Then, we present an overview of the RoboCup Rescue project." ] }
1604.04649
2339931821
Data generated on location-based social networks provide rich information on the whereabouts of urban dwellers. Specifically, such data reveal who spends time where, when, and on what type of activity (e.g., shopping at a mall, or dining at a restaurant). That information can, in turn, be used to describe city regions in terms of activity that takes place therein. For example, the data might reveal that citizens visit one region mainly for shopping in the morning, while another for dining in the evening. Furthermore, once such a description is available, one can ask more elaborate questions: What are the features that distinguish one region from another -- is it simply the type of venues they host or is it the visitors they attract? What regions are similar across cities? In this paper, we attempt to answer these questions using publicly shared Foursquare data. In contrast with previous work, our method makes use of a probabilistic model with minimal assumptions about the data and thus relieves us from having to make arbitrary decisions in our analysis (e.g., regarding the granularity of discovered regions or the importance of different features). We perform an empirical comparison with previous work and discuss insights obtained through our findings.
Finding cohesive geographical regions within cities has been attempted using a variety of data sources, such as cellphone activity @cite_34 , geotagged tweets @cite_6 , social interactions @cite_21 , types of buildings @cite_5 , or public transport and taxi trajectories @cite_36 @cite_3 .
{ "cite_N": [ "@cite_36", "@cite_21", "@cite_6", "@cite_3", "@cite_5", "@cite_34" ], "mid": [ "1990444226", "1986623896", "1971631718", "1588430920", "2068761245", "2953145849" ], "abstract": [ "The step of urbanization and modern civilization fosters different functional zones in a city, such as residential areas, business districts, and educational areas. In a metropolis, people commute between these functional zones every day to engage in different socioeconomic activities, e.g., working, shopping, and entertaining. In this paper, we propose a data-driven framework to discover functional zones in a city. Specifically, we introduce the concept of latent activity trajectory (LAT), which captures socioeconomic activities conducted by citizens at different locations in a chronological order. Later, we segment an urban area into disjointed regions according to major roads, such as highways and urban expressways. We have developed a topic-modeling-based approach to cluster the segmented regions into functional zones leveraging mobility and location semantics mined from LAT. Furthermore, we identify the intensity of each functional zone using Kernel Density Estimation. Extensive experiments are conducted with several urban scale datasets to show that the proposed framework offers a powerful ability to capture city dynamics and provides valuable calibrations to urban planners in terms of functional zones.", "This study attempts to measure neighborhood boundaries in a novel way by creating network neighborhoods based on the density of social ties among adolescents. We create valued matrices based on social ties and physical distance between adolescents in the county. We then perform factor analyses on these valued matrices to detect these network neighborhoods. The resulting network neighborhoods show considerable spatial contiguity. We assess the quality of these aggregations by comparing the degree of agreement among residents assigned to the same network neighborhood when assessing various characteristics of their “neighborhood”, along with traditional definitions of neighborhoods from Census aggregations. Our findings suggest that these network neighborhoods are a valuable approach for “neighborhood” aggregation.", "Individuals generate vast amounts of geolocated content through the use of mobile social media applications. In this context, Twitter has become an important sensor of the interactions between individuals and their environment. Building on this idea, this paper proposes the use of geolocated tweets as a complementary source of information for urban planning applications, focusing on the characterization of land use. The proposed technique uses unsupervised learning and automatically determines land uses in urban areas by clustering geographical regions with similar tweeting activity patterns. Three case studies are presented and validated for Manhattan (NYC), London (UK) and Madrid (Spain) using Twitter activity and land use information provided by the city planning departments. Results indicate that geolocated tweets can be used as a powerful data source for urban planning applications.", "With the rapid urbanization of Beijing in recent decades, comprehensively understanding its regions' structures and functions becomes more and more challenging, though it indeed plays a fundamental role in the city planning. While fortunately, the accumulation of huge mobility records from massive individuals provides an unprecedented big-data window for solving this issue. 
In this paper, we segment urban areas of Beijing into administrative and functional subdivisions through mining GPS trajectories of taxis. First, a flow network between small regions is established to administratively segment the urban area and Infomap is found to be a better approach. Second, temporal features from regions' flow dynamics are extracted to functionally segment the urban area through spectral clustering, which effectively identifies regions with different functions and flow patterns. Third, the comparison of segmentation at different time can vividly represent the evolution of the city, including emergence of new regions and vanishment of aging areas. Our results demonstrate the possibility that the big-data of movements generated by massive users could provide a new but promising probe to understand the evolution of cities in both spatial and temporal dimensions.", "Cities all around the world are in constant evolution due to numerous factors, such as fast urbanization and new ways of communication and transportation. Since understanding the composition of cities is the key to intelligent urbanization, there is a growing need to develop urban computing and analysis tools to guide the orderly development of cities, as well as to enhance their smooth and beneficiary evolution. This paper presents a spatial clustering approach to discover interesting regions and regions which serve different functions in cities. Spatial clustering groups the objects in a spatial dataset and identifies contiguous regions in the space of the spatial attributes. We formally define the task of finding uniform regions in spatial data as a maximization problem of a plug-in measure of uniformity and introduce a prototype-based clustering algorithm named CLEVER to find such regions. Moreover, polygon models which capture the scope of a spatial cluster and histogram-style distribution signatures are used to annotate the content of a spatial cluster in the proposed methodology; they play a key role in summarizing the composition of a spatial dataset. Furthermore, algorithms for identifying popular distribution signatures and approaches for identifying regions which express a particular distribution signature will be presented. The proposed methodology is demonstrated and evaluated in a challenging real-world case study centering on analyzing the composition of the city of Strasbourg in France.", "Understanding the spatiotemporal distribution of people within a city is crucial to many planning applications. Obtaining data to create required knowledge, currently involves costly survey methods. At the same time ubiquitous mobile sensors from personal GPS devices to mobile phones are collecting massive amounts of data on urban systems. The locations, communications, and activities of millions of people are recorded and stored by new information technologies. This work utilizes novel dynamic data, generated by mobile phone users, to measure spatiotemporal changes in population. In the process, we identify the relationship between land use and dynamic population over the course of a typical week. A machine learning classification algorithm is used to identify clusters of locations with similar zoned uses and mobile phone activity patterns. It is shown that the mobile phone data is capable of delivering useful information on actual land use that supplements zoning regulations." ] }
1604.04649
2339931821
Data generated on location-based social networks provide rich information on the whereabouts of urban dwellers. Specifically, such data reveal who spends time where, when, and on what type of activity (e.g., shopping at a mall, or dining at a restaurant). That information can, in turn, be used to describe city regions in terms of activity that takes place therein. For example, the data might reveal that citizens visit one region mainly for shopping in the morning, while another for dining in the evening. Furthermore, once such a description is available, one can ask more elaborate questions: What are the features that distinguish one region from another -- is it simply the type of venues they host or is it the visitors they attract? What regions are similar across cities? In this paper, we attempt to answer these questions using publicly shared Foursquare data. In contrast with previous work, our method makes use of a probabilistic model with minimal assumptions about the data and thus relieves us from having to make arbitrary decisions in our analysis (e.g., regarding the granularity of discovered regions or the importance of different features). We perform an empirical comparison with previous work and discuss insights obtained through our findings.
In that context, Location-Based Social Networks (LBSNs) have also proven to be a rich source of data and have been utilized by recent works. For instance, @cite_9 collects check-ins and builds a @math -nearest spatial neighbors graph of venues, with edges weighted by the cosine similarity of the two venues' user distributions. The regions are the spectral clusters of this graph. Using similar data, @cite_22 describes venues by category, peak-time activity, and a binary touristic indicator. Venues are clustered into hotspots along all these dimensions by the algorithm. The city is divided into a grid, with cells described by their hotspot density for each feature. Finally, similar cells are iteratively clustered into regions. Like us, @cite_12 considers venues to be essential in defining regions. The city is divided into a grid of cells with the goal of assigning each cell a category label in a way that is as specific as possible while being locally homogeneous. This is done through a bottom-up clustering that greedily merges neighboring cells to improve a cost function formalizing this trade-off.
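As a rough illustration of the first pipeline above ( @cite_9 ), the sketch below builds a k-nearest spatial neighbor graph over venues, weights its edges by the cosine similarity of the venues' user distributions, and spectrally clusters the result. It is only a sketch under our own assumptions (scikit-learn, dense matrices, hypothetical function and variable names), not the published algorithm.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def venue_regions(venue_coords, venue_user_counts, k=10, n_regions=8):
    """Connect each venue to its k nearest spatial neighbors, weight the edges by
    the cosine similarity of the venues' check-in user distributions, and return
    a region label per venue obtained by spectral clustering of that graph."""
    n = len(venue_coords)
    nn = NearestNeighbors(n_neighbors=min(k + 1, n)).fit(venue_coords)
    _, idx = nn.kneighbors(venue_coords)                 # neighbor indices; the first is the venue itself
    sim = cosine_similarity(venue_user_counts)           # user-distribution similarity between venues
    affinity = np.zeros((n, n))
    for i in range(n):
        for j in idx[i, 1:]:
            affinity[i, j] = affinity[j, i] = sim[i, j]  # keep similarity only on spatial edges
    return SpectralClustering(n_clusters=n_regions,
                              affinity="precomputed").fit_predict(affinity)
```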
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_12" ], "mid": [ "2164061616", "1998812539", "" ], "abstract": [ "Studying the social dynamics of a city on a large scale has traditionally been a challenging endeavor, often requiring long hours of observation and interviews, usually resulting in only a partial depiction of reality. To address this difficulty, we introduce a clustering model and research methodology for studying the structure and composition of a city on a large scale based on the social media its residents generate. We apply this new methodology to data from approximately 18 million check-ins collected from users of a location-based online social network. Unlike the boundaries of traditional municipal organizational units such as neighborhoods, which do not always reflect the character of life in these areas, our clusters, which we call Livehoods, are representations of the dynamic areas that comprise the city. We take a qualitative approach to validating these clusters, interviewing 27 residents of Pittsburgh, PA, to see how their perceptions of the city project onto our findings there. Our results provide strong support for the discovered clusters, showing how Livehoods reveal the distinctly characterized areas of the city and the forces that shape them.", "Information garnered from activity on location-based social networks can be harnessed to characterize urban spaces and organize them into neighborhoods. We represent geographic points in the city using spatio-temporal information about Foursquare user check-ins and semantic information about places, with the goal of developing features to input into a novel neighborhood detection algorithm. The algorithm first employs a similarity metric that assesses the homogeneity of a geographic area, and then with a simple mechanism of geographic navigation, it detects the boundaries of a city's neighborhoods. The models and algorithms devised are subsequently integrated into a publicly available, map-based tool named Hood square that allows users to explore activities and neighborhoods in cities around the world. Finally, we evaluate Hood square in the context of are commendation application where user profiles are matched to urban neighborhoods. By comparing with a number of baselines, we demonstrate how Hood square can be used to accurately predict the home neighborhood of Twitter users. We also show that we are able to suggest neighborhoods geographically constrained in size, a desirable property in mobile recommendation scenarios for which geographical precision is key.", "" ] }
1604.04649
2339931821
Data generated on location-based social networks provide rich information on the whereabouts of urban dwellers. Specifically, such data reveal who spends time where, when, and on what type of activity (e.g., shopping at a mall, or dining at a restaurant). That information can, in turn, be used to describe city regions in terms of activity that takes place therein. For example, the data might reveal that citizens visit one region mainly for shopping in the morning, while another for dining in the evening. Furthermore, once such a description is available, one can ask more elaborate questions: What are the features that distinguish one region from another -- is it simply the type of venues they host or is it the visitors they attract? What regions are similar across cities? In this paper, we attempt to answer these questions using publicly shared Foursquare data. In contrast with previous work, our method makes use of a probabilistic model with minimal assumptions about the data and thus relieves us from having to make arbitrary decisions in our analysis (e.g., regarding the granularity of discovered regions or the importance of different features). We perform an empirical comparison with previous work and discuss insights obtained through our findings.
Another task that is hindered by data sparsity and benefits from modeling user preferences is spatial item recommendation. The interested reader will find many examples exploiting LBSNs in a recent survey @cite_23 , but here we give a taste of two approaches inspired by SAGE. In both cases, topics are distributions over words and venues. Each user is endowed with her own topic, and so too is each region. @cite_41 uses SAGE to model user topics as variations from the overall global distribution. To improve out-of-town recommendation, @cite_46 assigns regions both local and tourist topics. Learning such a high number of parameters is made possible by combining SAGE with a hierarchical model called the spatial pyramid.
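To make the SAGE idea mentioned here concrete, the snippet below shows how an item (word or venue) distribution can be formed as the global background distribution perturbed, in log space, by sparse user- and region-specific deviation vectors. This is only an illustration of the general mechanism, not the models of @cite_41 or @cite_46 ; all names and the toy numbers are ours.

```python
import numpy as np

def sage_item_probabilities(background_counts, user_deviation, region_deviation):
    """Combine the global background distribution with sparse log-space
    deviations for a user and a region, and return item probabilities."""
    m = np.log(np.asarray(background_counts, dtype=float) + 1e-12)  # background log-frequencies
    logits = m + np.asarray(user_deviation) + np.asarray(region_deviation)
    probs = np.exp(logits - logits.max())                           # numerically stable softmax
    return probs / probs.sum()

# Toy usage: 4 venues; the user's deviation boosts venue 2, the region's boosts venue 0.
print(sage_item_probabilities([40, 30, 20, 10],
                              user_deviation=[0.0, 0.0, 1.5, 0.0],
                              region_deviation=[0.8, 0.0, 0.0, 0.0]))
```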
{ "cite_N": [ "@cite_41", "@cite_46", "@cite_23" ], "mid": [ "2082260230", "2062079386", "1972436494" ], "abstract": [ "Mobile networks enable users to post on social media services (e.g., Twitter) from anywhere. The activities of mobile users involve three major entities: user, post, and location. The interaction of these entities is the key to answer questions such as who will post a message where and on what topic? In this paper, we address the problem of profiling mobile users by modeling their activities, i.e., we explore topic modeling considering the spatial and textual aspects of user posts, and predict future user locations. We propose the first ST (Spatial Topic) model to capture the correlation between users' movements and between user interests and the function of locations. We employ the sparse coding technique which greatly speeds up the learning process. We perform experiments on two real life data sets from Twitter and Yelp. Through comprehensive experiments, we demonstrate that our proposed model consistently improves the average precision@1,5,10,15,20 for location recommendation by at least 50 (Twitter) and 300 (Yelp) against existing state-of-the-art recommendation algorithms and geographical topic models.", "With the rapid development of location-based social networks (LBSNs), spatial item recommendation has become an important means to help people discover attractive and interesting venues and events, especially when users travel out of town. However, this recommendation is very challenging compared to the traditional recommender systems. A user can visit only a limited number of spatial items, leading to a very sparse user-item matrix. Most of the items visited by a user are located within a short distance from where he she lives, which makes it hard to recommend items when the user travels to a far away place. Moreover, user interests and behavior patterns may vary dramatically across different geographical regions. In light of this, we propose Geo-SAGE, a geographical sparse additive generative model for spatial item recommendation in this paper. Geo-SAGE considers both user personal interests and the preference of the crowd in the target region, by exploiting both the co-occurrence pattern of spatial items and the content of spatial items. To further alleviate the data sparsity issue, Geo-SAGE exploits the geographical correlation by smoothing the crowd's preferences over a well-designed spatial index structure called spatial pyramid. We conduct extensive experiments and the experimental results clearly demonstrate our Geo-SAGE model outperforms the state-of-the-art.", "Recent advances in localization techniques have fundamentally enhanced social networking services, allowing users to share their locations and location-related contents, such as geo-tagged photos and notes. We refer to these social networks as location-based social networks (LBSNs). Location data bridges the gap between the physical and digital worlds and enables a deeper understanding of users' preferences and behavior. This addition of vast geo-spatial datasets has stimulated research into novel recommender systems that seek to facilitate users' travels and social interactions. In this paper, we offer a systematic review of this research, summarizing the contributions of individual efforts and exploring their relations. We discuss the new properties and challenges that location brings to recommender systems for LBSNs. 
We present a comprehensive survey analyzing 1) the data source used, 2) the methodology employed to generate a recommendation, and 3) the objective of the recommendation. We propose three taxonomies that partition the recommender systems according to the properties listed above. First, we categorize the recommender systems by the objective of the recommendation, which can include locations, users, activities, or social media. Second, we categorize the recommender systems by the methodologies employed, including content-based, link analysis-based, and collaborative filtering-based methodologies. Third, we categorize the systems by the data sources used, including user profiles, user online histories, and user location histories. For each category, we summarize the goals and contributions of each system and highlight the representative research effort. Further, we provide comparative analysis of the recommender systems within each category. Finally, we discuss the available data-sets and the popular methods used to evaluate the performance of recommender systems. Finally, we point out promising research topics for future work. This article presents a panorama of the recommender systems in location-based social networks with a balanced depth, facilitating research into this important research theme." ] }
1604.04736
355035644
Under some circumstances, a group of individuals may need to negotiate together as a negotiation team against another party. Unlike bilateral negotiation between two individuals, this type of negotiations entails to adopt an intra-team strategy for negotiation teams in order to make team decisions and accordingly negotiate with the opponent. It is crucial to be able to negotiate successfully with heterogeneous opponents since opponents’ negotiation strategy and behavior may vary in an open environment. While one opponent might collaborate and concede over time, another may not be inclined to concede. This paper analyzes the performance of recently proposed intra-team strategies for negotiation teams against different categories of opponents: competitors, matchers, and conceders. Furthermore, it provides an extension of the negotiation tool Genius for negotiation teams in bilateral settings. Consequently, this work facilitates research in negotiation teams.
As far as we are aware, only our previous works @cite_25 @cite_32 @cite_29 @cite_8 have considered negotiation teams in computational models. More specifically, the four different computational models introduced in this article are analyzed under different negotiation conditions when facing opponents governed by time tactics. However, unlike the experiments carried out in the present article, that analysis does not include variability with respect to the strategy carried out by the opponent.
{ "cite_N": [ "@cite_29", "@cite_32", "@cite_25", "@cite_8" ], "mid": [ "2113420304", "2017435659", "", "2051409319" ], "abstract": [ "It has been documented in the social sciences that cultural factors affect how people negotiate and behave in negotiations. Despite the importance of culture in the business world and politics, there is a lack of computational models that help to analyze how cultural factors affect negotiation. Moreover, while many negotiations take place between teams, there is a dearth of computational models for team negotiations. In this paper we present the first attempt to provide a computational model which takes into account cultural factors in a team negotiation setting. The model considers how two important cultural dimensions, power distance and individualism collectivism, affect team negotiation dynamics and negotiation outcomes. We conducted experiments in high low intra-team conflict scenarios. The results are compatible with social sciences findings from team decision making.", "In this article, an agent-based negotiation model for negotiation teams that negotiate a deal with an opponent is presented. Agent-based negotiation teams are groups of agents that join together as a single negotiation party because they share an interest that is related to the negotiation process. The model relies on a trusted mediator that coordinates and helps team members in the decisions that they have to take during the negotiation process: which offer is sent to the opponent, and whether the offers received from the opponent are accepted. The main strength of the proposed negotiation model is the fact that it guarantees unanimity within team decisions since decisions report a utility to team members that is greater than or equal to their aspiration levels at each negotiation round. This work analyzes how unanimous decisions are taken within the team and the robustness of the model against different types of manipulations. An empirical evaluation is also performed to study the impact of the different parameters of the model.", "", "In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is identifying which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions." ] }
1604.04737
2051409319
In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is identifying which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions.
Multi-agent systems have gained growing interest as the infrastructure necessary for the next generation of distributed systems. Due to the inherent conflicts among agents, techniques that allow agents to resolve their conflicts and cooperate are needed. This need has given birth to a group of technologies that have recently been referred to as agreement technologies @cite_4 @cite_32 . Trust and reputation @cite_10 @cite_12 @cite_22 , norms @cite_14 @cite_20 , agent organizations @cite_24 @cite_6 @cite_9 , argumentation @cite_25 @cite_3 , and automated negotiation @cite_35 @cite_28 are part of the core that makes up this new family of technologies.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_4", "@cite_22", "@cite_28", "@cite_9", "@cite_32", "@cite_6", "@cite_3", "@cite_24", "@cite_20", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2115852343", "1599773397", "1010989072", "2100006177", "2039710942", "2110101488", "", "", "", "2039048406", "", "2120312633", "2145340246", "1983274586" ], "abstract": [ "", "In this paper we present some concepts and their relations that are necessary for modeling autonomous agents in an environment that is governed by some (social) norms. We divide the norms over three levels: the private level the contract level and the convention level. We show how deontic logic can be used to model the concepts and how the theory of speech acts can be used to model the generation of (some of) the norms. Finally we give some idea about an agent architecture incorporating the social norms based on a BDI framework.", "With the emergence of new paradigms for computing, such as peer-to-peer technologies, grid computing, autonomic computing and other approaches, it is becoming increasingly natural to view large systems in terms of the services they offer, and consequently in terms of the entities or agents providing or consuming services. For example, web services technologies provide a standard means of interoperating between different software applications, running on a variety of platforms. More generally, web services standards now serve as potential convergence point for diverse technology efforts in support of more general service-oriented architectures. Here, distributed systems are increasingly viewed as collections of service provider and service consumer components interlinked by dynamically defined workflows. Web services must thus be realised by concrete entities or agents that send and receive messages, while the services themselves are the resources characterised by the functionality provided. The important characteristics of these emerging domains and environments are that they are open and dynamic so that new agents may join and existing ones leave. In this view, agents act on behalf of service owners, managing access to services, and ensuring that contracts are fulfilled. They also act on behalf of service consumers, locating services, agreeing contracts, and receiving and presenting results. In these domains, agents are required to engage in interactions, negotiate with one another, make agreements, and make proactive run-time decisions, individually and collectively, while responding to changing circumstances. In particular, agents need to collaborate and to form coalitions of agents with different capabilities in support of new virtual organisations.", "Autonomous agents may encapsulate their principals' personal data attributes. These attributes may be disclosed to other agents during agent interactions, producing a loss of privacy. Thus, agents need self-disclosure decision-making mechanisms to autonomously decide whether disclosing personal data attributes to other agents is acceptable or not. Current self-disclosure decision-making mechanisms consider the direct benefit and the privacy loss of disclosing an attribute. However, there are many situations in which the direct benefit of disclosing an attribute is a priori unknown. This is the case in human relationships, where the disclosure of personal data attributes plays a crucial role in their development. 
In this paper, we present self-disclosure decision-making mechanisms based on psychological findings regarding how humans disclose personal information in the building of their relationships. We experimentally demonstrate that, in most situations, agents following these decision-making mechanisms lose less privacy than agents that do not use them.", "Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed as one of the mechanisms that may provide a solution to this kind of problems. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented where it is assumed that agents have computational bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while maintaining the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process and of obtaining similar results to similarity heuristics which have access to all of the possible offers.", "The emergence of multi-agent systems in the past years has led to the development of new methodologies to assist in the requirements and architectural analysis, as well as in the design phases of such systems. Consequently, several Agent Oriented Software Engineering (AOSE) methodologies have been proposed. In this paper, we analyze some AOSE methodologies, including Gaia, which supports the architectural design stage, and some proposed extensions. We then use an adapted version of this methodology to design an abstract generic system meta-model for a multi-robot application, which can be used as a basis for the design of these systems, avoiding or shortening repetitive tasks common to most systems. Based on the proposed Generic Robotic Agent Meta-Model (GRAMM), two distinct models for two different applications are derived, demonstrating the versatility and adaptability of the meta-model. By adapting the Gaia methodology to the design of open systems, this work makes the designers' job faster and easier, decreasing the time needed to complete several tasks, while at the same time maintaining a high-level overview of the system.", "", "", "", "Many researchers have demonstrated that the organizational design employed by an agent system can have a significant, quantitative effect on its performance characteristics. A range of organizational strategies have emerged from this line of research, each with different strengths and weaknesses. In this article we present a survey of the major organizational paradigms used in multi-agent systems. These include hierarchies, holarchies, coalitions, teams, congregations, societies, federations, markets, and matrix organizations. We will provide a description of each, discuss their advantages and disadvantages, and provide examples of how they may be instantiated and maintained. 
This summary will facilitate the comparative evaluation of organizational styles, allowing designers to first recognize the spectrum of possibilities, and then guiding the selection of an appropriate organizational design for a particular domain and environment.", "", "The scientific research in the area of computational mechanisms for trust and reputation in virtual societies is a recent discipline oriented to increase the reliability and performance of electronic communities. Computer science has moved from the paradigm of isolated machines to the paradigm of networks and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of isolated and non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of the so called intelligent or autonomous agents and multi-agent systems (MAS) together with the spectacular emergence of the information society technologies (specially reflected by the popularization of electronic commerce) are responsible for the increasing interest on trust and reputation mechanisms applied to electronic societies. This review wants to offer a panoramic view on current computational trust and reputation models.", "Negotiation is essential in settings where autonomous agents have conflicting interests and a desire to cooperate. For this reason, mechanisms in which agents exchange potential agreements according to various rules of interaction have become very popular in recent years as evident, for example, in the auction and mechanism design community. However, a growing body of research is now emerging which points out limitations in such mechanisms and advocates the idea that agents can increase the likelihood and quality of an agreement by exchanging arguments which influence each others' states. This community further argues that argument exchange is sometimes essential when various assumptions about agent rationality cannot be satisfied. To this end, in this article, we identify the main research motivations and ambitions behind work in the field. We then provide a conceptual framework through which we outline the core elements and features required by agents engaged in argumentation-based negotiation, as well as the environment that hosts these agents. For each of these elements, we survey and evaluate existing proposed techniques in the literature and highlight the major challenges that need to be addressed if argument-based negotiation research is to reach its full potential.", "This paper explores the relationships between the hard security concepts of identity and privacy on the one hand, and the soft security concepts of trust and reputation on the other hand. We specifically focus on two vulnerabilities that current trust and reputation systems have: the change of identity and multiple identities problems. As a result, we provide a privacy preserving solution to these vulnerabilities which integrates the explored relationships among identity, privacy, trust and reputation. We also provide a prototype of our solution to these vulnerabilities and an application scenario." ] }
1604.04737
2051409319
In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is identifying which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions.
Even though agreement technologies are a novel topic in the agent research community, some of their core technologies, such as automated negotiation, have been studied by scholars for several years. By definition, automated negotiation is a process carried out between two or more parties in order to reach an agreement by means of an exchange of proposals. Two different research trends can be distinguished in automated negotiation models. The first type of model aims to calculate the optimal strategy given certain information about the opponent and the negotiation environment @cite_17 @cite_19 @cite_15 . The second type of model comprises heuristics that do not calculate the optimal strategy but obtain results that aim to be as close to the optimum as possible @cite_30 @cite_23 @cite_31 @cite_5 . These models assume imperfect knowledge about the opponent and the environment, and aim to be computationally tractable while obtaining good results. The present work can be classified into the latter type of model.
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_19", "@cite_23", "@cite_5", "@cite_15", "@cite_17" ], "mid": [ "2105440797", "2110872636", "2114914572", "1531563433", "1523459335", "2144187367", "2035409777" ], "abstract": [ "Abstract We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated.", "Automated negotiation is a key form of interaction in systems that are composed of multiple autonomous agents. The aim of such interactions is to reach agreements through an iterative process of making offers. The content of such proposals are, however, a function of the strategy of the agents. Here we present a strategy called the trade-off strategy where multiple negotiation decision variables are traded-off against one another (e.g., paying a higher price in order to obtain an earlier delivery date or waiting longer in order to obtain a higher quality service). Such a strategy is commonly known to increase the social welfare of agents. Yet, to date, most computational work in this area has ignored the issue of trade-offs, instead aiming to increase social welfare through mechanism design. The aim of this paper is to develop a heuristic computational model of the trade-off strategy and show that it can lead to an increased social welfare of the system. A novel linear algorithm is presented that enables software agents to make trade-offs for multi-dimensional goods for the problem of distributed resource allocation. Our algorithm is motivated by a number of real-world negotiation applications that we have developed and can operate in the presence of varying degrees of uncertainty. Moreover, we show that on average the total time used by the algorithm is linearly proportional to the number of negotiation issues under consideration. This formal analysis is complemented by an empirical evaluation that highlights the operational effectiveness of the algorithm in a range of negotiation scenarios. The algorithm itself operates by using the notion of fuzzy similarity to approximate the preference structure of the other negotiator and then uses a hill-climbing technique to explore the space of possible trade-offs for the one that is most likely to be acceptable.  2002 Elsevier Science B.V. All rights reserved.", "In this paper we study multi issue alternating-offers bargaining in a perfect information finite horizon setting, we determine the pertinent subgame perfect equilibrium, and we provide an algorithm to compute it. The equilibrium is determined by making a novel use of backward induction together with convex programming techniques in multi issue settings. We show that the agents reach an agreement immediately and that such an agreement is Pareto efficient. 
Furthermore, we prove that, when the multi issue utility functions are linear, the problem of computing the equilibrium is tractable and the related complexity is polynomial with the number of issues and linear with the deadline of bargaining.", "A component-based generic agent architecture for multi-attribute (integrative) negotiation is introduced and its application is described in a prototype system for negotiation about cars, developed in co-operation with, among others, Dutch Telecom KPN. The approach can be characterised as co-operative one-to-one multi-criteria negotiation in which the privacy of both parties is protected as much as possible.", "This paper presents a decentralized model that allows self-interested agents to reach \"win-win\" agreements in a multi-attribute negotiation. The model is based on an alternating-offer protocol. In each period, the proposing agent is allowed to make a limited number of offers. The responding agent can select the best out of these offers. In the case of rejection, agents exchange their roles and the negotiation proceeds to the next period. To make counteroffers, an agent first uses the heuristic of choosing the offer on an indifference (or \"iso-utility\") curve surface that is closest to the best offer made by the opponent in the previous period, and then taking this offer as the seed, chooses several other offers randomly in a specified neighborhood of this seed offer. Experimental analysis shows agents can reach near Pareto optimal agreements in quite general situations following the model where agents may have complex preferences on the attributes and incomplete information. This model does not require the presence of a mediator.", "This paper studies bilateral multi-issue negotiation between self-interested autonomous agents. Now, there are a number of different procedures that can be used for this process; the three main ones being the package deal procedure in which all the issues are bundled and discussed together, the simultaneous procedure in which the issues are discussed simultaneously but independently of each other, and the sequential procedure in which the issues are discussed one after another. Since each of them yields a different outcome, a key problem is to decide which one to use in which circumstances. Specifically, we consider this question for a model in which the agents have time constraints (in the form of both deadlines and discount factors) and information uncertainty (in that the agents do not know the opponent's utility function). For this model, we consider issues that are both independent and those that are interdependent and determine equilibria for each case for each procedure. In so doing, we show that the package deal is in fact the optimal procedure for each party. We then go on to show that, although the package deal may be computationally more complex than the other two procedures, it generates Pareto optimal outcomes (unlike the other two), it has similar earliest and latest possible times of agreement to the simultaneous procedure (which is better than the sequential procedure), and that it (like the other two procedures) generates a unique outcome only under certain conditions (which we define).", "We study a bilateral multi-issue bargaining procedure with complete information and endogenous unrestricted agenda, in which offers can be made in any subset of outstanding issues. We find necessary and sufficient conditions for this procedure to have a unique subgame perfect equilibrium agreement." ] }
1604.04737
2051409319
In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is identifying which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions.
Jonker and Treur propose the Agent-Based Market Place (ABMP) model @cite_23 , where agents engage in bilateral negotiations. ABMP is a negotiation model where proposed bids are concessions to previous bids. The amount of concession is regulated by the concession factor (i.e., reservation utility), the negotiation speed, the acceptable utility gap (the maximal difference between the target utility and the utility of an offer that is still acceptable), and the impatience factor (which governs the probability of the agent leaving the negotiation process).
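A minimal sketch of this kind of concession rule is given below. The exact ABMP formulas are not reproduced; the function names, the linear concession step, and the probabilistic withdrawal are simplifying assumptions of ours.

```python
import random

def next_target_utility(current_target, reservation_utility, negotiation_speed):
    """Concede a fraction (the negotiation speed) of the remaining room between
    the current target utility and the reservation utility."""
    return current_target - negotiation_speed * (current_target - reservation_utility)

def respond(offer_utility, current_target, acceptable_gap, impatience):
    """Accept if the offer lies within the acceptable utility gap of the target;
    otherwise possibly leave the negotiation with probability `impatience`."""
    if current_target - offer_utility <= acceptable_gap:
        return "accept"
    if random.random() < impatience:
        return "withdraw"
    return "counter-offer"

# Toy round: target 0.9, reservation 0.5, concede 20% of the remaining room -> new target 0.82.
target = next_target_utility(0.9, 0.5, 0.2)
print(target, respond(offer_utility=0.78, current_target=target,
                      acceptable_gap=0.05, impatience=0.1))
```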
{ "cite_N": [ "@cite_23" ], "mid": [ "1531563433" ], "abstract": [ "A component-based generic agent architecture for multi-attribute (integrative) negotiation is introduced and its application is described in a prototype system for negotiation about cars, developed in co-operation with, among others, Dutch Telecom KPN. The approach can be characterised as co-operative one-to-one multi-criteria negotiation in which the privacy of both parties is protected as much as possible." ] }
1604.04737
2051409319
In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that joins together as a party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is identifying which intra-team strategies work better in different environmental conditions in order to provide useful knowledge for team members to select appropriate intra-team strategies according to environmental conditions.
@cite_5 propose a decentralized bilateral negotiation model where agents are allowed to propose up to @math different offers at each negotiation round. Offers are proposed from the current iso-utility curve according to a similarity mechanism that selects the most similar offer to the last offer received from the opponent. The selected similarity heuristic is the Euclidean distance since it is general and does not require domain-specific knowledge and information regarding the opponent's utility function. Results showed that the strategy is capable of reaching agreements that are very close to the Pareto Frontier. Sanchez- @cite_28 proposed an enhancement for this strategy in environments where computational resources are very limited and utility functions are complex. It relies on genetic algorithms to sample offers that are interesting for the agent itself and creates new offers during the negotiation process that are interesting for both parties. Results showed that the model is capable of obtaining statistically equivalent results to similar models that had the full iso-utility curve sampled, while being computationally more tractable. As commented above, some of our intra-team strategies use similarity heuristics to satisfy team members' preferences and the opponent's preferences.
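The genetic sampling idea mentioned here can be sketched as follows: recombine one's own offers with the opponent's offers, mutate a few issue values, and keep the children whose utility for the agent stays close to the current target. The fitness criterion, the operators, and all names below are our own simplifications, not the published algorithm.

```python
import random

def genetic_offer_sampling(own_offers, opponent_offers, utility, target,
                           n_children=20, mutation_rate=0.1, tolerance=0.05):
    """Generate candidate offers by uniform crossover between own and opponent
    offers plus small Gaussian mutations, keeping only children whose utility
    for the agent stays within `tolerance` of the current target utility."""
    children = []
    for _ in range(n_children):
        a = random.choice(own_offers)
        b = random.choice(opponent_offers)
        child = [random.choice(pair) for pair in zip(a, b)]      # uniform crossover
        child = [min(1.0, max(0.0, v + random.gauss(0, 0.1)))    # mutate a few issues
                 if random.random() < mutation_rate else v
                 for v in child]
        if abs(utility(child) - target) <= tolerance:            # keep near-iso-utility children
            children.append(child)
    return children

# Toy usage: utility is the mean issue value, target utility is 0.7.
own = [[0.9, 0.8, 0.4], [0.7, 0.7, 0.7]]
opp = [[0.3, 0.9, 0.8], [0.5, 0.6, 0.9]]
print(genetic_offer_sampling(own, opp, utility=lambda o: sum(o) / len(o), target=0.7))
```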
{ "cite_N": [ "@cite_28", "@cite_5" ], "mid": [ "2039710942", "1523459335" ], "abstract": [ "Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed as one of the mechanisms that may provide a solution to this kind of problems. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented where it is assumed that agents have computational bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while maintaining the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process and of obtaining similar results to similarity heuristics which have access to all of the possible offers.", "This paper presents a decentralized model that allows self-interested agents to reach \"win-win\" agreements in a multi-attribute negotiation. The model is based on an alternating-offer protocol. In each period, the proposing agent is allowed to make a limited number of offers. The responding agent can select the best out of these offers. In the case of rejection, agents exchange their roles and the negotiation proceeds to the next period. To make counteroffers, an agent first uses the heuristic of choosing the offer on an indifference (or \"iso-utility\") curve surface that is closest to the best offer made by the opponent in the previous period, and then taking this offer as the seed, chooses several other offers randomly in a specified neighborhood of this seed offer. Experimental analysis shows agents can reach near Pareto optimal agreements in quite general situations following the model where agents may have complex preferences on the attributes and incomplete information. This model does not require the presence of a mediator." ] }
1604.04737
2051409319
In this article we study the impact of the negotiation environment on the performance of several intra-team strategies (team dynamics) for agent-based negotiation teams that negotiate with an opponent. An agent-based negotiation team is a group of agents that join together as a single negotiation party because they share common interests in the negotiation at hand. It is experimentally shown how negotiation environment conditions like the deadline of both parties, the concession speed of the opponent, similarity among team members, and team size affect performance metrics like the minimum utility of team members, the average utility of team members, and the number of negotiation rounds. Our goal is to identify which intra-team strategies work best under different environmental conditions, in order to provide useful knowledge for team members to select appropriate intra-team strategies according to those conditions.
Multi-agent teamwork is also a closely related research area. Agent teams have been proposed for a variety of tasks such as RoboCup @cite_38 , rescue tasks @cite_42 , and transportation tasks @cite_0 . However, as far as we know, there is no published work that considers teams of agents negotiating with an opponent. Most works on agent teamwork consider fully cooperative agents that work to maximize shared goals. The team negotiation setting is different since, even though team members share a common interest related to the negotiation, there may be competition among team members, each seeking to maximize its own preferences.
{ "cite_N": [ "@cite_0", "@cite_38", "@cite_42" ], "mid": [ "2076064414", "2107280071", "2736978630" ], "abstract": [ "One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparative experiments in the real-world domain of electricity transportation management. Finally, the success of the approach of building a system with an explicit and grounded representation of cooperative problem solving is used to outline a proposal for the next generation of multi-agent systems.", "Multi-agent domains consisting of teams of agents that need to collaborate in an adversarial environment offer challenging research opportunities. In this article, we introduce periodic team synchronization (PTS) domains as time-critical environments in which agents act autonomously with low communication, but in which they can periodically synchronize in a full-communication setting. The two main contributions of this article are a flexible team agent structure and a method for inter-agent communication. First, the team agent structure allows agents to capture and reason about team agreements. We achieve collaboration between agents through the introduction of formations. A formation decomposes the task space defining a set of roles. Homogeneous agents can flexibly switch roles within formations, and agents can change formations dynamically, according to pre-defined triggers to be evaluated at run-time. This flexibility increases the performance of the overall team. Our teamwork structure further includes pre-planning for frequently occurring situations. Second, the communication method is designed for use during the low-communication periods in PTS domains. It overcomes the obstacles to inter-agent communication in multi-agent environments with unreliable, single-channel, high-cost, low-bandwidth communication. We fully implemented both the flexible teamwork structure and the communication method in the domain of simulated robotic soccer, and conducted controlled empirical experiments to verify their effectiveness. In addition, our simulator team made it to the semi-finals of the RoboCup-97 competition, in which 29 teams participated. It achieved a total score of 67–9 over six different games, and successfully demonstrated its flexible teamwork structure and inter-agent communication.", "Disaster rescue is one of the most serious social issues that involves very large numbers of heterogeneous agents in the hostile environment. 
The intention of the RoboCup Rescue project is to promote research and development in this socially significant domain at various levels, involving multiagent teamwork coordination, physical agents for search and rescue, information infrastructures, personal digital assistants, a standard simulator and decision-support systems, evaluation benchmarks for rescue strategies, and robotic systems that are all integrated into a comprehensive system in the future. For this effort, which was built on the success of the RoboCup Soccer project, we will provide forums of technical discussions and competitive evaluations for researchers and practitioners. Although the rescue domain is intuitively appealing as a large-scale multiagent and intelligent system domain, analysis has not yet revealed its domain characteristics. The first research evaluation meeting will be held at RoboCup-2001, in conjunction with the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), as part of the RoboCup Rescue Simulation League and RoboCup AAAI Rescue Robot Competition. In this article, we present a detailed analysis of the task domain and elucidate characteristics necessary for multiagent and intelligent systems for this domain. Then, we present an overview of the RoboCup Rescue project." ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
Ambient Intelligence (AmI) seeks to offer personalized services and to provide users with easier and more efficient ways to communicate and interact with other people and systems @cite_14 @cite_24 . Since several users may coexist in AmI environments, it is quite probable that their preferences conflict, and thus mechanisms are needed to allow users to cooperate. For instance, imagine a ubiquitous shopping mall @cite_32 @cite_0 where buying agents have to help users buy products, and vendor agents have to maximize their own users' profits. Automated negotiation provides mechanisms that solve this particularly interesting problem. Some authors have already claimed that in most real-world negotiations, such as e-commerce @cite_35 @cite_18 @cite_7 , issues present interdependence relationships that make agents' utility functions complex. Therefore, the problem of complex utility functions in automated negotiation is also relevant to AmI applications.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_14", "@cite_7", "@cite_32", "@cite_24", "@cite_0" ], "mid": [ "2951650962", "2615078972", "2087201794", "2098265769", "2059422694", "1990087375", "2144386207" ], "abstract": [ "", "", "This paper presents an autonomous intelligent agent developed for monitoring Alzheimer patients' health care in execution time in geriatric residences. The AGALZ (Autonomous aGent for monitoring ALZheimer patients) is an autonomous deliberative case-based planner agent designed to plan the nurses' working time dynamically, to maintain the standard working reports about the nurses' activities, and to guarantee that the patients assigned to the nurses are given the right care. The agent operates in wireless devices and is integrated with complementary agents into a multi-agent system, named ALZ-MAS (ALZheimer Multi-Agent System), capable of interacting with the environment. AGALZ description, its relationship with the complementary agents, and preliminary results of the multi-agent system prototype in a real environment are presented.", "Multi-Issue Negotiation protocols have been studied very widely and represent a promising field since most of negotiation problems in the real-world are complex ones including multiple issues. In particular, in reality issues are constrained each other. This makes agents' utilities nonlinear. There have been a lot of work on multi-issue negotiations. However, there have been very few work that focus on nonlinear utility spaces. In this paper, we assume agents have nonlinear utility spaces. For the linear utility domain, agents can aggregate the utilities of the issue-values by simple linear summation. In the real world, such aggregations are unrealistic. For example, we cannot just add up the value of car's tires and the value of car's engine when engineers design a car. In this paper, we propose an auction-based multiple-issue negotiation protocol among nonlinear utility agents. Our negotiation protocol employs several techniques, i.e., adjusting sampling, auction-based maximization of social welfare. Our experimental results show that our method can outperform the existing simple methods in particular in the huge utility space that can be often found in the real-world. Further, theoretically, our negotiation protocol can guarantee the completeness if some conditions are satisfied.", "E-commerce has been one of the success stories of the last decade. Developments in wireless communications and mobile computing have heralded an era of mobile commerce (m-commerce). Though a number of successful applications and services have been deployed, these are almost invariably inherently static in nature. By augmenting m-commerce with intelligent and autonomous components, the significant benefits of convenience and added value may be realized for the average shopper as they wander their local shopping mall or high street. By a selective and judicious juxtaposition of Ambient Intelligent (AmI) concepts and e-commerce precepts, the foundations for truly ubiquitous commerce (u-commerce) may be constructed. In this paper, the synergy between AmI and e-commerce is explored and developed, resulting in the description of a prototypical application that demonstrates the viability of this approach.", "Ambient intelligence (AmI) is a new multidisciplinary paradigm rooted in the ideas of NormanAuthor of the Invisible Computer [32]. and Ubiquitous Computing. AmI fosters novel anthropomorphic human–machine models of interaction. 
In AmI, technologies are deployed to make computers disappear in the background, while the human user moves into the foreground in complete control of the augmented environment. AmI is a user-centric paradigm, it supports a variety of artificial intelligence methods and works pervasively, nonintrusively, and transparently to aid the user. AmI supports and promotes interdisciplinary research encompassing the technological, scientific and artistic fields creating a virtual support for embedded and distributed intelligence.", "This paper introduces the SHOMAS Multiagent System that provides guidance on leisure facilities and suggestions for shopping in malls. The multiagent architecture incorporates reactive and deliberative agents that take decisions automatically. The developed deliberative agent provides suggestions in execution time, with the help of case-based planners. This agent is described together with its guidance and suggestion mechanism. SHOMAS has been tested successfully, and the results obtained are presented in this paper." ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
@cite_20 presented a negotiation model for linear utility functions in which a negotiation strategy is composed of different tactics that may be applied depending on the negotiation time, the quantity of the resource, and the behavior of the opponent. Nevertheless, the model is only applicable to negotiations with linear utility functions, which are easier cases than those addressed in this article.
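Among the tactic families of that model, the time-dependent tactics are the easiest to illustrate. The sketch below follows the usual presentation of such tactics, with parameter names of our choosing: the concession rate depends on the elapsed time, a deadline, and a convexity parameter `beta` that separates Boulware (`beta < 1`) from Conceder (`beta > 1`) behaviour.

```python
def time_dependent_alpha(t, t_max, beta, k=0.0):
    """Concession rate alpha(t) in [k, 1] of a time-dependent tactic:
    Boulware for beta < 1 (concedes late), Conceder for beta > 1."""
    return k + (1.0 - k) * (min(t, t_max) / t_max) ** (1.0 / beta)

def offer_value(t, t_max, lo, hi, beta, buyer=True):
    """Value proposed for a single issue at round t: a buyer starts near
    its preferred end (lo) and concedes towards hi as the deadline nears;
    a seller moves in the opposite direction."""
    a = time_dependent_alpha(t, t_max, beta)
    return lo + a * (hi - lo) if buyer else hi - a * (hi - lo)
```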
{ "cite_N": [ "@cite_20" ], "mid": [ "2105440797" ], "abstract": [ "Abstract We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated." ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
@cite_34 determined the successful strategies for different settings using the model proposed by @cite_20 . They employ an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. In their experiments, populations of buyers and sellers with different strategies negotiate in a round-robin fashion. After each round-robin round, strategies are evaluated by means of a fitness function and are then selected, according to their fitness, to be the parents of the next population. In the end, a population of strategies implicitly adapted to the environment is obtained. Thus, they use genetic algorithms as a mechanism for learning negotiation strategies under given circumstances. There are two differences between their work and the present one. First, their negotiation model is designed for linear utility functions. Second, the genetic algorithm proposed in the present work is an implicit learning mechanism of the opponent's preferences that guides offer sampling during the negotiation process.
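The evolutionary setup can be summarized by a short loop. The sketch below is our own illustration of the round-robin tournament plus fitness-proportional selection; `negotiate` is a placeholder standing in for a full negotiation between two strategies, not the authors' code.

```python
import random

def evolve_strategies(population, negotiate, generations=50):
    """Round-robin evolutionary loop in the spirit of the cited study:
    every strategy negotiates against every other, fitness is the
    accumulated utility, and parents of the next population are chosen
    fitness-proportionally. `negotiate(buyer, seller)` returns the
    (buyer, seller) utilities of one negotiation."""
    pop = list(population)
    for _ in range(generations):
        scores = [0.0] * len(pop)
        for i, buyer in enumerate(pop):
            for j, seller in enumerate(pop):
                if i == j:
                    continue
                u_buyer, u_seller = negotiate(buyer, seller)
                scores[i] += u_buyer
                scores[j] += u_seller
        # Fitness-proportional selection (assumes non-negative utilities).
        pop = random.choices(pop, weights=scores, k=len(pop))
    return pop
```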
{ "cite_N": [ "@cite_34", "@cite_20" ], "mid": [ "2107939987", "2105440797" ], "abstract": [ "To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent.", "Abstract We present a formal model of negotiation between autonomous agents. The purpose of the negotiation is to reach an agreement about the provision of a service by one agent for another. The model defines a range of strategies and tactics that agents can employ to generate initial offers, evaluate proposals and offer counter proposals. The model is based on computationally tractable assumptions, demonstrated in the domain of business process management and empirically evaluated." ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
Later, @cite_16 presented a negotiation strategy for bilateral bargaining that focuses on reaching mutually beneficial situations by means of trade-offs. The heuristic applied to perform the trade-off is similar to the one employed in the present work: given the agent's current utility, the offer on the iso-utility curve that is most similar to the last offer received from the opponent is sent. The idea behind this heuristic is that, since the proposed offer is the most similar to the last offer received from the opponent, it is more likely to be satisfactory to both participants. A fuzzy similarity criterion is employed to compare offers. Nevertheless, the use of fuzzy similarity requires some knowledge of the opponent's preferences, and criteria of this kind are difficult to apply to complex utility functions due to the interdependencies among the different issues. In the present work the Euclidean distance is used instead, since it requires no knowledge about the opponent and is independent of the interdependencies among issues.
{ "cite_N": [ "@cite_16" ], "mid": [ "2161936342" ], "abstract": [ "Addresses the issues involved in software agents making trade-offs during automated negotiations in which they have information uncertainty and resource limitations. In particular the importance of being able to make tradeoffs in real-world applications is highlighted and an algorithm for performing trade-offs for multi-dimensional goods is developed. The algorithm uses the notion of fuzzy similarity in order to find negotiation solutions that are beneficial to both parties. Empirical results indicate the benefits and effectiveness of the trade-off algorithm in a range of negotiation situations." ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
@cite_26 @cite_19 @cite_13 analyzed the problem of multi-attribute negotiation in an agenda-based framework. Agendas determine the order in which the different issues are negotiated when negotiations are carried out issue by issue. Once an agreement has been reached on a specific issue, it cannot be changed. Thus, the agents face the problem of deciding which issues should be negotiated first and which strategies should be applied, and the authors studied the optimal agendas for different scenarios. Nevertheless, their work focused on linear utility functions, which do not take into account the possible interdependences among the different issues.
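The agenda-based, issue-by-issue protocol can be illustrated with a small sketch (all names are ours, introduced only for illustration): issues are settled in agenda order, and an agreed issue is frozen and never reopened.

```python
def agenda_negotiation(agenda, negotiate_issue):
    """Issue-by-issue negotiation under a fixed (exogenous) agenda.
    `negotiate_issue(issue, agreement)` is a placeholder that negotiates
    one issue, possibly conditioning on the issues already settled, and
    returns the agreed value or None on breakdown."""
    agreement = {}
    for issue in agenda:
        value = negotiate_issue(issue, agreement)
        if value is None:
            return None           # breakdown on one issue ends the negotiation
        agreement[issue] = value  # the agreed value is fixed from now on
    return agreement
```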
{ "cite_N": [ "@cite_19", "@cite_26", "@cite_13" ], "mid": [ "2108260662", "2132786218", "2619153869" ], "abstract": [ "This paper presents a new model for multi-issue negotiation under time constraints in an incomplete information setting. The issues to be bargained over can be associated with a single good service or multiple goods services. In our agenda-based model, the order in which issues are bargained over and agreements are reached is determined endogenously, as part of the bargaining equilibrium. In this context we determine the conditions under which agents have similar preferences over the implementation scheme and the conditions under which they have conflicting preferences. Our analysis shows the existence of equilibrium even when both players have uncertain information about each other, and each agent's information is its private knowledge. We also study the properties of the equilibrium solution and determine conditions under which it is unique, symmetric, and Pareto-optimal.", "There are two ways of handling bilateral multi-issue negotiations -- one is to negotiate all the issues together, and the other is to negotiate them one by one. The order in which issues are negotiated in issue-by-issue negotiation is specified by the agenda, which can be defined in two ways. One way is to decide it exogenously, i.e., before negotiation begins. The other way is to let the players decide which issue they will negotiate next, during the process of negotiation, i.e., the agenda is determined endogenously. Against this background, this paper studies the effect of combining the exogenous and endogenous agendas on the players' utilities. More specifically, we determine whether, decomposing a set of N issues into k stages (for 1 ≤ k ≤ N), determining the issues to be negotiated at each stage exogenously, and negotiating each stage sequentially using an endogenous agenda can improve an agent's utility relative to the utility it gets if the agenda for all the N issues is defined endogenously. For each agent, we find the expected utility for each value of k between 1 and N. The value of k that gives an agent maximum utility is its optimal number of stages.Our study shows that, in some negotiation scenarios, the optimal value of k is identical for the two players, and is greater than one. In other words, in some negotiation scenarios, both the agents can improve their utilities by using the k-stage negotiation relative to the single stage negotiation. However, since the players have incomplete information about the negotiation parameters, they cannot identify such scenarios. We therefore present an extended alternating offers protocol, that allows the agents to identify such scenarios through a mediator, thereby resulting in improved utility to both the agents.", "" ] }
1604.04730
2039710942
Ambient Intelligence aims to offer personalized services and easier ways of interaction between people and systems. Since several users and systems may coexist in these environments, it is quite possible that entities with opposing preferences need to cooperate to reach their respective goals. Automated negotiation is pointed to as one of the mechanisms that may provide a solution to this kind of problem. In this article, a multi-issue bilateral bargaining model for Ambient Intelligence domains is presented, where it is assumed that agents have computationally bounded resources and do not know their opponents' preferences. The main goal of this work is to provide negotiation models that obtain efficient agreements while keeping the computational cost low. A niching genetic algorithm is used before the negotiation process to sample one's own utility function (self-sampling). During the negotiation process, genetic operators are applied over the opponent's and one's own offers in order to sample new offers that are interesting for both parties. Results show that the proposed model is capable of outperforming similarity heuristics which only sample before the negotiation process, and of obtaining similar results to similarity heuristics which have access to all of the possible offers.
The work of Krovi et al. @cite_27 opened the path for genetic algorithms (GAs) in automated negotiation: they proposed a GA for bilateral negotiations that was run each time a negotiation round ended. The population of chromosomes was randomly initialized with 90 random offers and 10 heuristic offers (the last offer from the opponent and the nine best offers from the previous round). The idea behind using GAs is that the resulting offers have good characteristics for both agents. However, 60 generations were needed in each round to obtain the next offer, which may turn out to be computationally expensive in large issue domains. @cite_1 enhanced Krovi's model with more learning capabilities: it learns the opponent's preferences by means of stochastic approximation and adapts its mutation rate to the opponent's behavior. However, these strategies and mechanisms are devised for linear utility functions with few negotiation issues, and their performance is uncertain when a large number of issues or complex utility functions are involved. The present work also employs genetic operators to obtain new offers, but it is capable of providing solutions for domains with complex utility functions and large numbers of issues.
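To illustrate the flavour of such genetic operators (a hedged sketch of our own, not the code of either cited model), a candidate counter-offer can be seeded by uniform crossover between the agent's own offer and the opponent's last offer, followed by a small mutation:

```python
import random

def crossover(own_offer, opponent_offer):
    """Uniform crossover between the agent's own offer and the opponent's
    last offer: each issue value is inherited from one of the two parents,
    so the child mixes traits that both parties have already proposed."""
    return [random.choice(pair) for pair in zip(own_offer, opponent_offer)]

def mutate(offer, rate=0.1, scale=0.05):
    """Small Gaussian perturbation of some issue values (issues in [0, 1])."""
    return [min(1.0, max(0.0, v + random.gauss(0.0, scale)))
            if random.random() < rate else v
            for v in offer]

# A candidate counter-offer seeded from both parties' latest proposals:
child = mutate(crossover([0.9, 0.2, 0.7], [0.4, 0.6, 0.3]))
```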
{ "cite_N": [ "@cite_27", "@cite_1" ], "mid": [ "2303547534", "2078435329" ], "abstract": [ "A computational prototype of negotiation behavior is presented where the following occurs: (1) agents employ different concession matching tactics; (2) agents are unaware of opponent preferences; (3) agents incur a cost for delaying settlements; (4) agents vary in terms of goal difficulty and initial offer magnitude; and (5) demands and counter-offers are made and evaluated based on the opponent's degree of concession matching. This research explores the impact of the interaction of different agent behaviors on the negotiation process and the outcome of the negotiation. Simulation experiments show that the prototype is able to manifest fundamental patterns and confirms the effectiveness of classical negotiation and mediation strategies, such as ambitious goals and aggressive concession matching tactics. The model reveals some counterintuitive patterns that may shed a new perspective on the effects of time constraints and information availability.", "Abstract Automated negotiation has become increasingly important since the advent of electronic commerce. Nowadays, goods are no longer necessarily traded at a fixed price, and instead buyers and sellers negotiate among themselves to reach a deal that maximizes the payoffs of both parties. In this paper, a genetic agent-based model for bilateral, multi-issue negotiation is studied. The negotiation agent employs genetic algorithms and attempts to learn its opponent's preferences according to the history of the counter-offers based upon stochastic approximation. We also consider two types of agents: level-0 agents are only concerned with their own interest while level-1 agents consider also their opponents' utility. Our goal is to develop an automated negotiator that guides the negotiation process so as to maximize both parties' payoff." ] }
1604.04693
2950703487
In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.
Subcategories have been widely utilized to facilitate object detection, and different methods for discovering object subcategories have been proposed. In DPM @cite_11 , subcategories are discovered by clustering objects according to the aspect ratio of their bounding boxes. @cite_34 performs clustering according to the viewpoint of the object to discover subcategories. Visual subcategories are constructed by clustering in the appearance space of objects @cite_17 @cite_35 @cite_30 @cite_24 . 3DVP @cite_33 performs clustering in the 3D voxel space according to the visibility of the voxels. Unlike previous works, we utilize subcategories to improve CNN-based detection, and our framework is general enough to employ different types of object subcategories.
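As an illustration of the simplest of these schemes, the DPM-style aspect-ratio clustering can be written in a few lines (a sketch assuming scikit-learn; the appearance-space variants would cluster feature vectors instead of aspect ratios):

```python
import numpy as np
from sklearn.cluster import KMeans

def aspect_ratio_subcategories(boxes, n_subcategories=3):
    """Assign each ground-truth box to a subcategory by clustering the
    aspect ratios of the bounding boxes (the DPM-style heuristic).
    boxes: (n, 4) array of [x1, y1, x2, y2] coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    ratios = (boxes[:, 2] - boxes[:, 0]) / (boxes[:, 3] - boxes[:, 1])
    # One-dimensional k-means over width/height ratios.
    return KMeans(n_clusters=n_subcategories, n_init=10).fit_predict(
        ratios.reshape(-1, 1))
```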
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_11", "@cite_33", "@cite_24", "@cite_34", "@cite_17" ], "mid": [ "2002754212", "2100912617", "2168356304", "1946609740", "2081613070", "", "2950245566" ], "abstract": [ "There have been some recent efforts to build visual knowledge bases from Internet images. But most of these approaches have focused on bounding box representation of objects. In this paper, we propose to enrich these knowledge bases by automatically discovering objects and their segmentations from noisy Internet images. Specifically, our approach combines the power of generative modeling for segmentation with the effectiveness of discriminative models for detection. The key idea behind our approach is to learn and exploit top-down segmentation priors based on visual subcategories. The strong priors learned from these visual subcategories are then combined with discriminatively trained detectors and bottom up cues to produce clean object segmentations. Our experimental results indicate state-of-the-art performance on the difficult dataset introduced by [29] We have integrated our algorithm in NEIL for enriching its knowledge base [5]. As of 14th April 2014, NEIL has automatically generated approximately 500K segmentations using web data.", "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "Despite the great progress achieved in recognizing objects as 2D bounding boxes in images, it is still very challenging to detect occluded objects and estimate the 3D properties of multiple objects from a single image. In this paper, we propose a novel object representation, 3D Voxel Pattern (3DVP), that jointly encodes the key properties of objects including appearance, 3D shape, viewpoint, occlusion and truncation. We discover 3DVPs in a data-driven way, and train a bank of specialized detectors for a dictionary of 3DVPs. The 3DVP detectors are capable of detecting objects with specific visibility patterns and transferring the meta-data from the 3DVPs to the detected objects, such as 2D segmentation mask, 3D pose as well as occlusion or truncation boundaries. 
The transferred meta-data allows us to infer the occlusion relationship among objects, which in turn provides improved object recognition results. Experiments are conducted on the KITTI detection benchmark [17] and the outdoor-scene dataset [41]. We improve state-of-the-art results on car detection and pose estimation with notable margins (6 in difficult data of KITTI). We also verify the ability of our method in accurately segmenting objects from the background and localizing them in 3D.", "Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.", "", "The main stated contribution of the Deformable Parts Model (DPM) detector of (over the Histogram-of-Oriented-Gradients approach of Dalal and Triggs) is the use of deformable parts. A secondary contribution is the latent discriminative learning. Tertiary is the use of multiple components. A common belief in the vision community (including ours, before this study) is that their ordering of contributions reflects the performance of detector in practice. However, what we have experimentally found is that the ordering of importance might actually be the reverse. First, we show that by increasing the number of components, and switching the initialization step from their aspect-ratio, left-right flipping heuristics to appearance-based clustering, considerable improvement in performance is obtained. But more intriguingly, we show that with these new components, the part deformations can now be completely switched off, yet obtaining results that are almost on par with the original DPM detector. Finally, we also show initial results for using multiple components on a different problem -- scene classification, suggesting that this idea might have wider applications in addition to object detection." ] }
1604.04693
2950703487
In CNN-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state-of-the-art performance on both detection and pose estimation on commonly used benchmarks.
We can categorize the state-of-the-art CNN-based object detection methods into two classes: one-stage detection and two-stage detection. In one-stage detection, such as the OverFeat framework @cite_25 , a CNN directly processes an input image and outputs object detections. In two-stage detection, such as R-CNNs @cite_27 @cite_31 @cite_6 , region proposals are first generated from an input image, where different region proposal methods can be employed @cite_21 @cite_23 @cite_7 ; these proposals are then fed into a CNN for classification and location refinement. It is debatable which detection paradigm is better. We adopt the two-stage framework in this work and consider the region proposal process to be the coarse detection step in coarse-to-fine detection @cite_26 . We propose a novel RPN motivated by @cite_6 and demonstrate its advantages.
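The two-stage paradigm can be summarized by the following skeleton (our own sketch: `propose_regions` and `classify_and_refine` are placeholders standing in for an RPN and a detection head, not a real API):

```python
def two_stage_detect(image, propose_regions, classify_and_refine,
                     score_threshold=0.5):
    """Skeleton of two-stage detection: a proposal step (the coarse
    detection) followed by per-proposal classification and box refinement."""
    detections = []
    for box in propose_regions(image):            # stage 1: coarse candidates
        label, score, refined_box = classify_and_refine(image, box)  # stage 2
        if label != "background" and score >= score_threshold:
            detections.append((label, score, refined_box))
    return detections
```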
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_21", "@cite_6", "@cite_27", "@cite_23", "@cite_31", "@cite_25" ], "mid": [ "2137401668", "1991367009", "2088049833", "2953106684", "2102605133", "7746136", "", "1487583988" ], "abstract": [ "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. 
In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. 
We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat." ] }
1604.04677
2338133831
We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification for the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016. The attention-based encoder-decoder models can be used for the generation of corrections, in addition to error identification, which is of interest for certain end-user applications. We show that a character-based encoder-decoder model is particularly effective, outperforming other results on the AESW Shared Task on its own, and showing gains over a word-based counterpart. Our final model--a combination of three character-based encoder-decoder models, one word-based encoder-decoder model, and a sentence-level CNN--is the highest performing system on the AESW 2016 binary prediction Shared Task.
More recent work has emerged as a result of a series of shared tasks, starting with the Helping Our Own (HOO) Pilot Shared Task run in 2011, which focused on a diverse set of errors in a small dataset @cite_16 , and the subsequent HOO 2012 Shared Task, which focused on the automated detection and correction of preposition and determiner errors @cite_6 . The CoNLL-2013 Shared Task @cite_11 (http://www.comp.nus.edu.sg/nlp/conll13st.html) focused on correcting a limited set of five error types in essays by second-language learners of English at the National University of Singapore. The follow-up CoNLL-2014 Shared Task @cite_5 (http://www.comp.nus.edu.sg/nlp/conll14st.html) addressed the full generation task of correcting all error types in such essays.
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_6", "@cite_11" ], "mid": [ "2098297786", "1640336798", "2171043109", "" ], "abstract": [ "The CoNLL-2014 shared task was devoted to grammatical error correction of all error types. In this paper, we give the task definition, present the data sets, and describe the evaluation metric and scorer used in the shared task. We also give an overview of the various approaches adopted by the participating teams, and present the evaluation results. Compared to the CoNLL2013 shared task, we have introduced the following changes in CoNLL-2014: (1) A participating system is expected to detect and correct grammatical errors of all types, instead of just the five error types in CoNLL-2013; (2) The evaluation metric was changed from F1 to F0.5, to emphasize precision over recall; and (3) We have two human annotators who independently annotated the test essays, compared to just one human annotator in CoNLL-2013.", "The aim of the Helping Our Own (HOO) Shared Task is to promote the development of automated tools and techniques that can assist authors in the writing task, with a specific focus on writing within the natural language processing community. This paper reports on the results of a pilot run of the shared task, in which six teams participated. We describe the nature of the task and the data used, report on the results achieved, and discuss some of the things we learned that will guide future versions of the task.", "Incorrect usage of prepositions and determiners constitute the most common types of errors made by non-native speakers of English. It is not surprising, then, that there has been a significant amount of work directed towards the automated detection and correction of such errors. However, to date, the use of different data sets and different task definitions has made it difficult to compare work on the topic. This paper reports on the HOO 2012 shared task on error detection and correction in the use of prepositions and determiners, where systems developed by 14 teams from around the world were evaluated on the same previously unseen errorful text.", "" ] }
1604.04428
2336701472
We introduce a novel artificial neural network architecture that integrates robustness to adversarial input in the network structure. The main idea of our approach is to force the network to make predictions on what the given instance of the class under consideration would look like and subsequently test those predictions. By forcing the network to redraw the relevant parts of the image and subsequently comparing this new image to the original, we are having the network give a "proof" of the presence of the object.
Neural networks recognise objects differently from humans. As @cite_6 point out, the human recognition system ``uses features and learning processes, which are critical for recognition, but are not used by current models''. They show that whereas humans can recognise internal components of the objects in an image, current neural networks cannot. With knowledge about the internal representation of objects, false detections could be rejected whenever they are not consistent with that representation. This relates to the sensitivity to adversarial images, i.e. inputs with an imperceptible change that flips the classification, first shown in @cite_9 and in various work since @cite_22 . These results show that the smoothness assumption does not hold for neural networks: an imperceptible change in the query image can flip the classification. The authors of @cite_25 argue that the primary cause for this is the linear behaviour of the networks in high-dimensional spaces, as opposed to the nonlinearity suspected in @cite_9 . Adversarial images are not isolated, spurious points in pixel space but appear in large regions of the space @cite_20 . Moreover, adversarial images can be efficiently computed using gradient ascent, starting from any input @cite_25 .
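The gradient-ascent construction is simple enough to sketch. The following is a standard fast-gradient-sign variant written in PyTorch, our illustration of the general idea rather than the exact procedure of any cited paper:

```python
import torch

def fgsm_adversarial(model, x, y, eps=0.01):
    """One gradient-ascent step on the classification loss, followed by
    clipping to the valid pixel range; works from any starting input x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Despite the tiny per-pixel budget `eps`, the perturbed image is typically misclassified with high confidence, which is exactly the discontinuity described above.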
{ "cite_N": [ "@cite_22", "@cite_9", "@cite_6", "@cite_25", "@cite_20" ], "mid": [ "", "1673923490", "2280426979", "1945616565", "2171875106" ], "abstract": [ "", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. 
We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "Adversarial examples have raised questions regarding the robustness and security of deep neural networks. In this work we formalize the problem of adversarial images given a pretrained classifier, showing that even in the linear case the resulting optimization problem is nonconvex. We generate adversarial images using shallow and deep classifiers on the MNIST and ImageNet datasets. We probe the pixel space of adversarial images using noise of varying intensity and distribution. We bring novel visualizations that showcase the phenomenon and its high variability. We show that adversarial images appear in large regions in the pixel space, but that, for the same task, a shallow classifier seems more robust to adversarial images than a deep convolutional network." ] }
1604.04428
2336701472
We introduce a novel artificial neural network architecture that integrates robustness to adversarial input in the network structure. The main idea of our approach is to force the network to make predictions on what the given instance of the class under consideration would look like and subsequently test those predictions. By forcing the network to redraw the relevant parts of the image and subsequently comparing this new image to the original, we are having the network give a "proof" of the presence of the object.
Though the existence of adversarial examples is universal @cite_9, neural networks can be made more robust against them. One way is to include adversarial examples in the training data @cite_25 @cite_23 @cite_24 @cite_9, e.g. by assigning them to an additional rubbish class. Apart from increasing robustness, this can also increase accuracy on non-adversarial examples. Another approach is to adapt the model of the network to improve robustness @cite_8 @cite_1. In @cite_8 the authors identify features that are causally related to the classes; their learning procedure can be seen as a way to train a classifier that is robust against adversarial examples. In @cite_1 the authors test several denoising architectures to reduce the effects of adversarial examples. They conclude that the sensitivity is related more to the training procedure and objective function than to the model topology, and present a new training procedure.
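As an illustration of the first defence (augmenting the training data with adversarial examples), here is a hedged sketch of one adversarial training step that reuses the fgsm_attack sketch above. The equal weighting of clean and adversarial losses is an assumption, not a detail taken from the cited works.

```python
import torch.nn.functional as F

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.01):
    """One step of adversarial training in the spirit of @cite_25.

    Adversarial inputs are generated from the current model with
    fgsm_attack (see the sketch above) and the network is trained to
    classify both the clean and the perturbed versions correctly.
    """
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    loss = 0.5 * (clean_loss + adv_loss)  # equal weighting is an assumption
    loss.backward()
    optimizer.step()
    return loss.item()
```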
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_24", "@cite_23", "@cite_25" ], "mid": [ "1931477419", "1673923490", "1883420340", "", "", "1945616565" ], "abstract": [ "We provide a rigorous definition of the visual cause of a behavior that is broadly applicable to the visually driven behavior in humans, animals, neurons, robots and other perceiving systems. Our framework generalizes standard accounts of causal learning to settings in which the causal variables need to be constructed from micro-variables. We prove the Causal Coarsening Theorem, which allows us to gain causal knowledge from observational data with minimal experimental effort. The theorem provides a connection to standard inference techniques in machine learning that identify features of an image that correlate with, but may not cause, the target behavior. Finally, we propose an active learning scheme to learn a manipulator function that performs optimal manipulations on the image to automatically identify the visual cause of a target behavior. We illustrate our inference and learning algorithms in experiments based on both synthetic and real data.", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Recent work has shown deep neural networks (DNNs) to be highly susceptible to well-designed, small perturbations at the input layer, or so-called adversarial examples. Taking images as an example, such distortions are often imperceptible, but can result in 100 mis-classification for a state of the art DNN. We study the structure of adversarial examples and explore network topology, pre-processing and training strategies to improve the robustness of DNNs. We perform various experiments to assess the removability of adversarial examples by corrupting with additional noise and pre-processing with denoising autoencoders (DAEs). We find that DAEs can remove substantial amounts of the adversarial noise. How- ever, when stacking the DAE with the original DNN, the resulting network can again be attacked by new adversarial examples with even smaller distortion. As a solution, we propose Deep Contractive Network, a model with a new end-to-end training procedure that includes a smoothness penalty inspired by the contractive autoencoder (CAE). 
This increases the network robustness to adversarial examples, without a significant performance penalty.", "", "", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset." ] }
1604.04428
2336701472
We introduce a novel artificial neural network architecture that integrates robustness to adversarial input in the network structure. The main idea of our approach is to force the network to make predictions on what the given instance of the class under consideration would look like and subsequently test those predictions. By forcing the network to redraw the relevant parts of the image and subsequently comparing this new image to the original, we are having the network give a "proof" of the presence of the object.
We use 3D models to train the classifier. Though this is artificial data, it can serve as training material for real-world tasks, e.g. for object detection @cite_3 @cite_0 or even for aligning 3D models within a 2D image @cite_18 @cite_14 @cite_21. The work of @cite_18 does this using HOG descriptors, while @cite_14 @cite_21 use neural networks. In @cite_21, a CNN is trained to predict the viewpoint of 3D models and is successfully applied to real-world images.
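Below is a hedged sketch of one way such artificial training images can be produced: compositing a rendered view with a transparent background onto a natural photograph, in the spirit of @cite_14. The file paths are hypothetical placeholders.

```python
import random
from PIL import Image

def composite_rendering(render_path, background_path, out_size=(224, 224)):
    """Paste a rendered 3D view (RGBA with transparent background) onto a
    natural image, a data-generation step in the spirit of @cite_14.
    Both file paths are illustrative placeholders."""
    render = Image.open(render_path).convert("RGBA")
    background = Image.open(background_path).convert("RGB").resize(out_size)
    # Randomly scale the rendering and place it on the background.
    scale = random.uniform(0.5, 0.9)
    w, h = int(out_size[0] * scale), int(out_size[1] * scale)
    render = render.resize((w, h))
    x = random.randint(0, out_size[0] - w)
    y = random.randint(0, out_size[1] - h)
    background.paste(render, (x, y), mask=render)  # alpha channel as mask
    return background
```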
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_21", "@cite_3", "@cite_0" ], "mid": [ "2010625607", "2190165033", "1591870335", "2211115409", "2083544878" ], "abstract": [ "This paper poses object category detection in images as a type of 2D-to-3D alignment problem, utilizing the large quantities of 3D CAD models that have been made publicly available online. Using the \"chair\" class as a running example, we propose an exemplar-based 3D category representation, which can explicitly model chairs of different styles as well as the large variation in viewpoint. We develop an approach to establish part-based correspondences between 3D CAD models and real photographs. This is achieved by (i) representing each 3D model using a set of view-dependent mid-level visual elements learned from synthesized views in a discriminative fashion, (ii) carefully calibrating the individual element detectors on a common dataset of negative images, and (iii) matching visual elements to the test image allowing for small mutual deformations but preserving the viewpoint and style constraints. We demonstrate the ability of our system to align 3D models with 2D objects in the challenging PASCAL VOC images, which depict a wide variety of chairs in complex scenes.", "This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models on natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset, and object category detection, where we out-perform for \"chair\" detection on a subset of the Pascal VOC dataset.", "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category.We show that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. Most freely available CAD models capture 3D shape but are often missing other low level cues, such as realistic object texture, pose, or background. 
In a detailed analysis, we use synthetic CAD-rendered images to probe the ability of DCNN to learn without these cues, with surprising findings. In particular, we show that when the DCNN is fine-tuned on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on generic ImageNet classification, it learns better when the low-level cues are simulated. We show that our synthetic DCNN training approach significantly outperforms previous methods on the PASCAL VOC2007 dataset when learning in the few-shot scenario and improves performance in a domain shift scenario on the Office benchmark.", "The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains." ] }
1604.04018
2952365771
In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.
Text detection in natural images has received much attention from the computer vision and document analysis communities. However, most text detection methods focus on detecting horizontal or near-horizontal text, mainly in two ways: 1) localizing the bounding boxes of words @cite_10 @cite_1 @cite_7 @cite_36 @cite_31 @cite_28 @cite_24 @cite_21; 2) combining detection and recognition into an end-to-end text recognition method @cite_6 @cite_27. Comprehensive surveys of scene text detection and recognition can be found in @cite_12 @cite_23.
{ "cite_N": [ "@cite_7", "@cite_36", "@cite_28", "@cite_21", "@cite_1", "@cite_6", "@cite_24", "@cite_27", "@cite_23", "@cite_31", "@cite_10", "@cite_12" ], "mid": [ "", "", "", "", "", "70975097", "", "", "654550266", "2131163834", "2142159465", "2135231474" ], "abstract": [ "", "", "", "", "", "The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.", "", "", "Text, as one of the most influential inventions of humanity, has played an important role in human life, so far from ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications, therefore text detection and recognition in natural scenes have become important and active research topics in computer vision and document analysis. Especially in recent years, the community has seen a surge of research efforts and substantial progresses in these fields, though a variety of challenges (e.g. noise, blur, distortion, occlusion and variation) still remain. The purposes of this survey are three-fold: 1) introduce up-to-date works, 2) identify state-of-the-art algorithms, and 3) predict potential research directions in the future. Moreover, this paper provides comprehensive links to publicly available resources, including benchmark datasets, source codes, and online demos. In summary, this literature review can serve as a good reference for researchers in the areas of scene text detection and recognition.", "Text detection and localization in natural scene images is important for content-based image analysis. This problem is challenging due to the complex background, the non-uniform illumination, the variations of text font, size and line orientation. In this paper, we present a hybrid approach to robustly detect and localize texts in natural scene images. A text region detector is designed to estimate the text existing confidence and scale information in image pyramid, which help segment candidate text components by local binarization. To efficiently filter out the non-text components, a conditional random field (CRF) model considering unary component properties and binary contextual component relationships with supervised parameter learning is proposed. Finally, text components are grouped into text lines words with a learning-based energy minimization method. 
Since all the three stages are learning-based, there are very few parameters requiring manual tuning. Experimental results evaluated on the ICDAR 2005 competition dataset show that our approach yields higher precision and recall performance compared with state-of-the-art methods. We also evaluated our approach on a multilingual image dataset with promising results.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "This paper analyzes, compares, and contrasts technical challenges, methods, and the performance of text detection and recognition research in color imagery. It summarizes the fundamental problems and enumerates factors that should be considered when addressing these problems. Existing techniques are categorized as either stepwise or integrated and sub-problems are highlighted including text localization, verification, segmentation and recognition. Special issues associated with the enhancement of degraded text and the processing of video text, multi-oriented, perspectively distorted and multilingual text are also addressed. The categories and sub-categories of text are illustrated, benchmark datasets are enumerated, and the performance of the most representative approaches is compared. This review provides a fundamental comparison and analysis of the remaining problems in the field." ] }
1604.04018
2952365771
In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.
In this section, we focus on the works most relevant to multi-oriented text detection. Multi-oriented text detection in the wild was first studied in @cite_20 @cite_14. Their detection pipelines are similar to traditional methods based on connected component extraction, integrating orientation estimation for each character and text line. @cite_2 treated each MSER component as a vertex in a graph, turning text detection into a graph partitioning problem. @cite_41 proposed a multi-stage clustering algorithm that groups MSER components to detect multi-oriented text. @cite_27 proposed an end-to-end system for multi-oriented text based on SWT @cite_10. Recently, a challenging benchmark for multi-oriented text detection was released for the ICDAR2015 text detection competition, and many researchers have reported their results on it.
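For concreteness, the shared first stage of the MSER-based methods above, extracting component candidates that are later grouped into (possibly oriented) text lines, can be sketched with OpenCV as follows; the grouping step, where the cited methods differ, is omitted.

```python
import cv2

def extract_mser_components(image_path):
    """Extract MSER character candidates, the common first stage of the
    component-based detectors discussed above (a minimal sketch)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    # Each bbox is (x, y, w, h); the cited methods cluster these boxes
    # into text lines, e.g. by orientation and appearance similarity.
    return regions, bboxes
```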
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_27", "@cite_2", "@cite_10", "@cite_20" ], "mid": [ "", "2019478948", "", "2065613686", "2142159465", "2166949156" ], "abstract": [ "", "Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks, while most current research efforts only focus on horizontal or near horizontal scene text. In this paper, first we present a unified distance metric learning framework for adaptive hierarchical clustering, which can simultaneously learn similarity weights (to adaptively combine different feature similarities) and the clustering threshold (to automatically determine the number of clusters). Then, we propose an effective multi-orientation scene text detection system, which constructs text candidates by grouping characters based on this adaptive clustering. Our text candidates construction method consists of several sequential coarse-to-fine grouping steps: morphology-based grouping via single-link clustering, orientation-based grouping via divisive hierarchical clustering, and projection-based grouping also via divisive clustering. The effectiveness of our proposed system is evaluated on several public scene text databases, e.g., ICDAR Robust Reading Competition data sets (2011 and 2013), MSRA-TD500 and NEOCR. Specifically, on the multi-orientation text data set MSRA-TD500, the @math measure of our system is @math percent, much better than the state-of-the-art performance. We also construct and release a practical challenging multi-orientation scene text data set (USTB-SV1K), which is available at http: prir.ustb.edu.cn TexStar MOMV-text-detection .", "", "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypothesises are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.", "We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.", "Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. 
In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in nonhorizontal orientations." ] }
1604.04018
2952365771
In this paper, we propose a novel approach for text detection in natural images. Both local and global cues are taken into account for localizing text lines in a coarse-to-fine procedure. First, a Fully Convolutional Network (FCN) model is trained to predict the salient map of text regions in a holistic manner. Then, text line hypotheses are estimated by combining the salient map and character components. Finally, another FCN classifier is used to predict the centroid of each character, in order to remove the false hypotheses. The framework is general for handling text in multiple orientations, languages and fonts. The proposed method consistently achieves the state-of-the-art performance on three text detection benchmarks: MSRA-TD500, ICDAR2015 and ICDAR2013.
In addition, it is worth mentioning that both the recent approaches @cite_30 @cite_40 @cite_21 and our method, all of which use deep convolutional neural networks, have achieved superior performance over conventional approaches in several respects: 1) learning a more robust component representation by pixel labeling with a CNN @cite_6; 2) leveraging the powerful discriminative ability of CNNs to better eliminate false positives @cite_3 @cite_17; 3) learning a strong character and word recognizer with a CNN for end-to-end text detection @cite_30 @cite_37. However, these methods only address horizontal text detection.
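As an illustration of point 2, a binary text/non-text CNN for filtering component candidates could be as small as the following PyTorch sketch. The architecture is illustrative and not taken from any cited work.

```python
import torch.nn as nn

class TextNonTextCNN(nn.Module):
    """Tiny binary classifier over 32x32 component patches (illustrative)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)  # text vs. non-text

    def forward(self, x):
        x = self.features(x)  # (N, 32, 8, 8) for 32x32 RGB input
        return self.classifier(x.flatten(1))
```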
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_21", "@cite_3", "@cite_6", "@cite_40", "@cite_17" ], "mid": [ "1607307044", "1922126009", "", "117491841", "70975097", "", "1935817682" ], "abstract": [ "Full end-to-end text recognition in natural images is a challenging problem that has received much attention recently. Traditional systems in this area have relied on elaborate models incorporating carefully hand-engineered features or large amounts of prior knowledge. In this paper, we take a different route and combine the representational power of large, multilayer neural networks together with recent developments in unsupervised feature learning, which allows us to use a common framework to train highly-accurate text detector and character recognizer modules. Then, using only simple off-the-shelf methods, we integrate these two modules into a full end-to-end, lexicon-driven, scene text recognition system that achieves state-of-the-art performance on standard benchmarks, namely Street View Text and ICDAR 2003.", "In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.", "", "Maximally Stable Extremal Regions (MSERs) have achieved great success in scene text detection. However, this low-level pixel operation inherently limits its capability for handling complex text information efficiently (e. g. connections between text or background components), leading to the difficulty in distinguishing texts from background components. In this paper, we propose a novel framework to tackle this problem by leveraging the high capability of convolutional neural network (CNN). In contrast to recent methods using a set of low-level heuristic features, the CNN network is capable of learning high-level features to robustly identify text components from text-like outliers (e.g. bikes, windows, or leaves). Our approach takes advantages of both MSERs and sliding-window based methods. The MSERs operator dramatically reduces the number of windows scanned and enhances detection of the low-quality texts. While the sliding-window with CNN is applied to correctly separate the connections of multiple characters in components. The proposed system achieved strong robustness against a number of extreme text variations and serious real-world problems. 
It was evaluated on the ICDAR 2011 benchmark dataset, and achieved over 78 in F-measure, which is significantly higher than previous methods.", "The goal of this work is text spotting in natural images. This is divided into two sequential tasks: detecting words regions in the image, and recognizing the words within these regions. We make the following contributions: first, we develop a Convolutional Neural Network (CNN) classifier that can be used for both tasks. The CNN has a novel architecture that enables efficient feature sharing (by using a number of layers in common) for text detection, character case-sensitive and insensitive classification, and bigram classification. It exceeds the state-of-the-art performance for all of these. Second, we make a number of technical changes over the traditional CNN architectures, including no downsampling for a per-pixel sliding window, and multi-mode learning with a mixture of linear models (maxout). Third, we have a method of automated data mining of Flickr, that generates word and character level annotations. Finally, these components are used together to form an end-to-end, state-of-the-art text spotting system. We evaluate the text-spotting system on two standard benchmarks, the ICDAR Robust Reading data set and the Street View Text data set, and demonstrate improvements over the state-of-the-art on multiple measures.", "", "Recently, a variety of real-world applications have triggered huge demand for techniques that can extract textual information from natural scenes. Therefore, scene text detection and recognition have become active research topics in computer vision. In this work, we investigate the problem of scene text detection from an alternative perspective and propose a novel algorithm for it. Different from traditional methods, which mainly make use of the properties of single characters or strokes, the proposed algorithm exploits the symmetry property of character groups and allows for direct extraction of text lines from natural images. The experiments on the latest ICDAR benchmarks demonstrate that the proposed algorithm achieves state-of-the-art performance. Moreover, compared to conventional approaches, the proposed algorithm shows stronger adaptability to texts in challenging scenarios." ] }
1604.04053
2335901184
Deep Convolution Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The lately introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https://github.com/myfavouritekk/vdetlib.
There have also been methods for action localization, where, at each frame of a human action video, the system is required to annotate a bounding box around the human action of interest. The methods based on action proposals are related to our work. Yu and Yuan @cite_8 proposed to generate action proposals by calculating actionness scores and solving a maximum set coverage problem. Jain et al. @cite_34 adopted the Selective Search strategy on super-voxels to generate tubelet proposals and proposed new features to differentiate human actions from background movements. In @cite_1, candidate regions are fed into two CNNs to learn feature representations, followed by an SVM that predicts actions using appearance and motion cues. The regions are then linked across frames based on the action predictions and their spatial overlap.
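The final linking step of @cite_1, chaining per-frame detections into action tubes, can be illustrated with a simple greedy scheme based on spatial overlap alone. This is a simplified sketch, not their exact linking score.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def link_boxes(frames, iou_threshold=0.5):
    """Greedily chain the best-overlapping box from frame to frame.

    frames: a list of per-frame box lists; returns one tube (list of
    boxes). A sketch of the linking idea in @cite_1 that uses spatial
    overlap only, ignoring the per-region action scores."""
    tube = [frames[0][0]]  # start from a detection in the first frame
    for boxes in frames[1:]:
        best = max(boxes, key=lambda b: iou(tube[-1], b))
        if iou(tube[-1], best) >= iou_threshold:
            tube.append(best)
    return tube
```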
{ "cite_N": [ "@cite_34", "@cite_1", "@cite_8" ], "mid": [ "2018068650", "1923332106", "1945129080" ], "abstract": [ "This paper considers the problem of action localization, where the objective is to determine when and where certain actions appear. We introduce a sampling strategy to produce 2D+t sequences of bounding boxes, called tubelets. Compared to state-of-the-art alternatives, this drastically reduces the number of hypotheses that are likely to include the action of interest. Our method is inspired by a recent technique introduced in the context of image localization. Beyond considering this technique for the first time for videos, we revisit this strategy for 2D+t sequences obtained from super-voxels. Our sampling strategy advantageously exploits a criterion that reflects how action related motion deviates from background motion. We demonstrate the interest of our approach by extensive experiments on two public datasets: UCF Sports and MSR-II. Our approach significantly outperforms the state-of-the-art on both datasets, while restricting the search of actions to a fraction of possible bounding box sequences.", "We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.", "In this paper we target at generating generic action proposals in unconstrained videos. Each action proposal corresponds to a temporal series of spatial bounding boxes, i.e., a spatio-temporal video tube, which has a good potential to locate one human action. Assuming each action is performed by a human with meaningful motion, both appearance and motion cues are utilized to measure the actionness of the video tubes. After picking those spatiotemporal paths of high actionness scores, our action proposal generation is formulated as a maximum set coverage problem, where greedy search is performed to select a set of action proposals that can maximize the overall actionness score. Compared with existing action proposal approaches, our action proposals do not rely on video segmentation and can be generated in nearly real-time. Experimental results on two challenging datasets, MSRII and UCF 101, validate the superior performance of our action proposals as well as competitive results on action detection and search." ] }
1604.04053
2335901184
Deep Convolution Neural Networks (CNNs) have shown impressive performance in various vision tasks such as image classification, object detection and semantic segmentation. For object detection, particularly in still images, the performance has been significantly increased last year thanks to powerful deep networks (e.g. GoogleNet) and detection frameworks (e.g. Regions with CNN features (RCNN)). The lately introduced ImageNet [6] task on object detection from video (VID) brings the object detection task into the video domain, in which objects' locations at each frame are required to be annotated with bounding boxes. In this work, we introduce a complete framework for the VID task based on still-image object detection and general object tracking. Their relations and contributions in the VID task are thoroughly studied and evaluated. In addition, a temporal convolution network is proposed to incorporate temporal information to regularize the detection results and shows its effectiveness for the task. Code is available at https://github.com/myfavouritekk/vdetlib.
Object tracking has been studied for decades @cite_29 @cite_16 @cite_17. Recently, deep CNNs have been used for object tracking and have achieved impressive tracking accuracy @cite_9 @cite_31 @cite_13. Wang et al. @cite_9 proposed to create an object-specific tracker by online selection of the most influential features from an ImageNet pre-trained CNN, which outperforms state-of-the-art trackers by a large margin. Nam et al. @cite_31 trained a multi-domain CNN to learn generic representations for tracking objects. When tracking a new target, a new network is created by combining the shared layers of the pre-trained CNN with a new binary classification layer, which is updated online. However, even CNN-based trackers may still drift in long-term tracking because they mostly utilize the object's appearance within the video, without semantic understanding of its class.
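A minimal sketch of the tracker construction described for @cite_31 (pre-trained shared layers combined with a freshly initialized binary target/background head) might look as follows in PyTorch. The feature dimension is an assumption, and freezing the shared layers is a simplification of the online update used in the cited work.

```python
import torch.nn as nn

def make_tracker(shared_layers, feature_dim=512):
    """Combine pre-trained shared layers with a new binary head, in the
    spirit of the multi-domain tracker of @cite_31 (a sketch).

    shared_layers is assumed to map an input patch to a tensor that
    flattens to (N, feature_dim); freezing it is a simplification, since
    the cited work also fine-tunes parts of the network online."""
    head = nn.Linear(feature_dim, 2)  # target vs. background
    for p in shared_layers.parameters():
        p.requires_grad = False  # keep the generic representation fixed
    return nn.Sequential(shared_layers, nn.Flatten(), head)
```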
{ "cite_N": [ "@cite_9", "@cite_29", "@cite_31", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2211629196", "2051588547", "1857884451", "1937954682", "2470456807", "1915599933" ], "abstract": [ "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "Robust multi-object tracking-by-detection requires the correct assignment of noisy detection results to object trajectories. We address this problem by proposing an online approach based on the observation that object detectors primarily fail if objects are significantly occluded. In contrast to most existing work, we only rely on geometric information to efficiently overcome detection failures. In particular, we exploit the spatio-temporal evolution of occlusion regions, detector reliability, and target motion prediction to robustly handle missed detections. In combination with a conservative association scheme for visible objects, this allows for real-time tracking of multiple objects from a single static camera, even in complex scenarios. Our evaluations on publicly available multi-object tracking benchmark datasets demonstrate favorable performance compared to the state-of-the-art in online and offline multi-object tracking.", "We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks.", "Most modern trackers typically employ a bounding box given in the first frame to track visual objects, where their tracking results are often sensitive to the initialization. 
In this paper, we propose a new tracking method, Reliable Patch Trackers (RPT), which attempts to identify and exploit the reliable patches that can be tracked effectively through the whole tracking process. Specifically, we present a tracking reliability metric to measure how reliably a patch can be tracked, where a probability model is proposed to estimate the distribution of reliable patches under a sequential Monte Carlo framework. As the reliable patches distributed over the image, we exploit the motion trajectories to distinguish them from the background. Therefore, the visual object can be defined as the clustering of homo-trajectory patches, where a Hough voting-like scheme is employed to estimate the target state. Encouraging experimental results on a large set of sequences showed that the proposed approach is very effective and in comparison to the state-of-the-art trackers. The full source code of our implementation will be publicly available.", "Due to the limited amount of training samples, finetuning pre-trained deep models online is prone to overfitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features for online applications. We regard a CNN as an ensemble with each channel of the output feature map as an individual base learner. Each base learner is trained using different loss criterions to reduce correlation and avoid over-training. To achieve the best ensemble online, all the base learners are sequentially sampled into the ensemble via important sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serves as a regularization to enforce each base learner to focus on different input features. The proposed online training method is applied to visual tracking problem by transferring deep features trained on massive annotated visual data and is shown to significantly improve tracking performance. Extensive experiments are conducted on two challenging benchmark data set and demonstrate that our tracking algorithm can outperform state-of-the-art methods with a considerable margin.", "Variations in the appearance of a tracked object, such as changes in geometry photometry, camera viewpoint, illumination, or partial occlusion, pose a major challenge to object tracking. Here, we adopt cognitive psychology principles to design a flexible representation that can adapt to changes in object appearance during tracking. Inspired by the well-known Atkinson-Shiffrin Memory Model, we propose MUlti-Store Tracker (MUSTer), a dual-component approach consisting of short- and long-term memory stores to process target appearance memories. A powerful and efficient Integrated Correlation Filter (ICF) is employed in the short-term store for short-term tracking. The integrated long-term component, which is based on keypoint matching-tracking and RANSAC estimation, can interact with the long-term memory and provide additional information for output control. MUSTer was extensively evaluated on the CVPR2013 Online Object Tracking Benchmark (OOTB) and ALOV++ datasets. The experimental results demonstrated the superior performance of MUSTer in comparison with other state-of-art trackers." ] }
1604.04004
2337024056
Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.
In surveillance applications, face recognition in low-quality images is an important capability. Many works attempt to recognize low-resolution faces @cite_0 @cite_10. Besides low resolution, other image quality distortions may also affect performance. Karam and Zhu @cite_4 present a face recognition dataset covering five different types of quality distortions; however, they do not evaluate the performance of any models on this new dataset. @cite_6 present an approach based on sparse representations that achieves good performance on this dataset.
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_6", "@cite_4" ], "mid": [ "2054515210", "", "2254116390", "2079382814" ], "abstract": [ "This paper addresses the very low resolution (VLR) problem in face recognition in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand of surveillance camera-based applications, the VLR problem happens in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on the VLR face image. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such a VLR face image. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases.", "", "Most of the existing domain adaptation learning (DAL) methods relies on a single source domain to learn a classifier with well-generalized performance for the target domain of interest, which may lead to the so-called negative transfer problem. To this end, many multi-source adaptation methods have been proposed. While the advantages of using multi-source domains of information for establishing an adaptation model have been widely recognized, how to boost the robustness of the computational model for multi-source adaptation learning has only recently received attention. To address this issue for achieving enhanced performance, we propose in this paper a novel algorithm called multi-source Adaptation Regularization Joint Kernel Sparse Representation (ARJKSR) for robust visual classification problems. Specifically, ARJKSR jointly represents target dataset by a sparse linear combination of training data of each source domain in some optimal Reproduced Kernel Hilbert Space (RKHS), recovered by simultaneously minimizing the inter-domain distribution discrepancy and maximizing the local consistency, whilst constraining the observations from both target and source domains to share their sparse representations. The optimization problem of ARJKSR can be solved using an efficient alternative direction method. Under the framework ARJKSR, we further learn a robust label prediction matrix for the unlabeled instances of target domain based on the classical graph-based semi-supervised learning (GSSL) diagram, into which multiple Laplacian graphs constructed with the ARJKSR are incorporated. The validity of our method is examined by several visual classification problems. Results demonstrate the superiority of our method in comparison to several state-of-the-arts.", "The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. 
Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications." ] }
1604.04004
2337024056
Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.
@cite_7 consider deep neural network performance on low-resolution crops of an image. They find minimal recognizable configurations of images (MIRCs), the smallest crops for which human observers can still predict the correct class. MIRCs are discovered by repeatedly cropping the input image and asking human observers whether they can still recognize the cropped image. The MIRC regions are blurry because they generally cover very small regions. The authors test deep networks on the MIRC regions and show that they cannot match human performance. By contrast, in this paper we consider blurring the entire image rather than selecting a small region of it, in addition to other types of distortions that occur in practical applications.
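A simplified stand-in for the MIRC search procedure, generating successively smaller crops that would then be shown to observers, could look like the sketch below. In @cite_7 the recognizability decision at each step is made by human observers, which this sketch does not model.

```python
def candidate_crops(image, shrink=0.8, min_size=16):
    """Yield successively smaller corner crops of a PIL image, a
    simplified stand-in for the MIRC search of @cite_7 (which descends
    only into crops that human observers still recognize)."""
    w, h = image.size
    size = min(w, h)
    while size >= min_size:
        for left in (0, w - size):
            for top in (0, h - size):
                yield image.crop((left, top, left + size, top + size))
        size = int(size * shrink)
```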
{ "cite_N": [ "@cite_7" ], "mid": [ "2280426979" ], "abstract": [ "Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation." ] }
1604.04004
2337024056
Image quality is an important practical challenge that is often overlooked in the design of machine vision systems. Commonly, machine vision systems are trained and tested on high quality image datasets, yet in practical applications the input images can not be assumed to be of high quality. Recently, deep neural networks have obtained state-of-the-art performance on many machine vision tasks. In this paper we provide an evaluation of 4 state-of-the-art deep neural network models for image classification under quality distortions. We consider five types of quality distortions: blur, noise, contrast, JPEG, and JPEG2000 compression. We show that the existing networks are susceptible to these quality distortions, particularly to blur and noise. These results enable future work in developing deep neural networks that are more invariant to quality distortions.
In this paper, we present the first large-scale evaluation of deep networks on natural images under different types and levels of image quality distortions. In contrast to @cite_1 @cite_4 , we use the ILSVRC 2012 dataset (ImageNet) @cite_8 , which consists of 1000 object classes. The original images from this database are of relatively high quality. We augment this dataset by introducing several distortions and then evaluate the performance of state-of-the-art deep neural networks on the distorted images.
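To make the evaluation protocol concrete, the following is a minimal sketch of such a distortion study: apply a controlled distortion to each image and measure how often a pre-trained network still predicts the correct class. The choice of ResNet-50, the distortion levels, and the helper names are illustrative assumptions, not the exact setup of the cited study.

```python
# Hedged sketch: evaluate a pre-trained classifier under controlled distortions.
# ResNet-50 and the distortion parameters are illustrative assumptions.
import io
import numpy as np
import torch
from PIL import Image, ImageFilter
from torchvision import models, transforms

model = models.resnet50(pretrained=True).eval()  # newer torchvision uses weights=
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def distort(img: Image.Image, kind: str, level: float) -> Image.Image:
    if kind == "blur":                      # Gaussian blur with radius = level
        return img.filter(ImageFilter.GaussianBlur(radius=level))
    if kind == "noise":                     # additive white Gaussian noise
        arr = np.asarray(img).astype(np.float32)
        arr += np.random.normal(0.0, level, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if kind == "jpeg":                      # JPEG compression with quality = level
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=int(level))
        buf.seek(0)
        return Image.open(buf).convert("RGB")
    raise ValueError(kind)

@torch.no_grad()
def top1(img: Image.Image) -> int:
    logits = model(preprocess(img).unsqueeze(0))
    return int(logits.argmax(dim=1))

# Accuracy under a distortion = fraction of (image, label) pairs for which
# top1(distort(image, kind, level)) still equals the ground-truth label.
```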
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_8" ], "mid": [ "2902832059", "2079382814", "2117539524" ], "abstract": [ "", "The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.", "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements." ] }
1604.04339
2340017589
We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3 on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes.
The main contribution that enables them to train such deep networks is the use of shortcut connections between layers, which let signals pass through directly and thus avoid the vanishing-gradient problem that can affect very deep plain networks. In more recent work, they redesigned their residual blocks to avoid over-fitting, which enabled them to train an even deeper 200-layer residual network. Deep ResNets can be seen as a simplified version of the highway network @cite_1 .
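As an illustration of the shortcut idea, a minimal residual block can be sketched as follows. This is a simplified sketch (identity shortcut only, no bottleneck or projection), not the exact blocks used in the cited papers.

```python
# Minimal sketch of a residual block with an identity shortcut (PyTorch).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x                        # shortcut: signal passes through unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)    # gradients also flow through the shortcut

x = torch.randn(1, 64, 56, 56)
print(ResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```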
{ "cite_N": [ "@cite_1" ], "mid": [ "1026270304" ], "abstract": [ "Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures." ] }
1604.04339
2340017589
We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3 on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes.
Long et al. @cite_5 first proposed the FCN framework for semantic segmentation, which is both effective and efficient. They also enhanced the final feature maps with those from intermediate layers, which enables their model to make finer predictions. Chen et al. @cite_16 increased the resolution of feature maps by removing some of the down-sampling operations and correspondingly introducing kernel dilation into their networks. They also found that a classifier composed of small kernels with a large dilation performs as well as a classifier with large kernels, and that reducing the size of the field-of-view has an adverse impact on performance. As post-processing, they applied dense CRFs to refine the predicted category score maps for further improvement.
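The effect of kernel dilation can be illustrated with a short sketch: a small kernel with a large dilation covers the same field-of-view as a large dense kernel, while keeping the feature-map resolution. The layer sizes below are illustrative.

```python
# Sketch: a 3x3 kernel with dilation d has an effective size of d*(3-1)+1,
# so dilation 3 matches the 7x7 field-of-view of a dense kernel.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 64, 64)

dense = nn.Conv2d(64, 64, kernel_size=7, padding=3)                 # 7x7 field-of-view
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=3, dilation=3)   # also 7x7

print(dense(x).shape, dilated(x).shape)  # both (1, 64, 64, 64); no down-sampling
```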
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "1903029394", "2964288706" ], "abstract": [ "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU." ] }
1604.04339
2340017589
We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3 on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes.
Zheng et al. @cite_19 simulate dense CRFs with a recurrent neural network (RNN), which can be trained end-to-end together with the underlying convolution layers. Lin et al. @cite_18 jointly trained CRFs with the underlying convolution layers; they are thus able to capture both 'patch-patch' and 'patch-background' context with CRFs, rather than merely pursuing local smoothness as most previous methods do.
{ "cite_N": [ "@cite_19", "@cite_18" ], "mid": [ "2124592697", "2296478878" ], "abstract": [ "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "We propose an approach for exploiting contextual information in semantic image segmentation, and particularly investigate the use of patch-patch context and patch-background context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied in order to avoid repeated expensive CRF inference during the course of back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform comprehensive evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets." ] }
1604.04339
2340017589
We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3 on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes.
There are some recent works in the literature exploring sampling methods during training, which are concurrent with ours. Loshchilov and Hutter @cite_2 studied mini-batch selection for image classification. They picked hard training images from the whole training set according to their current losses, which were lazily updated once an image had been forwarded through the network being trained. Shrivastava et al. @cite_13 proposed to select hard regions-of-interest (RoIs) for object detection. They compute the feature maps of an image only once and forward all RoIs of the image on top of these feature maps; they are thus able to find the hard RoIs at a small extra computational cost.
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2963516811", "2174940656" ], "abstract": [ "The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively.", "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5." ] }
1604.04339
2340017589
We propose a method for high-performance semantic image segmentation (or semantic pixel labelling) based on very deep residual networks, which achieves the state-of-the-art performance. A few design factors are carefully considered to this end. We make the following contributions. (i) First, we evaluate different variations of a fully convolutional residual network so as to find the best configuration, including the number of layers, the resolution of feature maps, and the size of field-of-view. Our experiments show that further enlarging the field-of-view and increasing the resolution of feature maps are typically beneficial, which however inevitably leads to a higher demand for GPU memories. To walk around the limitation, we propose a new method to simulate a high resolution network with a low resolution network, which can be applied during training and or testing. (ii) Second, we propose an online bootstrapping method for training. We demonstrate that online bootstrapping is critically important for achieving good accuracy. (iii) Third we apply the traditional dropout to some of the residual blocks, which further improves the performance. (iv) Finally, our method achieves the currently best mean intersection-over-union 78.3 on the PASCAL VOC 2012 dataset, as well as on the recent dataset Cityscapes.
The method of @cite_2 is similar to ours in that both select hard training samples based on the current losses of individual data points. However, we search for hard pixels only within the current mini-batch, rather than the whole training set; in this sense, the method of @cite_13 is closer to ours. To our knowledge, our method is the first to propose online bootstrapping of hard pixel samples for semantic image segmentation.
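A minimal sketch of such online bootstrapping of hard pixels, assuming a standard per-pixel cross-entropy loss and an illustrative keep-ratio, might look as follows; it is not the authors' exact implementation.

```python
# Hedged sketch of online bootstrapping for segmentation: keep only the K
# highest-loss pixels of the current mini-batch. k_ratio is an illustrative choice.
import torch
import torch.nn.functional as F

def bootstrapped_ce(logits, targets, k_ratio=0.25, ignore_index=255):
    # logits: (N, C, H, W); targets: (N, H, W) with class indices
    pixel_loss = F.cross_entropy(logits, targets, reduction="none",
                                 ignore_index=ignore_index)   # (N, H, W)
    flat = pixel_loss.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    hard, _ = torch.topk(flat, k)        # hardest pixels of this mini-batch only
    return hard.mean()

logits = torch.randn(2, 21, 64, 64, requires_grad=True)
targets = torch.randint(0, 21, (2, 64, 64))
loss = bootstrapped_ce(logits, targets)
loss.backward()
```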
{ "cite_N": [ "@cite_13", "@cite_2" ], "mid": [ "2963516811", "2174940656" ], "abstract": [ "The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9 and 76.3 mAP on PASCAL VOC 2007 and 2012 respectively.", "Deep neural networks are commonly trained using stochastic non-convex optimization procedures, which are driven by gradient information estimated on fractions (batches) of the dataset. While it is commonly accepted that batch size is an important parameter for offline tuning, the benefits of online selection of batches remain poorly understood. We investigate online batch selection strategies for two state-of-the-art methods of stochastic gradient-based optimization, AdaDelta and Adam. As the loss function to be minimized for the whole dataset is an aggregation of loss functions of individual datapoints, intuitively, datapoints with the greatest loss should be considered (selected in a batch) more frequently. However, the limitations of this intuition and the proper control of the selection pressure over time are open questions. We propose a simple strategy where all datapoints are ranked w.r.t. their latest known loss value and the probability to be selected decays exponentially as a function of rank. Our experimental results on the MNIST dataset suggest that selecting batches speeds up both AdaDelta and Adam by a factor of about 5." ] }
1604.04038
2336902217
Pushing is a motion primitive useful to handle objects that are too large, too heavy, or too cluttered to be grasped. It is at the core of much of robotic manipulation, in particular when physical interaction is involved. It seems reasonable then to wish for robots to understand how pushed objects move. In reality, however, robots often rely on approximations which yield models that are computable, but also restricted and inaccurate. Just how close are those models? How reasonable are the assumptions they are based on? To help answer these questions, and to get a better experimental understanding of pushing, we present a comprehensive and high-fidelity dataset of planar pushing experiments. The dataset contains timestamped poses of a circular pusher and a pushed object, as well as forces at the interaction.We vary the push interaction in 6 dimensions: surface material, shape of the pushed object, contact position, pushing direction, pushing speed, and pushing acceleration. An industrial robot automates the data capturing along precisely controlled position-velocity-acceleration trajectories of the pusher, which give dense samples of positions and forces of uniform quality. We finish the paper by characterizing the variability of friction, and evaluating the most common assumptions and simplifications made by models of frictional pushing in robotics.
One of the most common assumptions in robotic pushing, and possibly in robotic manipulation, is quasistatic interaction. In the context of pushing, quasistatic interaction means that the velocity of the involved objects is small enough that inertia is negligible. Instantaneous motion is then a consequence of the balance between contact forces, frictional forces, and gravity. The quasistatic assumption makes the problem more tractable, yielding simpler models, and is a reasonable assumption for the scales and speeds in much of robotic manipulation @cite_2 .
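As a back-of-envelope illustration, the quasistatic assumption can be checked by comparing the inertial force scale to the frictional force scale; the numbers below are illustrative, not measurements from the dataset.

```python
# Sketch: the quasistatic regime requires inertial forces to be negligible
# next to frictional ones. All numbers below are illustrative assumptions.
def quasistatic_ratio(accel, mu, g=9.81):
    """Ratio of inertial to support-friction force per unit mass: a / (mu * g)."""
    return accel / (mu * g)

# e.g. a pusher with ~0.05 m/s^2 accelerations on a surface with friction
# coefficient ~0.25:
print(quasistatic_ratio(accel=0.05, mu=0.25))  # ~0.02 << 1 -> quasistatic regime
```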
{ "cite_N": [ "@cite_2" ], "mid": [ "1762028192" ], "abstract": [ "\"Manipulation\" refers to a variety of physical changes made to the world around us. Mechanics of Robotic Manipulation addresses one form of robotic manipulation, moving objects, and the various processes involved---grasping, carrying, pushing, dropping, throwing, and so on. Unlike most books on the subject, it focuses on manipulation rather than manipulators. This attention to processes rather than devices allows a more fundamental approach, leading to results that apply to a broad range of devices, not just robotic arms. The book draws both on classical mechanics and on classical planning, which introduces the element of imperfect information. The book does not propose a specific solution to the problem of manipulation, but rather outlines a path of inquiry." ] }
1604.04279
2951794770
What does a typical visit to Paris look like? Do people first take photos of the Louvre and then the Eiffel Tower? Can we visually model a temporal event like "Paris Vacation" using current frameworks? In this paper, we explore how we can automatically learn the temporal aspects, or storylines of visual concepts from web data. Previous attempts focus on consecutive image-to-image transitions and are unsuccessful at recovering the long-term underlying story. Our novel Skipping Recurrent Neural Network (S-RNN) model does not attempt to predict each and every data point in the sequence, like classic RNNs. Rather, S-RNN uses a framework that skips through the images in the photo stream to explore the space of all ordered subsets of the albums via an efficient sampling procedure. This approach reduces the negative impact of strong short-term correlations, and recovers the latent story more accurately. We show how our learned storylines can be used to analyze, predict, and summarize photo albums from Flickr. Our experimental results provide strong qualitative and quantitative evidence that S-RNN is significantly better than other candidate methods such as LSTMs on learning long-term correlations and recovering latent storylines. Moreover, we show how storylines can help machines better understand and summarize photo streams by inferring a brief personalized story of each individual album.
Summarizing video clips is an active area of research @cite_15 . Many approaches have been developed, seeking cues ranging from low-level motion and appearance @cite_29 @cite_30 @cite_0 to high-level concepts @cite_42 @cite_26 and attention @cite_2 . This line of research has recently been extended to photo albums, where more external factors are considered for summarization besides the narrative structure. For example, in @cite_28 the authors put forward three criteria: quality, diversity, and coverage. Later, in @cite_40 a system is proposed that incorporates the social context (e.g., characters, aesthetics) into the summarization framework. Sadeghi et al. @cite_45 also consider whether a photo is memorable or iconic. Moreover, most of these approaches are supervised: the associated summaries for videos or albums are first collected by crowd-sourcing, and a model is then learned to generate good summaries. While performance-wise it may seem best to leverage human supervision and external factors when available, in practice this suffers from serious issues such as scalability, inconsistency in the ground-truth collection process, and limited generalizability when applied to other domains. On the other hand, the task of summarization becomes less ambiguous if the concept is given, which is exactly what we explore in this work.
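To make the three criteria concrete, a greedy selection scheme along these lines can be sketched as follows; the cosine-similarity measure and the weights are our own illustrative assumptions, not the method of @cite_28 .

```python
# Hedged sketch of summarization by the three criteria named above: greedily
# pick photos that score high on quality, are far from already-picked photos
# (diversity), and are close to the rest of the album (coverage).
import numpy as np

def summarize(feats, quality, k=5, w_q=1.0, w_d=1.0, w_c=1.0):
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T                  # cosine similarity between photos
    coverage = sim.mean(axis=1)            # how representative each photo is
    picked = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(feats)):
            if i in picked:
                continue
            diversity = 1.0 - (max(sim[i, j] for j in picked) if picked else 0.0)
            score = w_q * quality[i] + w_d * diversity + w_c * coverage[i]
            if score > best_score:
                best, best_score = i, score
        picked.append(best)
    return picked

feats, quality = np.random.rand(50, 128), np.random.rand(50)
print(summarize(feats, quality, k=5))      # indices of the 5 selected photos
```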
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_28", "@cite_29", "@cite_42", "@cite_0", "@cite_40", "@cite_45", "@cite_2", "@cite_15" ], "mid": [ "2144577430", "2103908291", "2028793570", "", "2106229755", "2147328102", "2135130633", "2073075412", "2006180404", "2094998392" ], "abstract": [ "We propose a unified approach for video summarization based on the analysis of video structures and video highlights. Two major components in our approach are scene modeling and highlight detection. Scene modeling is achieved by normalized cut algorithm and temporal graph analysis, while highlight detection is accomplished by motion attention modeling. In our proposed approach, a video is represented as a complete undirected graph and the normalized cut algorithm is carried out to globally and optimally partition the graph into video clusters. The resulting clusters form a directed temporal graph and a shortest path algorithm is proposed to efficiently detect video scenes. The attention values are then computed and attached to the scenes, clusters, shots, and subshots in a temporal graph. As a result, the temporal graph can inherently describe the evolution and perceptual importance of a video. In our application, video summaries that emphasize both content balance and perceptual quality can be generated directly from a temporal graph that embeds both the structure and attention information.", "Given the enormous growth in user-generated videos, it is becoming increasingly important to be able to navigate them efficiently. As these videos are generally of poor quality, summarization methods designed for well-produced videos do not generalize to them. To address this challenge, we propose to use web-images as a prior to facilitate summarization of user-generated videos. Our main intuition is that people tend to take pictures of objects to capture them in a maximally informative way. Such images could therefore be used as prior information to summarize videos containing a similar set of objects. In this work, we apply our novel insight to develop a summarization algorithm that uses the web-image based prior information in an unsupervised manner. Moreover, to automatically evaluate summarization algorithms on a large scale, we propose a framework that relies on multiple summaries obtained through crowdsourcing. We demonstrate the effectiveness of our evaluation framework by comparing its performance to that of multiple human evaluators. Finally, we present results for our framework tested on hundreds of user-generated videos.", "In this paper, we propose a framework for generation of representative subset summaries from large personal photo collections. These summaries will help in effective sharing and browsing of the personal photos. We define three salient properties: quality, diversity and coverage that an informative summary should satisfy. We propose methods to compute these properties using multidimensional content and context data. The objective of summarization is modeled as an optimization of these properties, given the size constraints. We also propose metrics which will evaluate the photo summaries based on their representation of the larger corpus and the ability to satisfy user's information needs. We use a dataset of 40K personal photos collected by crawling photo sharing and storage sites of sixteen users. Our experiments show that the summarization algorithm works better than the baseline algorithms.", "", "We present a video summarization approach for egocentric or “wearable” camera data. 
Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.", "New methods for detecting shot boundaries in video sequences and for extracting key frames using metrics based on information theory are proposed. The method for shot boundary detection relies on the mutual information (MI) and the joint entropy (JE) between the frames. It can detect cuts, fade-ins and fade-outs. The detection technique was tested on the TRECVID2003 video test set having different types of shots and containing significant object and camera motion inside the shots. It is demonstrated that the method detects both fades and abrupt cuts with high accuracy. The information theory measure provides us with better results because it exploits the inter-frame information in a more compact way than frame subtraction. It was also successfully compared to other methods published in literature. The method for key frame extraction uses MI as well. We show that it captures satisfactorily the visual content of the shot.", "Information overload is one of today's major concerns. As high-resolution digital cameras become increasingly pervasive, unprecedented amounts of social media are being uploaded to online social networks on a daily basis. In order to support users on selecting the best photos to create an online photo album, attention has been devoted to the development of automatic approaches for photo storytelling. In this paper, we present a novel photo collection summarization system that learns some of the users' social context by analyzing their online photo albums, and includes storytelling principles and face and image aesthetic ranking in order to assist users in creating new photo albums to be shared online. In an in-depth user study conducted with 12 subjects, the proposed system was validated as a first step in the photo album creation process, helping users reduce workload to accomplish such a task. Our findings suggest that a human audio video professional with cinematographic skills does not perform better than our proposed system.", "We propose the problem of automated photo album creation from an unordered image collection. The problem is difficult as it involves a number of complex perceptual tasks that facilitate selection and ordering of photos to create a compelling visual narrative. To help solve this problem, we collect (and will make available) a new benchmark dataset based on Flickr images. 
Flickr Album Dataset and provides a variety of annotations useful for the task, including manually created albums of various lengths. We analyze the problem and provide experimental evidence, through user studies, that both selection and ordering of photos within an album is important for human observers. To capture and learn rules of album composition, we propose a discriminative structured model capable of encoding simple preferences for contextual layout of the scene (e.g., spatial layout of faces, global scene context, and presence absence of attributes) and ordering between photos (e.g., exclusion principles or correlations). The parameters of the model are learned using a structured SVM framework. Once learned, the model allows automatic composition of photo albums from unordered and untagged collections of images. We quantitatively evaluate the results obtained using our model against manually created albums and baselines on a dataset of 63 personal photo collections from 5 different topics.", "Automatic generation of video summarization is one of the key techniques in video management and browsing. In this paper, we present a generic framework of video summarization based on the modeling of viewer's attention. Without fully semantic understanding of video content, this framework takes advantage of understanding of video content, this framework takes advantage of computational attention models and eliminates the needs of complex heuristic rules in video summarization. A set of methods of audio-visual attention model features are proposed and presented. The experimental evaluations indicate that the computational attention based approach is an effective alternative to video semantic analysis for video summarization.", "The demand for various multimedia applications is rapidly increasing due to the recent advance in the computing and network infrastructure, together with the widespread use of digital video technology. Among the key elements for the success of these applications is how to effectively and efficiently manage and store a huge amount of audio visual information, while at the same time providing user-friendly access to the stored data. This has fueled a quickly evolving research area known as video abstraction. As the name implies, video abstraction is a mechanism for generating a short summary of a video, which can either be a sequence of stationary images (keyframes) or moving images (video skims). In terms of browsing and navigation, a good video abstract will enable the user to gain maximum information about the target video sequence in a specified time constraint or sufficient information in the minimum time. Over past years, various ideas and techniques have been proposed towards the effective abstraction of video contents. The purpose of this article is to provide a systematic classification of these works. We identify and detail, for each approach, the underlying components and how they are addressed in specific works." ] }
1604.04279
2951794770
What does a typical visit to Paris look like? Do people first take photos of the Louvre and then the Eiffel Tower? Can we visually model a temporal event like "Paris Vacation" using current frameworks? In this paper, we explore how we can automatically learn the temporal aspects, or storylines of visual concepts from web data. Previous attempts focus on consecutive image-to-image transitions and are unsuccessful at recovering the long-term underlying story. Our novel Skipping Recurrent Neural Network (S-RNN) model does not attempt to predict each and every data point in the sequence, like classic RNNs. Rather, S-RNN uses a framework that skips through the images in the photo stream to explore the space of all ordered subsets of the albums via an efficient sampling procedure. This approach reduces the negative impact of strong short-term correlations, and recovers the latent story more accurately. We show how our learned storylines can be used to analyze, predict, and summarize photo albums from Flickr. Our experimental results provide strong qualitative and quantitative evidence that S-RNN is significantly better than other candidate methods such as LSTMs on learning long-term correlations and recovering latent storylines. Moreover, we show how storylines can help machines better understand and summarize photo streams by inferring a brief personalized story of each individual album.
Recurrent neural networks @cite_10 are a class of neural networks that can carry information across time steps. Compared to other models for sequential modeling (e.g., hidden Markov models, linear dynamical systems), they are better at capturing long-range and high-order time dependencies, and have shown superior performance on tasks like language modeling @cite_13 and text generation @cite_4 . In this work we extend the network to model high-dimensional trajectories in videos and user albums through the space of continuous visual features. Interestingly, since our network is trained to predict images several steps away, it can be viewed as a simple and effective way to learn long-term memories @cite_16 and predict context @cite_22 as well. Fundamentally, an LSTM still looks at only the next image and decides whether to store it in memory, whereas S-RNN reasons over all future images and decides which of them to store in memory (greedy vs. global). We outperform multiple LSTM baselines in our results. Furthermore, running LSTMs directly on high-dimensional continuous features is non-trivial, and we present a network that accomplishes that.
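The contrast between predicting the immediate successor and reasoning over several future images can be illustrated with a toy objective: let the prediction at each step match the best of the next few frames. This is only an illustrative sketch of the skipping idea (the names rnn, head, skip_loss, and max_skip are ours), not the paper's actual sampling procedure.

```python
# Toy sketch: an RNN over image features whose loss lets the prediction match
# the best of the next few frames, rather than forcing the immediate successor
# as a classic RNN does.
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=512, hidden_size=512, batch_first=True)
head = nn.Linear(512, 512)

def skip_loss(feats, max_skip=5):
    # feats: (1, T, 512) sequence of photo features
    out, _ = rnn(feats)
    pred = head(out)                       # predicted "next" feature at each step
    total = 0.0
    T = feats.size(1)
    for t in range(T - max_skip):
        candidates = feats[:, t + 1 : t + 1 + max_skip]   # the next few frames
        dists = ((candidates - pred[:, t : t + 1]) ** 2).mean(dim=-1)
        total = total + dists.min()        # skip to whichever frame fits best
    return total / (T - max_skip)

loss = skip_loss(torch.randn(1, 20, 512))
loss.backward()
```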
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_10", "@cite_16", "@cite_13" ], "mid": [ "196214544", "2950133940", "2110485445", "", "179875071" ], "abstract": [ "Recurrent Neural Networks (RNNs) are very powerful sequence models that do not enjoy widespread use because it is extremely difficult to train them properly. Fortunately, recent advances in Hessian-free optimization have been able to overcome the difficulties associated with training RNNs, making it possible to apply them successfully to challenging sequence problems. In this paper we demonstrate the power of RNNs trained with the new Hessian-Free optimizer (HF) by applying them to character-level language modeling tasks. The standard RNN architecture, while effective, is not ideally suited for such tasks, so we introduce a new RNN variant that uses multiplicative (or \"gated\") connections which allow the current input character to determine the transition matrix from one hidden state vector to the next. After training the multiplicative RNN with the HF optimizer for five days on 8 high-end Graphics Processing Units, we were able to surpass the performance of the best previous single method for character-level language modeling – a hierarchical non-parametric sequence model. To our knowledge this represents the largest recurrent neural network application to date.", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implicitly by its effects on processing rather than explicitly (as in a spatial representation). The current report develops a proposal along these lines first described by Jordan (1986) which involves the use of recurrent links in order to provide networks with a dynamic memory. In this approach, hidden unit patterns are fed back to themselves; the internal representations which develop thus reflect task demands in the context of prior internal states. A set of simulations is reported which range from relatively simple problems (temporal version of XOR) to discovering syntactic semantic features for words. The networks are able to learn interesting internal representations which incorporate task demands with memory demands; indeed, in this approach the notion of memory is inextricably bound up with task processing. These representations reveal a rich structure, which allows them to be highly context-dependent while also expressing generalizations across classes of items. 
These representations suggest a method for representing lexical categories and the type token distinction.", "", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50 reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18 reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5 on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition" ] }
1604.04206
2335881461
A recent work shows how we can optimize a tree based mode of operation for a rate 1 hash function. In particular, an algorithm and a theorem are presented for selecting a good tree topology in order to optimize both the running time and the number of processors at each step of the computation. Because this paper deals only with trees having their leaves at the same depth, the number of saved computing resources is perfectly optimal only for this category of trees. In this note, we address the more general case and describe a simple algorithm which, starting from such a tree topology, reworks it to further reduce the number of processors and the total amount of work done to hash a message.
A solution to this problem is a multiset of arities. Note that with such a solution, we can construct a tree having exactly @math leaves, a tree where the number of nodes at the first level is exactly @math , the number of nodes at the second level is @math , and so on. Among all possible solutions, we would like the one which minimizes both the number of processors and the amount of work. We recall that the amount of work, denoted @math , corresponds to the total computation time needed to process a message of length @math . For a tree having its leaves at the same depth, it can be evaluated as: @math We recall the third theorem of @cite_2 , which selects good parameters for a tree having its leaves at the same depth.
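The level-by-level bookkeeping implied by a multiset of arities can be sketched as follows; the reading of node counts as repeated ceiling divisions, and the convention of counting one hash-function call per node, are assumptions made for illustration.

```python
# Sketch: given a message of l leaves and a list of arities (one per level,
# leaves first), the number of nodes at each level follows by repeated
# ceiling division until a single root remains.
from math import ceil

def level_sizes(l, arities):
    sizes, n = [], l
    for a in arities:
        n = ceil(n / a)       # nodes at this level
        sizes.append(n)
    assert n == 1, "the arities should reduce the tree to a single root"
    return sizes

sizes = level_sizes(l=100, arities=[5, 5, 4])
print(sizes)                                        # [20, 4, 1]
print("processors at the widest step:", max(sizes)) # 20
print("total nodes hashed:", sum(sizes))            # 25 (one call per node)
```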
{ "cite_N": [ "@cite_2" ], "mid": [ "2200726339" ], "abstract": [ "This paper focuses on parallel hash functions based on tree modes of operation for a compression function. We discuss the various forms of optimality that can be obtained when designing such parallel hash functions. The first result is a scheme which optimizes the tree topology in order to decrease at best the running time. Then, without affecting the optimal running time we show that we can slightly change the corresponding tree topology so as to decrease at best the number of required processors as well. Consequently, the resulting scheme optimizes in the first place the running time and in the second place the number of required processors. The present work is of independent interest if we consider the problem of parallelizing the evaluation of an expression where the operator used is neither associative nor commutative." ] }
1604.04114
2952289685
This paper presents a study of operational and type-theoretic properties of different resolution strategies in Horn clause logic. We distinguish four different kinds of resolution: resolution by unification (SLD-resolution), resolution by term-matching, the recently introduced structural resolution, and partial (or lazy) resolution. We express them all uniformly as abstract reduction systems, which allows us to undertake a thorough comparative analysis of their properties. To match this small-step semantics, we propose to take Howard's System H as a type-theoretic semantic counterpart. Using System H, we interpret Horn formulas as types, and a derivation for a given formula as the proof term inhabiting the type given by the formula. We prove soundness of these abstract reduction systems relative to System H, and we show completeness of SLD-resolution and structural resolution relative to System H. We identify conditions under which structural resolution is operationally equivalent to SLD-resolution. We show correspondence between term-matching resolution for Horn clause programs without existential variables and term rewriting.
To the best of our knowledge, studying logic programming proof-theoretically dates back to Girard's suggestion to use the cut rule to model resolution for Horn formulas (Girard 1989, Chapter 13.4). Miller et al. @cite_0 use cut-free sequent calculus to represent a proof for a query. More specifically, given a query @math and a logic program @math , @math has a refutation iff there is a derivation in cut-free sequent calculus for @math . Using sequent calculus as a proof-theoretic framework gives the flexibility to incorporate different kinds of formulas, e.g. classical formulas and linear formulas, into this framework.
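For concreteness, goal-directed proof search for Horn clauses can be sketched as a small SLD-style interpreter: the leftmost goal is unified with a (freshened) clause head and replaced by the clause body until no goals remain. The term representation and naming scheme below are our own illustrative choices.

```python
# Minimal sketch of SLD-style goal reduction for Horn clauses. Terms are
# tuples ('functor', args...); variables are strings starting uppercase.
# No occurs check, as in standard Prolog implementations.
import itertools

def is_var(t): return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b: return s
    if is_var(a): return {**s, a: b}
    if is_var(b): return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None: return None
        return s
    return None

def rename(t, n):                      # freshen clause variables per use
    if is_var(t): return f"{t}_{n}"
    if isinstance(t, tuple): return (t[0],) + tuple(rename(x, n) for x in t[1:])
    return t

def solve(goals, program, s, fresh=itertools.count()):
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for head, body in program:
        n = next(fresh)
        s2 = unify(goal, rename(head, n), s)
        if s2 is not None:
            yield from solve([rename(g, n) for g in body] + rest, program, s2)

# program: nat(0). nat(s(X)) :- nat(X). Query: ?- nat(s(s(0))).
prog = [(("nat", ("0",)), []),
        (("nat", ("s", "X")), [("nat", "X")])]
print(next(solve([("nat", ("s", ("s", ("0",))))], prog, {})))  # a refutation exists
```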
{ "cite_N": [ "@cite_0" ], "mid": [ "2093111544" ], "abstract": [ "Abstract Miller, D., G. Nadathur, F. Pfenning and A. Scedrov, Uniform proofs as a foundation for logic programming, Annals of Pure and Applied Logic 51 (1991) 125–157. A proof-theoretic characterization of logical languages that form suitable bases for Prolog-like programming languages is provided. This characterization is based on the principle that the declarative meaning of a logic program, provided by provability in a logical system, should coincide with its operational meaning, provided by interpreting logical connectives as simple and fixed search instructions. The operational semantics is formalized by the identification of a class of cut-free sequent proofs called uniform proofs . A uniform proof is one that can be found by a goal-directed search that respects the interpretation of the logical connectives as search instructions. The concept of a uniform proof is used to define the notion of an abstract logic programming language , and it is shown that first-order and higher-order Horn clauses with classical provability are examples of such a language. Horn clauses are then generalized to hereditary Harrop formulas and it is shown that first-order and higher-order versions of this new class of formulas are also abstract logic programming languages if the inference rules are those of either intuitionistic or minimal logic. The programming language significance of the various generalizations to first-order Horn clauses is briefly discussed." ] }
1604.04114
2952289685
This paper presents a study of operational and type-theoretic properties of different resolution strategies in Horn clause logic. We distinguish four different kinds of resolution: resolution by unification (SLD-resolution), resolution by term-matching, the recently introduced structural resolution, and partial (or lazy) resolution. We express them all uniformly as abstract reduction systems, which allows us to undertake a thorough comparative analysis of their properties. To match this small-step semantics, we propose to take Howard's System H as a type-theoretic semantic counterpart. Using System H, we interpret Horn formulas as types, and a derivation for a given formula as the proof term inhabiting the type given by the formula. We prove soundness of these abstract reduction systems relative to System H, and we show completeness of SLD-resolution and structural resolution relative to System H. We identify conditions under which structural resolution is operationally equivalent to SLD-resolution. We show correspondence between term-matching resolution for Horn clause programs without existential variables and term rewriting.
The interactive theorem prover Twelf @cite_8 pioneered the implementation of proof search on top of a dependently typed system called LF @cite_9 . Like Twelf, we believe that type systems serve as a suitable foundation for logic programming. Compared to Twelf, we specify and analyze several resolution strategies other than SLD-resolution and study their intrinsic relations; we also pay more attention to various kinds of productivity.
{ "cite_N": [ "@cite_9", "@cite_8" ], "mid": [ "2102259218", "2519527069" ], "abstract": [ "The LF logical framework codifies a methodology for representing deductive systems, such as programming languages and logics, within a dependently typed λ-calculus. In this methodology, the syntactic and deductive apparatus of a system is encoded as the canonical forms of associated LF types; an encoding is correct (adequate) if and only if it defines a compositional bijection between the apparatus of the deductive system and the associated canonical forms. Given an adequate encoding, one may establish metatheoretic properties of a deductive system by reasoning about the associated LF representation. The Twelf implementation of the LF logical framework is a convenient and powerful tool for putting this methodology into practice. Twelf supports both the representation of a deductive system and the mechanical verification of proofs of metatheorems about it. The purpose of this article is to provide an up-to-date overview of the LF λ-calculus, the LF methodology for adequate representation, and the Twelf methodology for mechanizing metatheory. We begin by defining a variant of the original LF language, called Canonical LF, in which only canonical forms (long βη-normal forms) are permitted. This variant is parameterized by a subordination relation, which enables modular reasoning about LF representations. We then give an adequate representation of a simply typed λ-calculus in Canonical LF, both to illustrate adequacy and to serve as an object of analysis. Using this representation, we formalize and verify the proofs of some metatheoretic results, including preservation, determinacy, and strengthening. Each example illustrates a significant aspect of using LF and Twelf for formalized metatheory.", "Twelf is a meta-logical framework for the specification, implementation, and meta-theory of deductive systems from the theory of programming languages and logics. It relies on the LF type theory and the judgments-as-types methodology for specification [HHP93], a constraint logic programming interpreter for implementation [Pfe91], and the meta-logic M 2 for reasoning about object languages encoded in LF [SP98]. It is a significant extension and complete reimplementation of the Elf system [Pfe94]. Twelf is written in Standard ML and runs under SML of New Jersey and MLWorks on Unix and Window platfbrms. The current version (1.2) is distributed with a complete manual, example suites, a tutorial in the form of on-line lecture notes [Pfe], and an Emacs interface. Source and binary distributions are accessible via the Twelf home page http: www.cs.cmu.edu - twelf." ] }
1604.04114
2952289685
This paper presents a study of operational and type-theoretic properties of different resolution strategies in Horn clause logic. We distinguish four different kinds of resolution: resolution by unification (SLD-resolution), resolution by term-matching, the recently introduced structural resolution, and partial (or lazy) resolution. We express them all uniformly as abstract reduction systems, which allows us to undertake a thorough comparative analysis of their properties. To match this small-step semantics, we propose to take Howard's System H as a type-theoretic semantic counterpart. Using System H, we interpret Horn formulas as types, and a derivation for a given formula as the proof term inhabiting the type given by the formula. We prove soundness of these abstract reduction systems relative to System H, and we show completeness of SLD-resolution and structural resolution relative to System H. We identify conditions under which structural resolution is operationally equivalent to SLD-resolution. We show correspondence between term-matching resolution for Horn clause programs without existential variables and term rewriting.
Structural resolution is the result of joint research efforts by Komendantskaya et al. ( @cite_10 , @cite_18 , @cite_17 ). The goal of the analysis of structural resolution is to support coinductive reasoning in logic programming. For example, given the query @math in Example , one may want not only to obtain a substitution for @math , but also a guarantee that the queries to @math are nonterminating and, moreover, that derivations for @math will not fail if continued to infinity. To support this, a compile-time technique has been developed @cite_2 @cite_15 to detect observational productivity of logic programs.
{ "cite_N": [ "@cite_18", "@cite_2", "@cite_15", "@cite_10", "@cite_17" ], "mid": [ "1593507982", "2276638936", "2963613549", "1030180884", "1989408212" ], "abstract": [ "Coalgebra may be used to provide semantics for SLD-derivations, both finite and infinite. We first give such semantics to classical SLD-derivations, proving results such as adequacy, soundness and completeness. Then, based upon coalgebraic semantics, we propose a new sound and complete algorithm for parallel derivations. We analyse this new algorithm in terms of the Theory of Observables, and we prove correctness and full abstraction results.", "Logic programming (LP) is a programming language based on first-order Horn clause logic that uses SLD-resolution as a semi-decision procedure. Finite SLD-computations are inductively sound and complete with respect to least Herbrand models of logic programs. Dually, the corecursive approach to SLD-resolution views infinite SLD-computations as successively approximating infinite terms contained in programs' greatest complete Herbrand models. State-of-the-art algorithms implementing corecursion in LP are based on loop detection. However, such algorithms support inference of logical entailment only for rational terms, and they do not account for the important property of productivity in infinite SLD-computations. Loop detection thus lags behind coinductive methods in interactive theorem proving (ITP) and term-rewriting systems (TRS). Structural resolution is a newly proposed alternative to SLD-resolution that makes it possible to define and semi-decide a notion of productivity appropriate to LP. In this paper, we prove soundness of structural resolution relative to Herbrand model semantics for productive inductive, coinductive, and mixed inductive-coinductive logic programs. We introduce two algorithms that support coinductive proof search for infinite productive terms. One algorithm combines the method of loop detection with productive structural resolution, thus guaranteeing productivity of coinductive proofs for infinite rational terms. The other allows to make lazy sound observations of fragments of infinite irrational productive terms. This puts coinductive methods in LP on par with productivity-based observational approaches to coinduction in ITP and TRS.", "Automated analysis of recursive derivations in logic programming is known to be a hard problem. Both termination and non-termination are undecidable problems in Turing-complete languages. However, some declarative languages offer a practical work-around for this problem, by making a clear distinction between whether a program is meant to be understood inductively or coinductively. For programs meant to be understood inductively, termination must be guaranteed, whereas for programs meant to be understood coinductively, productive non-termination (or “productivity”) must be ensured. In practice, such classification helps to better understand and implement some non-terminating computations.", "We introduce a Three Tier Tree Calculus (T 3C) that defines in a systematic way three tiers of tree structures underlying proof search in logic programming. We use T 3C to define a new – structural –version of resolution for logic programming.", "Coinductive definitions, such as that of an infinite stream, may often be described by elegant logic programs, but ones for which SLD-refutation is of no value as SLD-derivations fall into infinite loops. 
Such definitions give rise to questions of lazy corecursive derivations and parallelism, as execution of such logic programs can have both recursive and corecursive features at once. Observational and coalgebraic semantics have been used to study them abstractly. The programming developments have often occurred separately and have usually been implementation-led. Here, we give a coherent semantics-led account of the issues, starting with abstract category theoretic semantics, developing coalgebra to characterize naturally arising trees and proceeding towards implementation of a new dialect, CoALP, of logic programming, characterised by guarded lazy corecursion and parallelism." ] }
1604.04114
2952289685
This paper presents a study of operational and type-theoretic properties of different resolution strategies in Horn clause logic. We distinguish four different kinds of resolution: resolution by unification (SLD-resolution), resolution by term-matching, the recently introduced structural resolution, and partial (or lazy) resolution. We express them all uniformly as abstract reduction systems, which allows us to undertake a thorough comparative analysis of their properties. To match this small-step semantics, we propose to take Howard's System H as a type-theoretic semantic counterpart. Using System H, we interpret Horn formulas as types, and a derivation for a given formula as the proof term inhabiting the type given by the formula. We prove soundness of these abstract reduction systems relative to System H, and we show completeness of SLD-resolution and structural resolution relative to System H. We identify conditions under which structural resolution is operationally equivalent to SLD-resolution. We show correspondence between term-matching resolution for Horn clause programs without existential variables and term rewriting.
@cite_16 's coinductive logic programming (CoLP) extends SLD-resolution with a method to use atomic coinductive hypotheses. That is, during execution, if the current queries @math contain a query @math that unifies via @math with a @math from earlier in the execution, then the next step of resolution is given by @math . The coinductive hypothesis mechanism in CoLP can be viewed as a form of loop detection. However, CoLP cannot detect hypotheses for more complex patterns of coinduction that produce coinductive subgoals which fail to unify. As discussed in the introduction, it is not a suitable tool for analyzing the productivity of infinite data structures in logic programming.
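The loop-detection flavor of CoLP's coinductive hypothesis can be made concrete with a minimal Python sketch (illustrative only, and far simpler than CoLP's rational-tree machinery; the term encoding and the zeros-stream program are invented for the example): before expanding a goal, the interpreter checks whether the goal unifies with an ancestor goal on the derivation path and, if so, closes the derivation coinductively.

```python
# Minimal sketch of CoLP-style atomic coinductive hypotheses.
# Terms are tuples ('functor', arg, ...); variables are strings like '?X';
# other strings are constants. There is no occurs check, so bindings may be
# circular -- that is how rational (infinite but regular) terms appear here.
import itertools

FRESH = itertools.count()

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and \
       len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, m):
    """Freshen clause variables so each clause use is independent."""
    if is_var(t):
        return m.setdefault(t, '?v%d' % next(FRESH))
    if isinstance(t, tuple):
        return (t[0],) + tuple(rename(x, m) for x in t[1:])
    return t

# zeros(scons(0, S)) :- zeros(S).   -- the stream of zeros, coinductively
PROGRAM = [(('zeros', ('scons', '0', '?S')), [('zeros', '?S')])]

def solve(goal, s, ancestors):
    # Coinductive hypothesis: a goal unifying with an ancestor succeeds.
    # (A full CoLP would also keep exploring clause expansions; omitted.)
    for anc in ancestors:
        s2 = unify(goal, anc, s)
        if s2 is not None:
            yield s2
            return
    for head, body in PROGRAM:
        m = {}
        s2 = unify(goal, rename(head, m), s)
        if s2 is not None:
            yield from solve_all([rename(g, m) for g in body],
                                 s2, ancestors + [goal])

def solve_all(goals, s, ancestors):
    if not goals:
        yield s
        return
    for s2 in solve(goals[0], s, ancestors):
        yield from solve_all(goals[1:], s2, ancestors)

# Plain SLD would loop forever on zeros(?X); the hypothesis closes it,
# binding ?X to the rational term S = scons(0, S).
print(next(solve(('zeros', '?X'), {}, [])))
```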
{ "cite_N": [ "@cite_16" ], "mid": [ "1963107927" ], "abstract": [ "Coinduction has recently been introduced as a powerful technique for reasoning about unfounded sets, unbounded structures, and interactive computations. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. In this paper we discuss the introduction of coinduction into logic programming. We discuss applications of coinductive logic programming to verification and model checking, lazy evaluation, concurrent logic programming and non-monotonic reasoning." ] }
1604.04125
2336939173
Humans perceive their surroundings in great detail even though most of our visual field is reduced to low-fidelity color-deprived (e.g. dichromatic) input by the retina. In contrast, most deep learning architectures are computationally wasteful in that they consider every part of the input when performing an image processing task. Yet, the human visual system is able to perform visual reasoning despite having only a small fovea of high visual acuity. With this in mind, we wish to understand the extent to which connectionist architectures are able to learn from and reason with low acuity, distorted inputs. Specifically, we train autoencoders to generate full-detail images from low-detail "foveations" of those images and then measure their ability to reconstruct the full-detail images from the foveated versions. By varying the type of foveation, we can study how well the architectures can cope with various types of distortion. We find that the autoencoder compensates for lower detail by learning increasingly global feature functions. In many cases, the learnt features are suitable for reconstructing the original full-detail image. For example, we find that the networks accurately perceive color in the periphery, even when 75% of the input is achromatic.
Denoising images has been investigated using architectures other than autoencoders. @cite_13 presented an approach to remove noise from corrupted inputs using sparse coding and deep networks pre-trained with DAEs. Their end-to-end system could automatically remove complex patterns like superimposed text from an image, in addition to simple patterns like pixels missing at random. The types of noise they investigated were white Gaussian noise, salt-and-pepper (SP) noise (randomly flipped pixels), and image background changes. Along the same lines, post-deblurring denoising @cite_10 and convolutional neural networks for denoising natural images corrupted by patterns such as specks, dirt, and rain have also been investigated @cite_4 .
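The denoising-autoencoder training scheme itself is simple to state: corrupt the input, then train the network to reconstruct the clean signal. Below is a minimal sketch on synthetic data (the sizes, learning rate, and toy data are invented; this is not the architecture of @cite_13):

```python
# Minimal denoising autoencoder: one hidden layer with tied weights,
# trained by plain gradient descent to undo salt-and-pepper corruption.
import numpy as np

rng = np.random.default_rng(0)

def salt_and_pepper(x, rate=0.25):
    """Flip a random fraction of entries to 0 or 1 (SP noise)."""
    mask = rng.random(x.shape) < rate
    noise = (rng.random(x.shape) < 0.5).astype(x.dtype)
    return np.where(mask, noise, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d, h, n = 64, 32, 1000
W = rng.normal(0.0, 0.1, (h, d))               # tied encoder/decoder weights
b, c = np.zeros(h), np.zeros(d)
X = (rng.random((n, d)) < 0.3).astype(float)   # toy binary "images"

lr = 0.5
for epoch in range(200):
    Xn = salt_and_pepper(X)                 # corrupt ...
    H = sigmoid(Xn @ W.T + b)               # ... encode ...
    Xhat = sigmoid(H @ W + c)               # ... decode
    E = Xhat - X                            # reconstruct the CLEAN input
    dZ2 = E * Xhat * (1 - Xhat)             # backprop through decoder
    dZ1 = (dZ2 @ W.T) * H * (1 - H)         # backprop through encoder
    W -= lr * (dZ1.T @ Xn + H.T @ dZ2) / n  # both paths hit the tied W
    b -= lr * dZ1.sum(0) / n
    c -= lr * dZ2.sum(0) / n
    if epoch % 50 == 0:
        print(epoch, float((E ** 2).mean()))

# Denoise a freshly corrupted sample:
x = salt_and_pepper(X[:1])
print(sigmoid(sigmoid(x @ W.T + b) @ W + c).round(2))
```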
{ "cite_N": [ "@cite_10", "@cite_13", "@cite_4" ], "mid": [ "1973567017", "2146337213", "2098477387" ], "abstract": [ "Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur.", "We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. 
This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters." ] }
1604.03793
2336733340
The recently developed massively parallel satisfiability (SAT) solver HordeSAT was designed in a modular way to allow the integration of any sequential CDCL-based SAT solver in its core. We integrated the QCDCL-based quantified Boolean formula (QBF) solver DepQBF in HordeSAT to obtain a massively parallel QBF solver—HordeQBF. In this paper we describe the details of this integration and report on results of the experimental evaluation of HordeQBF’s performance. HordeQBF achieves superlinear average and median speedup on the hard application instances of the 2014 QBF Gallery.
Approaches to parallel QBF solving are based on shared and distributed memory architectures. PQSolve @cite_1 is an early parallel DPLL @cite_6 solver without knowledge sharing. It comes with a dynamic master-slave framework implemented using the message passing interface (MPI) @cite_25 . The search space is partitioned among the master and the slaves via variable assignments. QMiraXT @cite_22 is a multithreaded QCDCL solver with search space partitioning. PaQuBE @cite_13 is an MPI-based parallel variant of the QCDCL solver QuBE @cite_10 . Clause and cube sharing in PaQuBE can be adapted dynamically at run time; its search space is partitioned as in the SAT solver PSATO @cite_16 . The MPI-based solver MPIDepQBF @cite_18 implements a master-worker architecture without knowledge sharing. A worker consists of an instance of the QCDCL solver DepQBF @cite_23 . The master balances the workload by generating subproblems defined by variable assignments (assumptions), which are solved by the workers. Parallel solving approaches have also been presented for quantified CSPs @cite_20 and non-PCNF QBFs @cite_14 .
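A minimal sketch of the assumption-based master-worker splitting shared by several of these solvers, with Python's multiprocessing standing in for MPI and a brute-force check on an invented toy CNF standing in for a real (Q)SAT solver:

```python
# Minimal sketch of assumption-based search-space splitting: the master
# enumerates "cubes" (partial assignments to the first few variables) and
# workers solve the formula under those assumptions.
from itertools import product
from multiprocessing import Pool

# Toy CNF over variables 1..4; a clause is a list of signed ints.
CNF = [[1, 2], [-1, 3], [-2, -3], [3, 4], [-4, 1]]
N_VARS = 4
SPLIT_DEPTH = 2   # master fixes variables 1..2 -> four subproblems

def solve_under_assumptions(cube):
    """Worker: brute-force the free variables under the fixed cube."""
    fixed = dict(cube)
    free = [v for v in range(1, N_VARS + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        model = {**fixed, **dict(zip(free, bits))}
        if all(any(model[abs(l)] == (l > 0) for l in cl) for cl in CNF):
            return cube, model
    return cube, None

if __name__ == '__main__':
    cubes = [tuple(zip(range(1, SPLIT_DEPTH + 1), bits))
             for bits in product([False, True], repeat=SPLIT_DEPTH)]
    with Pool(processes=4) as pool:   # workers; the main process is master
        for cube, model in pool.imap_unordered(solve_under_assumptions, cubes):
            print(dict(cube), '->', model if model else 'UNSAT under this cube')
```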
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_10", "@cite_1", "@cite_6", "@cite_23", "@cite_16", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "288696217", "2074035359", "2400844085", "", "1513004633", "1571893670", "9290171", "2092775861", "1886779203", "2081612620", "2159102201" ], "abstract": [ "Inspired by recent work on parallel SAT solving, we present a lightweight approach for solving quantified Boolean formulas (QBFs) in parallel. In particular, our approach uses a sequential state-of-the-art QBF solver to evaluate subformulas in working processes. It abstains from globally exchanging information between the workers, but keeps learnt information only locally. To this end, we equipped the state-of-the-art QBF solver DepQBF with assumption-based reasoning and integrated it in our novel solver MPIDepQBF as backend solver. Extensive experiments on standard computers as well as on the supercomputer Tsubame show the impact of our approach.", "In this paper, we present the main lines and a first implementation of an open general parallel architecture that we propose for various computation problems about Quantified Boolean Formulae. One main feature of our approach is to deal with QBF without syntactic restrictions, as prenex form or conjunctive normal form. Another main point is to develop a general parallel framework in which we will be able in the future to introduce various specialized algorithms dedicated to particular subproblems.", "This paper introduces the state-of-the-art multithreaded QBF solver QMiraXT. QMiraXT is the first parallel QBF Solver that supports advanced features such as: conflict solution analysis with non-chronological backtracking; knowledge sharing between threads; and novel preprocessing and decision heuristics. By utilizing these features, QMiraXT is significantly faster on industrial and formal verification problems than other solvers. In summary, with 4 threads, QMiraXT solved 22 more benchmarks, providing a speedup of 3.82 compared to the next best sequential solver.", "", "In this paper, we present PQSOLVE, a distributed theorem-prover for Quantified Boolean Formulae. First, we introduce our sequential algorithm QSOLVE, which uses new heuristics and improves the use of known heuristics to prune the search tree. As a result, QSOLVE is more efficient than the QSAT-solvers previously known. We have parallelized QSOLVE. The resulting distributed QSAT-solver PQSOLVE uses parallel search techniques, which we have developed for distributed game tree search. PQSOLVE runs efficiently on distributed systems, i. e. parallel systems without any shared memory. We briefly present experiments that show a speedup of about 114 on 128 processors. To the best of our knowledge we are the first to introduce an efficient parallel QSAT-solver.", "The high computational complexity of advanced reasoning tasks such as belief revision and planning calls for efficient and reliable algorithms for reasoning problems harder than NP. In this paper we propose Evaluate, an algorithm for evaluating Quantified Boolean Formulae, a language that extends propositional logic in a way such that many advanced forms of propositional reasoning, e.g., reasoning about knowledge, can be easily formulated as evaluation of a QBF. Algorithms for evaluation of QBFs are suitable for the experimental analysis on a wide range of complexity classes, a property not easily found in other formalisms. 
Evaluate is based on a generalization of the Davis-Putnam procedure for SAT, and is guaranteed to work in polynomial space. Before presenting Evaluate, we discuss all the abstract properties of QBFs that we singled out to make the algorithm more efficient. We also briefly mention the main results of the experimental analysis, which is reported elsewhere.", "We present DepQBF 0.1, a new search-based solver for quantified boolean formulae (QBF). It integrates compact dependency graphs to overcome the restrictions imposed by linear quantifier prefixes of QBFs in prenex conjunctive normal form (PCNF). DepQBF 0.1 was placed first in the main track of QBFEVAL’10 in a score-based ranking. We provide a general system overview and describe selected orthogonal features such as restarts and removal of learnt constraints.", "Abstract We present a distributed parallel prover for propositional satisfiability (SAT), called PSATO, for networks of workstations. PSATO is based on the sequential SAT prover SATO, which is an efficient implementation of the Davis –Putnam algorithm. The master–slave model is used for communication. A simple and effective workload balancing method distributes the workload among workstations. A key property of our method is that the concurrent processes explore disjoint portions of the search space. In this way, we use parallelism without introducing redundant search. Our approach provides solutions to the problems of (i) cumulating intermediate results of separate runs of reasoning programs; (ii) designing highly scalable parallel algorithms and (iii) supporting “fault-tolerant” distributed computing. Several dozens of open problems in the study of quasigroups have been solved using PSATO. We also show how a useful technique called the cyclic group construction has been coded in propositional logic.", "In this paper we present the parallel QBF Solver PaQuBE. This new solver leverages the additional computational power that can be exploited from modern computer architectures, from pervasive multi-core boxes to clusters and grids, to solve more relevant instances faster than previous generation solvers. Furthermore, PaQuBE's progressive MPI based parallel framework is the first to support advanced knowledge sharing in which solution cubes as well as conflict clauses can be exchanged between solvers. Knowledge sharing plays a critical role in the performance of PaQuBE. However, due to the overhead associated with sending and receiving MPI messages, and the restricted communication network bandwidth available between solvers, it is essential to optimize not only what information is shared, but the way in which it is shared. In this context, we compare multiple conflict clause and solution cube sharing strategies, and finally show that an adaptive method provides the best overall results.", "MPI (Message Passing Interface) is a specification for a standard library for message passing that was defined by the MPI Forum, a broadly based group of parallel computer vendors, library writers, and applications specialists. Multiple implementations of MPI have been developed. In this paper, we describe MPICH, unique among existing implementations in its design goal of combining portability with high performance. We document its portability and performance and describe the architecture by which these features are simultaneously achieved. 
We also discuss the set of tools that accompany the free distribution of MPICH, which constitute the beginnings of a portable parallel programming environment. A project of this scope inevitably imparts lessons about parallel computing, the specification being followed, the current hardware and software environment for parallel computing, and project management; we describe those we have learned. Finally, we discuss future developments for MPICH, including those necessary to accommodate extensions to the MPI Standard now being contemplated by the MPI Forum.", "Quantified constraint satisfaction problems have been the topic of an increasing number of studies for a few years. However, only sequential resolution algorithms have been proposed so far. This paper presents a parallel QCSP+ solving algorithm based on a problem-partition approach. It then discuss about work distribution policies and presents several experimental results comparing several parameters." ] }
1604.03632
2761100584
Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. We make three fundamental contributions to the study of procedures or mechanisms for peer selection, a specific type of group decision-making problem, studied in computer science, economics, and political science. First, we propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. Second, we demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison with a suite of mechanisms found in the literature. Finally, our mechanism employs a randomized rounding technique that is of independent interest, as it solves the apportionment problem that arises in various settings where discrete resources such as parliamentary representation slots need to be divided proportionally.
Peer review and peer selection are the cornerstone of modern science, and hence the quality, veracity, and accuracy of peer review and peer evaluation are a topic of interest across a broad set of disciplines. Most empirical evaluations of peer review and peer selection study the effectiveness and limits of the system, typically by assembling large corpora of peer-reviewed proposals and cross-examining them with new panels or review processes @cite_21 @cite_37 . Questions of bias, nepotism, sexism, and cronyism, among other issues, have received extensive coverage, and have been substantiated to varying degrees, in the literature. However, a consistent conclusion in this research is that, in order to decrease the role of chance and/or any systematic bias, the community needs to broaden the base of reviewers. Indeed, one way for the results of the review process to reflect the views of the entire scientific constituency and provide more value to the community is to increase the number of reviewers @cite_31 @cite_34 . The key scientific question lies in finding a mechanism that allows for crowdsourcing the work of reviewing, without compromising the incentives and quality of the peer review and selection process.
{ "cite_N": [ "@cite_37", "@cite_21", "@cite_31", "@cite_34" ], "mid": [ "2014437184", "2091375917", "", "2506952740" ], "abstract": [ "A random assignment is ordinally efficient if it is not stochastically dominated with respect to individual preferences over sure objects. Ordinal efficiency implies (is implied by) ex post (ex ante) efficiency. A simple algorithm characterizes ordinally efficient assignments: our solution, probabilistic serial (PS), is a central element within their set. Random priority (RP) orders agents from the uniform distribution, then lets them choose successively their best remaining object. RP is ex post, but not always ordinally, efficient. PS is envy-free, RP is not; RP is strategy-proof, PS is not. Ordinal efficiency, Strategyproofness, and equal treatment of equals are incompatible. Journal of Economic Literature Classification Numbers: C78, D61, D63.", "An experiment in which 150 proposals submitted to the National Science Foundation were evaluated independently by a new set of reviewers indicates that getting a research grant depends to a significant extent on chance. The degree of disagreement within the population of eligible reviewers is such that whether or not a proposal is funded depends in a large proportion of cases upon which reviewers happen to be selected for it. No evidence of systematic bias in the selection of NSF reviewers was found.", "", "Peer assessment is the most common approach to evaluating scientific work, and it is also gaining popularity for scaling evaluation of student work in large and distributed classes. The key idea is that each peer reviewer or grader rates a relatively small subset of the items, and that some method of manual, semi-automatic, or fully-automatic aggregation of all assessments defines the eventual rating of all items – the grade in peer grading, or whether to accept or reject a scientific manuscript. In this paper, we explore in how far a Bayesian Ordinal Peer Assessment (BOPA) method can provide additional decision support when making acceptance rejection decisions for a scientific conference. Using data from the 2015 ACM Conference on Knowledge Discovery and Data Mining (KDD), where this system was deployed, we discuss the potential merit of the BOPA approach compared to conventional decision support offered by the Microsoft Conference Management System (CMT)." ] }
1604.03632
2761100584
Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. We make three fundamental contributions to the study of procedures or mechanisms for peer selection, a specific type of group decision-making problem, studied in computer science, economics, and political science. First, we propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. Second, we demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison with a suite of mechanisms found in the literature. Finally, our mechanism employs a randomized rounding technique that is of independent interest, as it solves the apportionment problem that arises in various settings where discrete resources such as parliamentary representation slots need to be divided proportionally.
The criticism that prominent peer selection mechanisms, such as those under consideration by American and European funding bodies @cite_28 @cite_1 , are not strategyproof @cite_29 has underscored the need to devise mechanisms with better incentive properties. The literature most directly relevant to this article is a series of papers on strategyproof (impartial) selection @cite_19 @cite_15 . We survey these mechanisms in the next section. Most of the work on strategyproof peer selection focuses on the setting in which agents simply approve (nominate) a subset of agents @cite_15 @cite_25 @cite_2 @cite_19 , with the latter three of these restricting attention to the setting in which exactly one agent is selected ( @math ). An interesting strategyproof mechanism (Credible Subset) has also been proposed that performs well when each agent reviews a very small number of agents relative to the total number of agents. Other recent work focuses on tradeoffs between different axioms concerning peer selection.
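To see why strategyproofness is nontrivial here, consider a minimal sketch of naive peer selection by approval counts (the agents and votes are invented), in which an agent profits from withholding a truthful nomination of a rival:

```python
# Minimal sketch of why naive approval counting is manipulable: agent 1
# can change the winner in its own favor by dropping a truthful
# nomination of its rival, agent 0.
def top_k(nominations, k):
    score = {}
    for voter, noms in nominations.items():
        for a in noms:
            score[a] = score.get(a, 0) + 1
    return sorted(score, key=lambda a: (-score[a], a))[:k]

truthful = {0: {1}, 1: {0, 2}, 2: {0}, 3: {1}}
print(top_k(truthful, k=1))               # agent 0 wins the tie-break

manipulated = {**truthful, 1: {2}}        # agent 1 drops its vote for 0
print(top_k(manipulated, k=1))            # now agent 1 wins instead
```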
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_1", "@cite_19", "@cite_2", "@cite_15", "@cite_25" ], "mid": [ "2006437246", "2278055525", "2963570920", "1558752712", "2047179339", "2093375346", "181649221" ], "abstract": [ "The procedure that is currently employed to allocate time on telescopes is horribly onerous on those unfortunate astronomers who serve on the committees that administer the process, and is in danger of complete collapse as the number of applications steadily increases. Here, an alternative is presented, whereby the task is distributed around the astronomical community, with a suitable mechanism design established to steer the outcome toward awarding this precious resource to those projects where there is a consensus across the community that the science is most exciting and innovative.", "The National Science Foundation (NSF) will be exper- imenting with a new distributed approach to reviewing proposals, whereby a group of principal investigators (PIs) or proposers in a subfield act as reviewers for the proposals submitted by the same set of PIs. To encourage honesty, PIs chances for getting funded are tied to the quality of their reviews (with respect to the reviews provided by the entire group), in addition to the quality of their proposals. Intuitively, this approach can more fairly distribute the review workload, discourage frivolous proposal submission, and encourage high quality reviews. On the other hand, this method has already raised concerns about the integrity of the process and the possibility of strategic manipulation. In this paper, we take a closer look at three specific issues in an attempt to gai n a better understanding of the strengths and limitations of the new process beyond first impressions and anecdotal evidence . We start by considering the benefits and drawbacks of bundling the quality of PIs reviews with the scientific merit of their prop osals. We then consider the issue of collusion and favoritism. Finally, we examine whether the new process puts controversial proposals at a disadvantage. We conclude that some benefits of using revie w quality as an incentive mechanism may outweigh its drawbacks. On the other hand, even a coalition of two PIs can cause signifi cant harm to the process, as the built-in incentives are not strong enough to deter collusion. While we also confirm the common suspicion that the process is skewed toward non-controversial proposals, the more unexpected finding is that among equally controversial proposals, those of lower quality get a leg up through this process. Thus the process not only favors non-controversial proposals, but in some sense, mediocrity. We also discuss possible ways to improve this review process.", "", "A group of peers must choose one of them to receive a prize; everyone cares only about winning, not about who gets the prize if someone else. An award rule is impartial if one's message never influences whether or not one wins the prize. We explore the consequences of impartiality when each agent nominates a single (other) agent for the prize. @PARASPLIT On the positive side, we construct impartial nomination rules where both the influence of individual messages and the requirements to win the prize are not very different across agents. 
Partition the agents in two or more districts, each of size at least 3, and call an agent a local winner if he is nominated by a majority of members of his own district; the rule selects a local winner with the largest support from nonlocal winners, or a fixed default agent in case there is no local winner. @PARASPLIT On the negative side, impartiality implies that ballots cannot be processed anonymously as in plurality voting. Moreover, we cannot simultaneously guarantee that the winner always gets at least one nomination, and that an agent nominated by everyone else always wins.", "We study the problem of selecting a member of a set of agents based on impartial nominations by agents from that set. The problem was studied previously by and by Holzman and Moulin and has important applications in situations where representatives are selected from within a group or where publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. Subject to impartiality, this is best possible.", "We consider the special case of approval voting when the set of agents and the set of alternatives coincide. This captures situations in which the members of an organization want to elect a president or a committee from their ranks, as well as a variety of problems in networked environments, for example in internet search, social networks like Twitter, or reputation systems like Epinions. More precisely, we look at a setting where each member of a set of n agents approves or disapproves of any other member of the set and we want to select a subset of k agents, for a given value of k, in a strategyproof and approximately efficient way. Here, strategyproofness means that no agent can improve its own chances of being selected by changing the set of other agents it approves. A mechanism is said to provide an approximation ratio of α for some α ≥ 1 if the ratio between the sum of approval scores of any set of size k and that of the set selected by the mechanism is always at most α. We show that for k ∈ 1, 2,..., n − 1 , no deterministic strategyproof mechanism can provide a finite approximation ratio. We then present a randomized strategyproof mechanism that provides an approximation ratio that is bounded from above by four for any value of k, and approaches one as k grows.", "We examine strategy-proof elections to select a winner amongst a set of agents, each of whom cares only about winning. This impartial selection problem was introduced independently by Holzman and Moulin [5] and [1]. Fischer and Klimm [4] showed that the permutation mechanism is impartial and ( )-optimal, that is, it selects an agent who gains, in expectation, at least half the number of votes of the most popular agent. Furthermore, they showed the mechanism is ( 7 12 )-optimal if agents cannot abstain in the election. We show that a better guarantee is possible, provided the most popular agent receives at least a large enough, but constant, number of votes. Specifically, we prove that, for any e > 0, there is a constant N e (independent of the number n of voters) such that, if the maximum number of votes of the most popular agent is at least N e then the permutation mechanism is (( 3 4 - ) )-optimal. This result is tight." ] }
1604.03632
2761100584
Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. We make three fundamental contributions to the study of procedures or mechanisms for peer selection, a specific type of group decision-making problem, studied in computer science, economics, and political science. First, we propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. Second, we demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison with a suite of mechanisms found in the literature. Finally, our mechanism employs a randomized rounding technique that is of independent interest, as it solves the apportionment problem that arises in various settings where discrete resources such as parliamentary representation slots need to be divided proportionally.
Both works examine the selection problem in which agents simply approve (nominate) a subset of agents. Several of them restrict their attention to a setting in which exactly one agent is selected ( @math ). A "permutation" mechanism has also been presented that achieves the same bound as the Partition mechanisms (which divide the agents into groups) for @math . It has been shown that, for the peer selection problem, deterministic impartial mechanisms are extremely limited: they must sometimes select an agent with zero nominations even though other agents receive nominations, or an agent with one nomination when another agent receives @math nominations @cite_2 . Later work built on this to show that allowing a mechanism, in which agents simply approve of some subset of agents, to select fewer than @math agents makes it possible to guarantee bounds on the quality of the selected items---they are within about @math from the optimal selection. A more general strategyproof mechanism, Credible Subset, performs well when each agent reviews a few other agents and this number is considerably smaller than @math , but it may, with non-zero probability, select no agent at all.
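A minimal sketch of the partition idea just described (illustrative only, not a faithful implementation of any one published mechanism):

```python
# Partition-style impartial selection (assumes n >= k): each group's
# winner is scored only by nominations cast from OUTSIDE the group.
import random

def partition_select(nominations, n, k, seed=0):
    """nominations: dict agent -> set of nominated agents; returns k winners."""
    rng = random.Random(seed)
    agents = list(range(n))
    rng.shuffle(agents)
    groups = [agents[i::k] for i in range(k)]     # k random groups
    winners = []
    for group in groups:
        members = set(group)
        score = {a: sum(1 for voter, noms in nominations.items()
                        if voter not in members and a in noms)
                 for a in group}
        winners.append(max(group, key=lambda a: (score[a], -a)))  # break ties
    return winners

# Tiny example: 6 agents; everyone nominates agent 0 and their successor.
noms = {v: {0, (v + 1) % 6} - {v} for v in range(6)}
print(partition_select(noms, n=6, k=2))
```

Impartiality follows because a voter's report is ignored when scoring the voter's own group, so no agent can influence whether it is itself selected.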
{ "cite_N": [ "@cite_2" ], "mid": [ "2047179339" ], "abstract": [ "We study the problem of selecting a member of a set of agents based on impartial nominations by agents from that set. The problem was studied previously by and by Holzman and Moulin and has important applications in situations where representatives are selected from within a group or where publishing or funding decisions are made based on a process of peer review. Our main result concerns a randomized mechanism that in expectation selects an agent with at least half the maximum number of nominations. Subject to impartiality, this is best possible." ] }
1604.03632
2761100584
Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. We make three fundamental contributions to the study of procedures or mechanisms for peer selection, a specific type of group decision-making problem, studied in computer science, economics, and political science. First, we propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. Second, we demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison with a suite of mechanisms found in the literature. Finally, our mechanism employs a randomized rounding technique that is of independent interest, as it solves the apportionment problem that arises in various settings where discrete resources such as parliamentary representation slots need to be divided proportionally.
The peer selection problem is closely related to peer-based grading and marking @cite_35 @cite_34 @cite_38 @cite_36 @cite_0 @cite_33 @cite_10 , especially when students are graded based on percentile scores. For peer grading, mechanisms have been proposed that make a student's grade slightly dependent on the student's grading accuracy. However, such mechanisms are not strategyproof, as one may alter one's reviews to obtain a better personal grade.
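A minimal sketch of the shared reputation idea (in the spirit of CrowdGrader, but not its actual algorithm; the grades and constants are invented): consensus grades are reputation-weighted averages of the peer grades, and a grader's reputation shrinks with disagreement from the consensus, which is precisely the coupling that invites strategic reviewing.

```python
# Minimal sketch of reputation-weighted peer grading: alternate between
# estimating consensus grades and each grader's reputation.
# NaN marks "this grader did not grade this item".
import numpy as np

def consensus_grades(G, iters=20, eps=1e-3):
    """G: (n_graders, n_items) array of grades, NaN where absent."""
    graded = ~np.isnan(G)
    rep = np.ones(G.shape[0])                    # one reputation per grader
    for _ in range(iters):
        w = graded * rep[:, None]                # weight of each grade
        cons = (np.where(graded, G, 0.0) * rep[:, None]).sum(0) / w.sum(0)
        err = np.where(graded, np.abs(G - cons), np.nan)
        rep = 1.0 / (eps + np.nanmean(err, axis=1))  # accurate => heavier
    return cons, rep

G = np.array([[8.0, 7.0, np.nan],
              [8.0, 7.0, 9.0],
              [2.0, 1.0, 9.0]])   # grader 2 is far off on the first items
cons, rep = consensus_grades(G)
print(cons.round(2), rep.round(2))
```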
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_33", "@cite_36", "@cite_0", "@cite_34", "@cite_10" ], "mid": [ "2120810064", "2088700588", "1925706653", "", "2318131249", "2506952740", "1969883013" ], "abstract": [ "Crowdsourcing offers a practical method for ranking and scoring large amounts of items. To investigate the algorithms and incentives that can be used in crowdsourcing quality evaluations, we built CrowdGrader, a tool that lets students submit and collaboratively grade solutions to homework assignments. We present the algorithms and techniques used in CrowdGrader, and we describe our results and experience in using the tool for several computer-science assignments. CrowdGrader combines the student-provided grades into a consensus grade for each submission using a novel crowdsourcing algorithm that relies on a reputation system. The algorithm iterativerly refines inter-dependent estimates of the consensus grades, and of the grading accuracy of each student. On synthetic data, the algorithm performs better than alternatives not based on reputation. On our preliminary experimental data, the performance seems dependent on the nature of review errors, with errors that can be ascribed to the reviewer being more tractable than those arising from random external events. To provide an incentive for reviewers, the grade each student receives in an assignment is a combination of the consensus grade received by their submissions, and of a reviewing grade capturing their reviewing effort and accuracy. This incentive worked well in practice.", "Peer and self-assessment offer an opportunity to scale both assessment and learning to global classrooms. This article reports our experiences with two iterations of the first large online class to use peer and self-assessment. In this class, peer grades correlated highly with staff-assigned grades. The second iteration had 42.9p of students’ grades within 5p of the staff grade, and 65.5p within 10p. On average, students assessed their work 7p higher than staff did. Students also rated peers’ work from their own country 3.6p higher than those from elsewhere. We performed three experiments to improve grading accuracy. We found that giving students feedback about their grading bias increased subsequent accuracy. We introduce short, customizable feedback snippets that cover common issues with assignments, providing students more qualitative peer feedback. Finally, we introduce a data-driven approach that highlights high-variance items for improvement. We find that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance. After revising rubrics, median grading error decreased from 12.4p to 9.9p.", "We propose the PeerRank method for peer assessment. This constructs a grade for an agent based on the grades proposed by the agents evaluating the agent. Since the grade of an agent is a measure of their ability to grade correctly, the PeerRank method weights grades by the grades of the grading agent. The PeerRank method also provides an incentive for agents to grade correctly. As the grades of an agent depend on the grades of the grading agents, and as these grades themselves depend on the grades of other agents, we define the PeerRank method by a fixed point equation similar to the PageRank method for ranking web-pages. 
We identify some formal properties of the PeerRank method (for example, it satisfies axioms of unanimity, no dummy, no discrimination and symmetry), discuss some examples, compare with related work and evaluate the performance on some synthetic data. Our results show considerable promise, reducing the error in grade predictions by a factor of 2 or more in many cases over the natural baseline of averaging peer grades.", "", "", "Peer assessment is the most common approach to evaluating scientific work, and it is also gaining popularity for scaling evaluation of student work in large and distributed classes. The key idea is that each peer reviewer or grader rates a relatively small subset of the items, and that some method of manual, semi-automatic, or fully-automatic aggregation of all assessments defines the eventual rating of all items – the grade in peer grading, or whether to accept or reject a scientific manuscript. In this paper, we explore in how far a Bayesian Ordinal Peer Assessment (BOPA) method can provide additional decision support when making acceptance rejection decisions for a scientific conference. Using data from the 2015 ACM Conference on Knowledge Discovery and Data Mining (KDD), where this system was deployed, we discuss the potential merit of the BOPA approach compared to conventional decision support offered by the Microsoft Conference Management System (CMT).", "We describe Mechanical TA, an automated peer review system, and report on our experience using it over three years. Mechanical TA differs from many other peer review systems by involving human teaching assistants (TAs) as a way to assure review quality. Human TAs both evaluate the peer reviews of students who have not yet demonstrated reviewing proficiency and spot check the reviews of students who have. Mechanical TA also features \"calibration\" reviews, allowing students to quickly gain experience with the peer-review process. We used Mechanical TA for weekly essay assignments in a class of about 70 students, a course design that would have been impossible if every assignment had had to be graded by a TA. We show evidence that it helped to support student learning, leading us to believe that the system may also be useful to others." ] }
1604.03901
2339763956
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect @cite_25 @cite_5 @cite_1 @cite_30 or LIDAR @cite_39 @cite_32 . Existing Kinect-based datasets are limited to indoor scenes; existing LIDAR-based datasets are biased towards scenes of man-made structures @cite_39 @cite_32 . In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virtually unlimited supply of Internet images.
{ "cite_N": [ "@cite_30", "@cite_1", "@cite_32", "@cite_39", "@cite_5", "@cite_25" ], "mid": [ "2253156915", "", "2132947399", "2115579991", "125693051", "" ], "abstract": [ "We have created a dataset of more than ten thousand 3D scans of real objects. To create the dataset, we recruited 70 operators, equipped them with consumer-grade mobile 3D scanning setups, and paid them to scan objects in their environments. The operators scanned objects of their choosing, outside the laboratory and without direct supervision by computer vision professionals. The result is a large and diverse collection of object scans: from shoes, mugs, and toys to grand pianos, construction vehicles, and large outdoor sculptures. We worked with an attorney to ensure that data acquisition did not violate privacy constraints. The acquired data was placed irrevocably in the public domain and is available freely at http: redwood-data.org 3dscan.", "", "We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov random field (MRF) to infer a set of \"plane parametersrdquo that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant nonvertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9 percent of 588 images downloaded from the Internet. We have also extended our model to produce large-scale 3D models from a few images.", "We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. 
We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "" ] }
1604.03901
2339763956
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of literature @cite_0 @cite_29 @cite_15 @cite_36 @cite_9 @cite_7 @cite_40 @cite_21 @cite_4 @cite_0 @cite_11 @cite_18 @cite_26 @cite_23 @cite_33 @cite_17 . The recent convergence of deep neural networks and RGB-D datasets @cite_5 @cite_39 has led to major advances @cite_38 @cite_9 @cite_27 @cite_40 @cite_4 @cite_10 . But the networks in these previous works, with the exception of @cite_10 , were trained exclusively using ground-truth metric depth, whereas our approach uses relative depth.
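A minimal numpy sketch of the kind of pairwise ranking loss that makes training with relative-depth annotations possible (the specific formulation here, a logistic loss for ordered pairs and a squared loss for ties, is a common choice and is meant only as illustration; the point coordinates and relations are invented):

```python
# Pairwise ranking loss for ordinal depth supervision: an annotation
# (i1, j1, i2, j2, r) says the first point is farther (r=+1), closer
# (r=-1), or at roughly the same depth (r=0) as the second point.
import numpy as np

def relative_depth_loss(z, pairs):
    """z: predicted depth map (H, W); pairs: ordinal annotations."""
    total = 0.0
    for i1, j1, i2, j2, r in pairs:
        d = z[i1, j1] - z[i2, j2]
        if r == 0:
            total += d ** 2                    # ties: penalize any gap
        else:
            total += np.log1p(np.exp(-r * d))  # logistic ranking loss
    return total / len(pairs)

z = np.array([[1.0, 2.0],
              [3.0, 4.0]])
pairs = [(0, 0, 1, 1, -1),   # point (0,0) annotated closer than (1,1)
         (0, 1, 1, 0,  0)]   # roughly equal depth
print(relative_depth_loss(z, pairs))
```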
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_5", "@cite_15", "@cite_10", "@cite_38", "@cite_18", "@cite_4", "@cite_21", "@cite_39", "@cite_23", "@cite_17", "@cite_26", "@cite_7", "@cite_40", "@cite_27", "@cite_33", "@cite_9", "@cite_0", "@cite_11" ], "mid": [ "2074254947", "2139905387", "125693051", "2158211626", "2221366145", "260801291", "", "2124907686", "2949447631", "2115579991", "2245606284", "", "2026203852", "1992178727", "2951713345", "", "1999671454", "", "", "" ], "abstract": [ "We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.", "We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.", "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.", "We consider the task of depth estimation from a single monocular image. 
We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps.", "We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this frame-work works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results.", "In this paper we tackle the problem of instance-level segmentation and depth ordering from a single monocular image. Towards this goal, we take advantage of convolutional neural nets and train them to directly predict instance-level segmentations where the instance ID encodes the depth ordering within image patches. To provide a coherent single explanation of an image we develop a Markov random field which takes as input the predictions of convolutional neural nets applied at overlapping patches of different resolutions, as well as the output of a connected component algorithm. It aims to predict accurate instance-level segmentation and depth ordering. We demonstrate the effectiveness of our approach on the challenging KITTI benchmark and show good performance on both tasks.", "", "Predicting the depth (or surface normal) of a scene from single monocular color images is a challenging task. This paper tackles this challenging and essentially underdetermined problem by regression on deep convolutional neural network (DCNN) features, combined with a post-processing refining step using conditional random fields (CRF). Our framework works at two levels, super-pixel level and pixel level. First, we design a DCNN model to learn the mapping from multi-scale image patches to depth or surface normal values at the super-pixel level. 
Second, the estimated super-pixel depth or surface normal is refined to the pixel level by exploiting various potentials on the depth or surface normal map, which includes a data term, a smoothness term among super-pixels and an auto-regression term characterizing the local structure of the estimation map. The inference problem can be efficiently solved because it admits a closed-form solution. Experiments on the Make3D and NYU Depth V2 datasets show competitive results compared with recent state-of-the-art methods.", "In this paper we propose a method for estimating depth from a single image using a coarse to fine approach. We argue that modeling the fine depth details is easier after a coarse depth map has been computed. We express a global (coarse) depth map of an image as a linear combination of a depth basis learned from training examples. The depth basis captures spatial and statistical regularities and reduces the problem of global depth estimation to the task of predicting the input-specific coefficients in the linear combination. This is formulated as a regression problem from a holistic representation of the image. Crucially, the depth basis and the regression function are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct regression of depth pixel values or approaches learning the depth basis disjointly from the regression function. The global depth estimate is then used as a guidance by a local refinement method that introduces depth details that were not captured at the global level. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art at a considerably lower computational cost for both training and testing.", "We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.", "Intrinsic image decomposition factorizes an observed image into its physical causes. This is most commonly framed as a decomposition into reflectance and shading, although recent progress has made full decompositions into shape, illumination, reflectance, and shading possible. However, existing factorization approaches require depth sensing to initialize the optimization of scene intrinsics. Rather than relying on depth sensors, we show that depth estimated purely from monocular appearance can provide sufficient cues for intrinsic image analysis. Our full intrinsic pipeline regresses depth by a fully convolutional network then jointly optimizes the intrinsic factorization to recover the input image. 
This combination yields full decompositions by uniting feature learning through deep network regression with physical modeling through statistical priors and random field regularization. This work demonstrates the first pipeline for full intrinsic decomposition of scenes from a single color image input alone.", "", "We consider the problem of estimating the depth of each pixel in a scene from a single monocular image. Unlike traditional approaches [18, 19], which attempt to map from appearance features to depth directly, we first perform a semantic segmentation of the scene and use the semantic labels to guide the 3D reconstruction. This approach provides several advantages: By knowing the semantic class of a pixel or region, depth and geometry constraints can be easily enforced (e.g., “sky” is far away and “ground” is horizontal). In addition, depth can be more readily predicted by measuring the difference in appearance with respect to a given semantic class. For example, a tree will have more uniform appearance in the distance than it does close up. Finally, the incorporation of semantic features allows us to achieve state-of-the-art results with a significantly simpler model than previous works.", "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.", "", "Photos compress 3D visual data to 2D. However, it is still possible to infer depth information even without sophisticated object learning. 
We propose a solution based on small-scale defocus blur inherent in optical lens and tackle the estimation problem by proposing a non-parametric matching scheme for natural images. It incorporates a matching prior with our newly constructed edgelet dataset using a non-local scheme, and includes semantic depth order cues for physically based inference. Several applications are enabled on natural images, including geometry based rendering and editing.", "", "", "" ] }
1604.03901
2339763956
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
Our work is inspired by that of @cite_10 , which proposes to use a deep network to repeatedly classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric depth by solving an additional optimization problem. Our approach is different: it consists of a single deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate classification of ordinal relations and as a result no optimization needed to resolve inconsistencies.
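To make the contrast concrete, the per-pixel metric depth output z can be supervised directly from relative-depth annotations with a ranking-style loss over the annotated point pairs. The sketch below is one plausible formulation under the sign convention stated in its comments; the exact loss used in the paper may differ in its details.

```latex
% Hedged sketch: supervising per-pixel metric depth z from relative-depth pairs.
% For the k-th annotated pair (i_k, j_k), let r_k = +1 if i_k is labeled farther,
% r_k = -1 if i_k is labeled closer, and r_k = 0 if the two points are judged to
% be at roughly the same depth. A ranking-style loss is
\mathcal{L}(z) \;=\; \sum_{k} \psi_k, \qquad
\psi_k \;=\;
\begin{cases}
  \log\!\bigl(1 + \exp\bigl(-\,r_k\,(z_{i_k} - z_{j_k})\bigr)\bigr), & r_k \in \{-1,+1\},\\[4pt]
  \bigl(z_{i_k} - z_{j_k}\bigr)^{2}, & r_k = 0,
\end{cases}
% so the network is trained end-to-end by minimizing L, with no intermediate
% ordinal classification and no post-hoc optimization to resolve inconsistencies.
```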
{ "cite_N": [ "@cite_10" ], "mid": [ "2221366145" ], "abstract": [ "We propose a framework that infers mid-level visual properties of an image by learning about ordinal relationships. Instead of estimating metric quantities directly, the system proposes pairwise relationship estimates for points in the input image. These sparse probabilistic ordinal measurements are globalized to create a dense output map of continuous metric measurements. Estimating order relationships between pairs of points has several advantages over metric estimation: it solves a simpler problem than metric regression, humans are better at relative judgements, so data collection is easier, ordinal relationships are invariant to monotonic transformations of the data, thereby increasing the robustness of the system and providing qualitatively different information. We demonstrate that this frame-work works well on two important mid-level vision tasks: intrinsic image decomposition and depth from an RGB image. We train two systems with the same architecture on data from these two modalities. We provide an analysis of the resulting models, showing that they learn a number of simple rules to make ordinal decisions. We apply our algorithm to depth estimation, with good results, and intrinsic image decomposition, with state-of-the-art results." ] }
1604.03901
2339763956
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset "Depth in the Wild" consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine learning, including object recognition @cite_16 and learning to rank @cite_19 @cite_8 .
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_8" ], "mid": [ "2108862644", "", "2047221353" ], "abstract": [ "The paper is concerned with learning to rank, which is to construct a model or a function for ranking objects. Learning to rank is useful for document retrieval, collaborative filtering, and many other applications. Several methods for learning to rank have been proposed, which take object pairs as 'instances' in learning. We refer to them as the pairwise approach in this paper. Although the pairwise approach offers advantages, it ignores the fact that ranking is a prediction task on list of objects. The paper postulates that learning to rank should adopt the listwise approach in which lists of objects are used as 'instances' in learning. The paper proposes a new probabilistic method for the approach. Specifically it introduces two probability models, respectively referred to as permutation probability and top k probability, to define a listwise loss function for learning. Neural Network and Gradient Descent are then employed as model and algorithm in the learning method. Experimental results on information retrieval show that the proposed listwise approach performs better than the pairwise approach.", "", "This paper presents an approach to automatically optimizing the retrieval quality of search engines using clickthrough data. Intuitively, a good information retrieval system should present relevant documents high in the ranking, with less relevant documents following below. While previous approaches to learning retrieval functions from examples exist, they typically require training data generated from relevance judgments by experts. This makes them difficult and expensive to apply. The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking. Such clickthrough data is available in abundance and can be recorded at very low cost. Taking a Support Vector Machine (SVM) approach, this paper presents a method for learning retrieval functions. From a theoretical perspective, this method is shown to be well-founded in a risk minimization framework. Furthermore, it is shown to be feasible even for large sets of queries and features. The theoretical results are verified in a controlled experiment. It shows that the method can effectively adapt the retrieval function of a meta-search engine to a particular group of users, outperforming Google in terms of retrieval quality after only a couple of hundred training examples." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
* Type Systems for Ruby. We have developed several prior type systems for Ruby. Diamondback Ruby (DRuby) @cite_35 is the first comprehensive type system for Ruby that we are aware of. Because Hummingbird checks types at run-time, we opted to implement our own type checker rather than reuse DRuby for type checking, which would have required some awkward shuffling of the type table between Ruby and OCaml. Another reason to reimplement type checking was to keep the type system a little easier to understand---DRuby performs type inference, which is quite complex for this type language, in contrast to Hummingbird, which implements much simpler type checking.
{ "cite_N": [ "@cite_35" ], "mid": [ "2135536553" ], "abstract": [ "Many general-purpose, object-oriented scripting languages are dynamically typed, which provides flexibility but leaves the programmer without the benefits of static typing, including early error detection and the documentation provided by type annotations. This paper describes Diamondback Ruby (DRuby), a tool that blends Ruby's dynamic type system with a static typing discipline. DRuby provides a type language that is rich enough to precisely type Ruby code we have encountered, without unneeded complexity. When possible, DRuby infers static types to discover type errors in Ruby programs. When necessary, the programmer can provide DRuby with annotations that assign static types to dynamic code. These annotations are checked at run time, isolating type errors to unverified code. We applied DRuby to a suite of benchmarks and found several bugs that would cause run-time type errors. DRuby also reported a number of warnings that reveal questionable programming practices in the benchmarks. We believe that DRuby takes a major step toward bringing the benefits of combined static and dynamic typing to Ruby and other object-oriented languages." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
DRuby was effective but did not handle highly dynamic language constructs well. PRuby @cite_25 solves this problem using profile-based type inference. To use PRuby, the developer runs the program once to record dynamic behavior, e.g., what methods are invoked via send, what strings are passed to eval, etc.; PRuby then applies DRuby to the original program text plus the profiled strings, e.g., any string that was passed to eval is parsed and analyzed like any other code. While PRuby can be effective, we think that Hummingbird's approach is ultimately more practical because Hummingbird does not require a separate, potentially cumbersome, profiling phase. We note that Hummingbird does not currently handle eval, because it was not used in our subject apps' code, but it could be supported in a straightforward way.
{ "cite_N": [ "@cite_25" ], "mid": [ "2122592814" ], "abstract": [ "Many popular scripting languages such as Ruby, Python, and Perl include highly dynamic language constructs, such as an eval method that evaluates a string as program text. While these constructs allow terse and expressive code, they have traditionally obstructed static analysis. In this paper we present PRuby, an extension to Diamondback Ruby (DRuby), a static type inference system for Ruby. PRuby augments DRuby with a novel dynamic analysis and transformation that allows us to precisely type uses of highly dynamic constructs. PRuby's analysis proceeds in three steps. First, we use run-time instrumentation to gather per-application profiles of dynamic feature usage. Next, we replace dynamic features with statically analyzable alternatives based on the profile. We also add instrumentation to safely handle cases when subsequent runs do not match the profile. Finally, we run DRuby's static type inference on the transformed code to enforce type safety. We used PRuby to gather profiles for a benchmark suite of sample Ruby programs. We found that dynamic features are pervasive throughout the benchmarks and the libraries they include, but that most uses of these features are highly constrained and hence can be effectively profiled. Using the profiles to guide type inference, we found that DRuby can generally statically type our benchmarks modulo some refactoring, and we discovered several previously unknown type errors. These results suggest that profiling and transformation is a lightweight but highly effective approach to bring static typing to highly dynamic languages." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
We also developed DRails @cite_11 , which type checks Rails apps by applying DRuby to translated Rails code. For example, if DRails sees a call to belongs_to , it outputs Ruby code that explicitly contains the methods generated from the call, which DRuby can then analyze. While DRails was applied to a range of programs, its analysis is quite brittle. Supporting each additional Rails feature in DRails requires implementing, in OCaml, a source-to-source transformation that mimics that feature. This is a huge effort and is hard to sustain as Rails evolves. In contrast, Hummingbird's type information is generated in Ruby, which is far easier. DRails is also complex to use: the program is combined into one file, then run to gather profile information, then transformed and type checked. Using Hummingbird is far simpler. Finally, DRails is Rails-specific, whereas Hummingbird applies readily to other Ruby frameworks. Due to all these issues, we feel Hummingbird is much more lightweight, agile, scalable, and maintainable than DRails.
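To illustrate the kind of translation involved, the Ruby sketch below writes out, in greatly simplified form, the accessor methods that a Rails belongs_to declaration implicitly defines, so that a static checker such as DRuby can see them as ordinary source. This is a hypothetical illustration, not actual DRails output: the real Rails-generated methods are considerably more involved, and User is assumed to be the associated ActiveRecord model.

```ruby
# Hypothetical illustration of the translation idea (not actual DRails output).
# A Rails model declared as:
#
#   class Post < ActiveRecord::Base
#     belongs_to :user
#   end
#
# implicitly gains association methods such as user, user=, and build_user.
# Written out explicitly (and greatly simplified), the class looks roughly like:
class Post
  attr_accessor :user_id

  def user                    # reader for the associated User record
    User.find_by(id: user_id)
  end

  def user=(u)                # writer for the association
    @user_id = u.id
  end

  def build_user(attrs = {})  # build an unsaved associated User
    User.new(attrs)
  end
end
```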
{ "cite_N": [ "@cite_11" ], "mid": [ "2081212007" ], "abstract": [ "Ruby on Rails (or just \"Rails\") is a popular web application framework built on top of Ruby, an object-oriented scripting language. While Ruby’s powerful features such as dynamic typing help make Rails development extremely lightweight, this comes at a cost. Dynamic typing in particular means that type errors in Rails applications remain latent until run time, making debugging and maintenance harder. In this paper, we describe DRails, a novel tool that brings static typing to Rails applications to detect a range of run time errors. DRails works by translating Rails programs into pure Ruby code in which Rails’s numerous implicit conventions are made explicit. We then discover type errors by applying DRuby, a previously developed static type inference system, to the translated program. We ran DRails on a suite of applications and found that it was able to detect several previously unknown errors." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
Finally, RubyDust @cite_0 implements type inference for Ruby at run time. RubyDust works by wrapping objects to annotate them with type variables. More precisely, consider a method def m(x) ... end, and let τ_x be the type variable for x. RubyDust's wrapping is roughly equivalent to adding x = Wrap.new(x, τ_x) to the beginning of m. Uses of the wrapped x generate type constraints on τ_x and then delegate to the underlying object. The Ruby Type Checker @cite_23 (rtc) is similar but implements type checking rather than type inference.
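A minimal Ruby sketch of the wrapping idea follows. It is not RubyDust's actual implementation; the TypeVar class and the add_constraint method are invented names used only for illustration. The wrapper records a constraint on the type variable each time the wrapped value is used and then delegates to the underlying object.

```ruby
# Sketch of constraint-recording wrappers (illustrative only, not RubyDust).
class TypeVar
  attr_reader :name, :constraints
  def initialize(name)
    @name = name
    @constraints = []          # observed (method, arity) uses of the variable
  end
  def add_constraint(method_name, arity)
    @constraints << [method_name, arity]
  end
end

class Wrap < BasicObject
  def initialize(obj, tvar)
    @obj  = obj
    @tvar = tvar
  end
  def method_missing(name, *args, &blk)
    @tvar.add_constraint(name, args.length)  # "x must respond to name" constraint
    @obj.send(name, *args, &blk)             # then delegate to the wrapped object
  end
end

# Roughly what instrumenting "def m(x) ... end" amounts to:
tv = TypeVar.new(:x)
x  = Wrap.new("hello", tv)
x.upcase                       # delegates to the String; records [:upcase, 0]
tv.constraints                 # => [[:upcase, 0]]
```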
{ "cite_N": [ "@cite_0", "@cite_23" ], "mid": [ "2113888205", "2156351739" ], "abstract": [ "There have been several efforts to bring static type inference to object-oriented dynamic languages such as Ruby, Python, and Perl. In our experience, however, such type inference systems are extremely difficult to develop, because dynamic languages are typically complex, poorly specified, and include features, such as eval and reflection, that are hard to analyze. In this paper, we introduce constraint-based dynamic type inference, a technique that infers static types based on dynamic program executions. In our approach, we wrap each run-time value to associate it with a type variable, and the wrapper generates constraints on this type variable when the wrapped value is used. This technique avoids many of the often overly conservative approximations of static tools, as constraints are generated based on how values are used during actual program runs. Using wrappers is also easy to implement, since we need only write a constraint resolution algorithm and a transformation to introduce the wrappers. The best part is that we can eat our cake, too: our algorithm will infer sound types as long as it observes every path through each method body---note that the number of such paths may be dramatically smaller than the number of paths through the program as a whole. We have developed Rubydust, an implementation of our algorithm for Ruby. Rubydust takes advantage of Ruby's dynamic features to implement wrappers as a language library. We applied Rubydust to a number of small programs and found it to be both easy to use and useful: Rubydust discovered 1 real type error, and all other inferred types were correct and readable.", "We present the Ruby Type Checker (rtc), a tool that adds type checking to Ruby, an object-oriented, dynamic scripting language. Rtc is implemented as a Ruby library in which all type checking occurs at run time; thus it checks types later than a purely static system, but earlier than a traditional dynamic type system. Rtc supports type annotations on classes, methods, and objects and rtc provides a rich type language that includes union and intersection types, higherorder (block) types, and parametric polymorphism among other features. Rtc is designed so programmers can control exactly where type checking occurs: type-annotated objects serve as the \"roots\" of the type checking process, and unannotated objects are not type checked. We have applied rtc to several programs and found it to be easy to use and effective at checking types." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
* Type Systems for Other Dynamic Languages. Many researchers have proposed type systems for dynamic languages, including Python @cite_39 , JavaScript @cite_5 @cite_30 @cite_14 , Racket @cite_28 @cite_19 @cite_1 , and Lua @cite_21 , or developed new dynamic languages or dialects with special type systems, such as Thorn @cite_7 , TypeScript @cite_20 @cite_24 , and Dart @cite_8 . To our knowledge, these type systems are focused on checking the core language and can have difficulty in the face of metaprogramming.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_7", "@cite_8", "@cite_28", "@cite_21", "@cite_1", "@cite_39", "@cite_24", "@cite_19", "@cite_5", "@cite_20" ], "mid": [ "2156249516", "", "2134720688", "", "1973186567", "2061808084", "2096368963", "", "2091335641", "1856452982", "2129345992", "" ], "abstract": [ "JavaScript is a popular language for client-side web scripting. It has a dubious reputation among programmers for two reasons. First, many JavaScript programs are written against a rapidly evolving API whose implementations are sometimes contradictory and idiosyncratic. Second, the language is only weakly typed and comes virtually without development tools. The present work is a first attempt to address the second point. It does so by defining a type system that tracks the possible traits of an object and flags suspicious type conversions. Because JavaScript is a classless, object-based language with first-class functions, the type system must include singleton types, subtyping, and first class record labels. The type system covers a representative subset of the language and there is a type soundness proof with respect to an operational semantics.", "", "Scripting languages enjoy great popularity due to their support for rapid and exploratory development. They typically have lightweight syntax, weak data privacy, dynamic typing, powerful aggregate data types, and allow execution of the completed parts of incomplete programs. The price of these features comes later in the software life cycle. Scripts are hard to evolve and compose, and often slow. An additional weakness of most scripting languages is lack of support for concurrency - though concurrency is required for scalability and interacting with remote services. This paper reports on the design and implementation of Thorn, a novel programming language targeting the JVM. Our principal contributions are a careful selection of features that support the evolution of scripts into industrial grade programs - e.g., an expressive module system, an optional type annotation facility for declarations, and support for concurrency based on message passing between lightweight, isolated processes. On the implementation side, Thorn has been designed to accommodate the evolution of the language itself through a compiler plugin mechanism and target the Java virtual machine.", "", "When scripts in untyped languages grow into large programs, maintaining them becomes difficult. A lack of types in typical scripting languages means that programmers must (re)discover critical pieces of design information every time they wish to change a program. This analysis step both slows down the maintenance process and may even introduce mistakes due to the violation of undiscovered invariants. This paper presents Typed Scheme, an explicitly typed extension of an untyped scripting language. Its type system is based on the novel notion of occurrence typing, which we formalize and mechanically prove sound. The implementation of Typed Scheme additionally borrows elements from a range of approaches, including recursive types, true unions and subtyping, plus polymorphism combined with a modicum of local inference. 
Initial experiments with the implementation suggest that Typed Scheme naturally accommodates the programming style of the underlying scripting language, at least for the first few thousand lines of ported code.", "Dynamically typed languages trade flexibility and ease of use for safety, while statically typed languages prioritize the early detection of bugs, and provide a better framework for structure large programs. The idea of optional typing is to combine the two approaches in the same language: the programmer can begin development with dynamic types, and migrate to static types as the program matures. The challenge is designing a type system that feels natural to the programmer that is used to programming in a dynamic language. This paper presents the initial design of Typed Lua, an optionally-typed extension of the Lua scripting language. Lua is an imperative scripting language with first class functions and lightweight metaprogramming mechanisms. The design of Typed Lua's type system has a novel combination of features that preserves some of the idioms that Lua programmers are used to, while bringing static type safety to them. We show how the major features of the type system type these idioms with some examples, and discuss some of the design issues we faced.", "Programmers reason about their programs using a wide variety of formal and informal methods. Programmers in untyped languages such as Scheme or Erlang are able to use any such method to reason about the type behavior of their programs. Our type system for Scheme accommodates common reasoning methods by assigning variable occurrences a subtype of their declared type based on the predicates prior to the occurrence, a discipline dubbed occurrence typing. It thus enables programmers to enrich existing Scheme code with types, while requiring few changes to the code itself. Three years of practical experience has revealed serious shortcomings of our type system. In particular, it relied on a system of ad-hoc rules to relate combinations of predicates, it could not reason about subcomponents of data structures, and it could not follow sophisticated reasoning about the relationship among predicate tests, all of which are used in existing code. In this paper, we reformulate occurrence typing to eliminate these shortcomings. The new formulation derives propositional logic formulas that hold when an expression evaluates to true or false, respectively. A simple proof system is then used to determine types of variable occurrences from these propositions. Our implementation of this revised occurrence type system thus copes with many more untyped programming idioms than the original system.", "", "Current proposals for adding gradual typing to JavaScript, such as Closure, TypeScript and Dart, forgo soundness to deal with issues of scale, code reuse, and popular programming patterns. We show how to address these issues in practice while retaining soundness. We design and implement a new gradual type system, prototyped for expediency as a 'Safe' compilation mode for TypeScript. Our compiler achieves soundness by enforcing stricter static checks and embedding residual runtime checks in compiled code. It emits plain JavaScript that runs on stock virtual machines. Our main theorem is a simulation that ensures that the checks introduced by Safe TypeScript (1) catch any dynamic type error, and (2) do not alter the semantics of type-safe TypeScript code. 
Safe TypeScript is carefully designed to minimize the performance overhead of runtime checks. At its core, we rely on two new ideas: differential subtyping, a new form of coercive subtyping that computes the minimum amount of runtime type information that must be added to each object; and an erasure modality, which we use to safely and selectively erase type information. This allows us to scale our design to full-fledged TypeScript, including arrays, maps, classes, inheritance, overloading, and generic types. We validate the usability and performance of Safe TypeScript empirically by type-checking and compiling around 120,000 lines of existing TypeScript source code. Although runtime checks can be expensive, the end-to-end overhead is small for code bases that already have type annotations. For instance, we bootstrap the Safe TypeScript compiler (90,000 lines including the base TypeScript compiler): we measure a 15 runtime overhead for type safety, and also uncover programming errors as type safety violations. We conclude that, at least during development and testing, subjecting JavaScript TypeScript programs to safe gradual typing adds significant value to source type annotations at a modest cost.", "In the past, the creators of numerical programs had to choose between simple expression of mathematical formulas and static type checking. While the Lisp family and its dynamically typed relatives support the straightforward expression via a rich numeric tower, existing statically typed languages force programmers to pollute textbook formulas with explicit coercions or unwieldy notation. In this paper, we demonstrate how the type system of Typed Racket accommodates both a textbook programming style and expressive static checking. The type system provides a hierarchy of numeric types that can be freely mixed as well as precise specifications of sign, representation, and range information--all while supporting generic operations. In addition, the type system provides information to the compiler so that it can perform standard numeric optimizations.", "Object-oriented scripting languages like Javascript and Python are popular partly because of their dynamic features. These include the runtime modification of objects and classes through addition of fields or updating of methods. These features make static typing difficult and so usually dynamic typing is used. Consequently, errors such as access to non-existent members are not detected until runtime. We first develop a formalism for an object based language, JS0 with features from Javascript, including dynamic addition of fields and updating of methods. We give an operational semantics and static type system for JS0using structural types. Our types allow objects to evolve in a controlled manner by classifying members as definite or potential. We define a type inference algorithm for JS0 that is sound with respect to the type system. If the type inference algorithm succeeds, then the program is typeable. Therefore, programmers can benefit from the safety offered by the type system, without the need to write explicitly types in their programs.", "" ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
One exception is RPython @cite_38 , which introduces a notion of load time, during which highly dynamic features may be used, and run time, when they may not be. In contrast, Hummingbird does not need such a separation.
{ "cite_N": [ "@cite_38" ], "mid": [ "2148535470" ], "abstract": [ "Although the C-based interpreter of Python is reasonably fast, implementations on the CLI or the JVM platforms offers some advantages in terms of robustness and interoperability. Unfortunately, because the CLI and JVM are primarily designed to execute statically typed, object-oriented languages, most dynamic language implementations cannot use the native bytecodes for common operations like method calls and exception handling; as a result, they are not able to take full advantage of the power offered by the CLI and JVM. We describe a different approach that attempts to preserve the flexibility of Python, while still allowing for efficient execution. This is achieved by limiting the use of the more dynamic features of Python to an initial, bootstrapping phase. This phase is used to construct a final RPython (Restricted Python) program that is actually executed. RPython is a proper subset of Python, is statically typed, and does not allow dynamic modification of class or method definitions; however, it can still take advantage of Python features such as mixins and first-class methods and classes. This paper presents an overview of RPython, including its design and its translation to both CLI and JVM bytecode. We show how the bootstrapping phase can be used to implement advanced features, like extensible classes and generative programming. We also discuss what work remains before RPython is truly ready for general use, and compare the performance of RPython with that of other approaches." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
The authors of @cite_29 propose a system for type checking programs that use jQuery, a very sophisticated JavaScript framework. The proposed type system has special support for jQuery's abstractions, making it quite effective in that domain. On the other hand, it does not easily apply to other frameworks.
{ "cite_N": [ "@cite_29" ], "mid": [ "1760484232" ], "abstract": [ "The jQuery library defines a powerful query language for web applications' scripts to interact with Web page content. This language is exposed as jQuery's api, which is implemented to fail silently so that incorrect queries will not cause the program to halt. Since the correctness of a query depends on the structure of a page, discrepancies between the page's actual structure and what the query expects will also result in failure, but with no error traces to indicate where the mismatch occurred. This work proposes a novel type system to statically detect jQuery errors. The type system extends Typed JavaScript with local structure about the page and with multiplicities about the structure of containers. Together, these two extensions allow us to track precisely which nodes are active in a jQuery object, with minimal programmer annotation effort. We evaluate this work by applying it to sample real-world jQuery programs." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
TSCHECK @cite_32 is a tool to check the correctness of TypeScript interfaces for JavaScript libraries. TSCHECK discovers a library's API by taking a snapshot after executing the library's top-level code. It then performs checking using a separate static analysis. This is similar to Hummingbird, which tracks type information at run time and then performs static checking based on it. However, Hummingbird allows type information to be generated at any time, not just in top-level code.
{ "cite_N": [ "@cite_32" ], "mid": [ "2067104598" ], "abstract": [ "The TypeScript programming language adds optional types to JavaScript, with support for interaction with existing JavaScript libraries via interface declarations. Such declarations have been written for hundreds of libraries, but they can be difficult to write and often contain errors, which may affect the type checking and misguide code completion for the application code in IDEs. We present a pragmatic approach to check correctness of TypeScript declaration files with respect to JavaScript library implementations. The key idea in our algorithm is that many declaration errors can be detected by an analysis of the library initialization state combined with a light-weight static analysis of the library function code. Our experimental results demonstrate the effectiveness of the approach: it has found 142 errors in the declaration files of 10 libraries, with an analysis time of a few minutes per library and with a low number of false positives. Our analysis of how programmers use library interface declarations furthermore reveals some practical limitations of the TypeScript type system." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
* Related Uses of Caching. Several researchers have proposed systems that use caching in a way related to Hummingbird's. The authors of @cite_33 reduce the overhead of checking data structure contracts (e.g., "this is a binary search tree") at run time by modifying nodes to hold key verification properties. This essentially caches those properties. However, because the properties are complex, the process of caching them is not automated.
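As a toy illustration of caching verification properties inside the structure itself (not the cited system, which handles far richer contracts), the Ruby sketch below has each node of an immutable binary search tree cache its subtree's minimum and maximum key, so the BST invariant can be re-checked locally at construction time instead of by a full traversal.

```ruby
# Toy sketch: nodes cache subtree min/max so the BST check is local (immutable tree).
class Node
  attr_reader :key, :left, :right, :min, :max

  def initialize(key, left = nil, right = nil)
    @key, @left, @right = key, left, right
    # cached verification properties, computed once from the children's caches
    @min = left  ? left.min  : key
    @max = right ? right.max : key
    # local check: the children were verified when built, so this suffices
    ok = (left.nil? || left.max <= key) && (right.nil? || right.min >= key)
    raise "BST invariant violated" unless ok
  end
end

root = Node.new(5, Node.new(2), Node.new(9))   # each check touches only cached bounds
```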
{ "cite_N": [ "@cite_33" ], "mid": [ "2278155662" ], "abstract": [ "Executable formal contracts help verify a program at runtime when static verification fails. However, these contracts may be prohibitively slow to execute, especially when they describe the transformations of data structures. In fact, often an efficient data structure operation with O(log(n)) running time executes in O(n log(n)) when naturally written specifications are executed at run time." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
@cite_16 propose memoizing run-time assertion checking to improve performance (a sketch follows this record). This is similar to Hummingbird's type check caching, though much more sophisticated, because the cached assertions arise from a rich logic.
{ "cite_N": [ "@cite_16" ], "mid": [ "2263225824" ], "abstract": [ "The use of annotations, referred to as assertions or contracts, to describe program properties for which run-time tests are to be generated, has become frequent in dynamic programing languages. However, the frameworks proposed to support such run-time testing generally incur high time and or space overheads over standard program execution. We present an approach for reducing this overhead that is based on the use of memoization to cache intermediate results of check evaluation, avoiding repeated checking of previously verified properties. Compared to approaches that reduce checking frequency, our proposal has the advantage of being exhaustive (i.e., all tests are checked at all points) while still being much more efficient than standard run-time checking. Compared to the limited previous work on memoization, it performs the task without requiring modifications to data structure representation or checking code. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao run-time checking framework, which allows us to provide an operational semantics with checks and caching. We also report on a prototype implementation and provide some experimental results that support that using a relatively small cache leads to significant decreases in run-time checking overhead." ] }
1604.03641
2951357319
Dynamic languages such as Ruby, Python, and JavaScript have many compelling benefits, but the lack of static types means subtle errors can remain latent in code for a long time. While many researchers have developed various systems to bring some of the benefits of static types to dynamic languages, prior approaches have trouble dealing with metaprogramming, which generates code as the program executes. In this paper, we propose Hummingbird, a new system that uses a novel technique, just-in-time static type checking, to type check Ruby code even in the presence of metaprogramming. In Hummingbird, method type signatures are gathered dynamically at run-time, as those methods are created. When a method is called, Hummingbird statically type checks the method body against current type signatures. Thus, Hummingbird provides thorough static checks on a per-method basis, while also allowing arbitrarily complex metaprogramming. For performance, Hummingbird memoizes the static type checking pass, invalidating cached checks only if necessary. We formalize Hummingbird using a core, Ruby-like language and prove it sound. To evaluate Hummingbird, we applied it to six apps, including three that use Ruby on Rails, a powerful framework that relies heavily on metaprogramming. We found that all apps typecheck successfully using Hummingbird, and that Hummingbird’s performance overhead is reasonable. We applied Hummingbird to earlier versions of one Rails app and found several type errors that had been introduced and then fixed. Lastly, we demonstrate using Hummingbird in Rails development mode to typecheck an app as live updates are applied to it.
@cite_37 propose a method to incrementally update analysis results at run time as code is added, deleted, or changed (a sketch follows this record). Their analysis algorithms are designed for constraint logic programming languages and are much more complicated than Hummingbird's type checking.
{ "cite_N": [ "@cite_37" ], "mid": [ "2163660741" ], "abstract": [ "Global analyzers traditionally read and analyze the entire program at once, in a nonincremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program which has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as needs to be done with current systems. We describe how the fixed-point algorithms used in current generic analysis engines for (constraint) logic programming languages can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each one of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms." ] }