aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1603.06597 | 204991346 | The Domain Name System (DNS) does not provide query privacy. Query obfuscation schemes have been proposed to overcome this limitation, but, so far, they have not been evaluated in a realistic setting. In this paper we evaluate the security of a random set range query scheme in a real-world web surfing scenario. We demonstrate that the scheme does not sufficiently obfuscate characteristic query patterns, which can be used by an adversary to determine the visited websites. We also illustrate how to thwart the attack and discuss practical challenges. Our results suggest that previously published evaluations of range queries may give a false sense of the attainable security, because they do not account for any interdependencies between queries. | The basic DNS range query scheme was introduced in @cite_6 ; there is also an improved version @cite_10 inspired by private information retrieval @cite_8 . Although the authors suggest their schemes especially for web surfing applications, they fail to demonstrate their practicability using empirical results. | {
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_8"
],
"mid": [
"2116969595",
"2169197659",
"2150307013"
],
"abstract": [
"In a society preoccupied with gradual erosion of electronic privacy, loss of privacy in current DNS queries is an important issue worth considering. From the definition, the privacy problem is to prove that none of the private data can be inferred from the information which is made public. The privacy disclosure problem in DNS Query was well analyzed by from MUE 2007. In this paper, we first analyze the \"Range Query \" from that paper, then by results of that scheme and another well-known client- to-server privacy-preserving query scheme: Two- DBServer Private Information Retrieval theory, we propose a new privacy-preserving DNS Query scheme, which was proved to achieve higher efficiency and theoretic privacy.",
"When a DNS (domain name system) client needs to look up a name, it queries DNS servers to resolve the name on the Internet. The query information from the client was passed through one or more DNS servers. While useful, in the whole query transmission, we say it can leak potentially sensitive information: what a client wants to connect to, or what the client is always paying attention to. From the definition, the privacy problem is to prove that none of the private data can be inferred from the information which is made public. We first analyzed the complete DNS query process now in use; then, from each step of the DNS query process, we discussed the privacy disclosure problem in each step of the query: client side, query transmission process and DNS server side. Finally, we proposed a simple and flexible privacy-preserving query scheme \"range query\", which could maximally decrease privacy disclosure in the whole DNS query process. And we also discuss efficiency and implementation on the range query.",
"We describe schemes that enable a user to access k replicated copies of a database (k ≥ 2) and privately retrieve information stored in the database. This means that each individual database gets no information on the identity of the item retrieved by the user. For a single database, achieving this type of privacy requires communicating the whole database, or n bits (where n is the number of bits in the database). Our schemes use the replication to gain substantial saving. In particular, we have: A two database scheme with communication complexity of O(n^{1/3}). A scheme for a constant number, k, of databases with communication complexity O(n^{1/k}). A scheme for (1/3) log_2 n databases with polylogarithmic (in n) communication complexity."
]
} |
1603.06597 | 204991346 | The Domain Name System (DNS) does not provide query privacy. Query obfuscation schemes have been proposed to overcome this limitation, but, so far, they have not been evaluated in a realistic setting. In this paper we evaluate the security of a random set range query scheme in a real-world web surfing scenario. We demonstrate that the scheme does not sufficiently obfuscate characteristic query patterns, which can be used by an adversary to determine the visited websites. We also illustrate how to thwart the attack and discuss practical challenges. Our results suggest that previously published evaluations of range queries may give a false sense of the attainable security, because they do not account for any interdependencies between queries. | Castillo-Perez and Garcia-Alfaro propose a variation of the original range query scheme @cite_6 using multiple DNS resolvers in parallel @cite_4 @cite_16 . They evaluate its performance for ENUM and ONS, two protocols that store data within the DNS infrastructure. Finally, Lu and Tsudik propose PPDNS @cite_5 , a privacy-preserving resolution service that relies on CoDoNs @cite_15 , a next-generation DNS system based on distributed hashtables and a peer-to-peer infrastructure, which has not been widely adopted so far. | {
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_16"
],
"mid": [
"1576924932",
"2169197659",
"2154473393",
"2083158002",
"2164875918"
],
"abstract": [
"",
"When a DNS (domain name system) client needs to look up a name, it queries DNS servers to resolve the name on the Internet. The query information from the client was passed through one or more DNS servers. While useful, in the whole query transmission, we say it can leak potentially sensitive information: what a client wants to connect to, or what the client is always paying attention to. From the definition, the privacy problem is to prove that none of the private data can be inferred from the information which is made public. We first analyzed the complete DNS query process now in use; then, from each step of the DNS query process, we discussed the privacy disclosure problem in each step of the query: client side, query transmission process and DNS server side. Finally, we proposed a simple and flexible privacy-preserving query scheme \"range query\", which could maximally decrease privacy disclosure in the whole DNS query process. And we also discuss efficiency and implementation on the range query.",
"Privacy leaks are an unfortunate and an integral part of the current Internet domain name resolution. Each DNS query generated by a user reveals -- to one or more DNS servers -- the origin and the target of that query. Over time, users' communication (e.g., browsing) patterns might become exposed to entities with little or no trust. Current DNS privacy leaks stem from fundamental features of DNS and are not easily fixable by simple patches. Moreover, privacy issues have been overlooked by DNS security efforts (such as DNSSEC) and are thus likely to propagate into future versions of DNS. In order to mitigate privacy issues in DNS, this paper proposes a Privacy-Preserving DNS (PPDNS), that offers privacy during domain name resolution. PPDNS is based on distributed hash tables (DHTs), an alternative naming infrastructure, and computational private information retrieval (cPIR), an advanced cryptographic construct. PPDNS takes advantage of the DHT index structure to provide name resolution query privacy, while leveraging cPIR to reduce communication overhead for bandwidth-sensitive clients. Our analysis shows that PPDNS is a viable approach for obtaining a reasonably high level of privacy for name resolution queries. PPDNS also serves as a demonstration of blending advanced systems techniques with their cryptographic counterparts.",
"Name services are critical for mapping logical resource names to physical resources in large-scale distributed systems. The Domain Name System (DNS) used on the Internet, however, is slow, vulnerable to denial of service attacks, and does not support fast updates. These problems stem fundamentally from the structure of the legacy DNS.This paper describes the design and implementation of the Cooperative Domain Name System (CoDoNS), a novel name service, which provides high lookup performance through proactive caching, resilience to denial of service attacks through automatic load-balancing, and fast propagation of updates. CoDoNS derives its scalability, decentralization, self-organization, and failure resilience from peer-to-peer overlays, while it achieves high performance using the Beehive replication framework. Cryptographic delegation, instead of host-based physical delegation, limits potential malfeasance by namespace operators and creates a competitive market for namespace management. Backwards compatibility with existing protocols and wire formats enables CoDoNS to serve as a backup for legacy DNS, as well as a complete replacement. Performance measurements from a real-life deployment of the system in PlanetLab shows that CoDoNS provides fast lookups, automatically reconfigures around faults without manual involvement and thwarts distributed denial of service attacks by promptly redistributing load across nodes.",
"The rise of new Internet services, especially those related to the integration of people and physical objects to the net, makes visible the limitations of the DNS protocol. The exchange of data through DNS procedures flows today into hostile networks as clear text. Packets within this exchange can easily be captured by intermediary nodes in the resolution path and eventually disclosed. Privacy issues may thus arise if sensitive data is captured and sold with malicious purposes. We evaluate in this paper two DNS privacy-preserving approaches recently presented in the literature. We discuss some benefits and limitations of these proposals, and we point out the necessity of additional measures to enhance their security."
]
} |
1603.06597 | 204991346 | The Domain Name System (DNS) does not provide query privacy. Query obfuscation schemes have been proposed to overcome this limitation, but, so far, they have not been evaluated in a realistic setting. In this paper we evaluate the security of a random set range query scheme in a real-world web surfing scenario. We demonstrate that the scheme does not sufficiently obfuscate characteristic query patterns, which can be used by an adversary to determine the visited websites. We also illustrate how to thwart the attack and discuss practical challenges. Our results suggest that previously published evaluations of range queries may give a false sense of the attainable security, because they do not account for any interdependencies between queries. | The aforementioned publications study the security of range queries for singular queries issued independently from each other. In contrast, @cite_0 observes that consecutively issued queries that are dependent on each other have implications for security. They describe a timing attack that allows an adversary to determine the actually desired website and show that consecutive queries have to be serialized in order to prevent the attack. | {
"cite_N": [
"@cite_0"
],
"mid": [
"37081517"
],
"abstract": [
"We propose a dedicated DNS Anonymity Service which protects users' privacy. The design consists of two building blocks: a broadcast scheme for the distribution of a \"top list\" of DNS hostnames, and low-latency Mixes for requesting the remaining hostnames unobservably. We show that broadcasting the 10,000 most frequently queried hostnames allows zero-latency lookups for over 80% of DNS queries at reasonable cost. We demonstrate that the performance of the previously proposed Range Queries approach severely suffers from high lookup latencies in a real-world scenario."
]
} |
1603.06317 | 2952929338 | Robots that autonomously manipulate objects within warehouses have the potential to shorten the package delivery time and improve the efficiency of the e-commerce industry. In this paper, we present a robotic system that is capable of both picking and placing general objects in warehouse scenarios. Given a target object, the robot autonomously detects it from a shelf or a table and estimates its full 6D pose. With this pose information, the robot picks the object using its gripper, and then places it into a container or at a specified location. We describe our pick-and-place system in detail while highlighting our design principles for the warehouse settings, including the perception method that leverages knowledge about its workspace, three grippers designed to handle a large variety of different objects in terms of shape, weight and material, and grasp planning in cluttered scenarios. We also present extensive experiments to evaluate the performance of our picking system and demonstrate that the robot is competent to accomplish various tasks in warehouse settings, such as picking a target item from a tight space, grasping different objects from the shelf, and performing pick-and-place tasks on the table. | Previous work on warehouse automation mainly focuses on autonomous transport, e.g., delivering packages using Automated Guided Vehicles (AGVs) @cite_18 @cite_23 . Beyond that, @cite_19 describes an autonomous robot for indoor light logistics with limited manipulation ability. This robot is designed for transporting packages to their destination in a pharmaceutical warehouse. Although it is able to pick up packages from the ground, its manipulation capability is limited and unsuitable for e-commerce applications, which require grasping various objects from shelves and tables. Manipulation in restricted spaces such as boxes and shelves leads to difficult high-dimensional motion planning problems.
To this end, @cite_12 proposed a sample-based motion planning algorithm that performs local spline refinement to compute smooth, collision-free trajectories; it works well even in environments with narrow passages. | {
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_12",
"@cite_23"
],
"mid": [
"2107678031",
"2155447845",
"2016956950",
""
],
"abstract": [
"In this paper we describe some of the key technologies of a mobile manipulator that are used for package transportation in a pharmaceutical warehouse. The paper presents, beyond the functional aspects of the system, the main modules of the software that controls the mobile manipulator. The robot is a demonstrator of technologies for transporting safely and efficiently goods in partially structured, dynamic and public environments. Several issues are discussed such as the design and integration of mechanical elements, the development of non invasive localization and guidance procedures, the design and control of grasping devices for specific box storage combinations, and the development of testing, verification and validation procedures satisfying strict pharmaceutical regulations. Results of the laboratory tests demonstrate the capability of the prototype.",
"The paper describes an application of computer vision in the autonomous guidance of a traditional forklift truck, ROBOLIFT, which is a product of Elsag Bailey Telerobot and FIAT OM. Computer vision represents the main sensory system for both navigation and load recognition. The system is now shifting from the prototype stage to production and commercialization. Field tests have been carried out and results are reported.",
"We present a novel trajectory computation algorithm to smooth piecewise linear collision-free trajectories computed by sample-based motion planners. Our approach uses cubic B-splines to generate trajectories that are C2 almost everywhere, except on a few isolated points. The algorithm performs local spline refinement to compute smooth, collision-free trajectories and it works well even in environments with narrow passages. We also present a fast and reliable algorithm for collision checking between a robot and the environment along the B-spline trajectories. We highlight the performance of our algorithm on complex benchmarks, including path computation for rigid and articulated models in cluttered environments.",
""
]
} |
1603.06317 | 2952929338 | Robots that autonomously manipulate objects within warehouses have the potential to shorten the package delivery time and improve the efficiency of the e-commerce industry. In this paper, we present a robotic system that is capable of both picking and placing general objects in warehouse scenarios. Given a target object, the robot autonomously detects it from a shelf or a table and estimates its full 6D pose. With this pose information, the robot picks the object using its gripper, and then places it into a container or at a specified location. We describe our pick-and-place system in detail while highlighting our design principles for the warehouse settings, including the perception method that leverages knowledge about its workspace, three grippers designed to handle a large variety of different objects in terms of shape, weight and material, and grasp planning in cluttered scenarios. We also present extensive experiments to evaluate the performance of our picking system and demonstrate that the robot is competent to accomplish various tasks in warehouse settings, such as picking a target item from a tight space, grasping different objects from the shelf, and performing pick-and-place tasks on the table. | Another area related to our system is bin-picking, which addresses the task of automatically picking and isolating single items from a given bin @cite_20 @cite_25 @cite_5 @cite_27 . However, in bin-picking tasks, the working space of the robot is only the given bin, which is a structured, confined, and relatively easy operating area compared to our warehouse setting, which includes a table and a shelf with 12 bins. | {
"cite_N": [
"@cite_5",
"@cite_27",
"@cite_25",
"@cite_20"
],
"mid": [
"2005775757",
"2076363786",
"2080342258",
"2043690242"
],
"abstract": [
"Grasping individual objects from an unordered pile in a box has been investigated in static scenarios so far. In this paper, we demonstrate bin picking with an anthropomorphic mobile robot. To this end, we extend global navigation techniques by precise local alignment with a transport box. Objects are detected in range images using a shape primitive-based approach. Our approach learns object models from single scans and employs active perception to cope with severe occlusions. Grasps and arm motions are planned in an efficient local multiresolution height map. All components are integrated and evaluated in a bin picking and part delivery task.",
"We present a method that estimates graspability measures on a single depth map for grasping objects randomly placed in a bin. Our method represents a gripper model by using two mask images, one describing a contact region that should be filled by a target object for stable grasping, and the other describing a collision region that should not be filled by other objects to avoid collisions during grasping. The graspability measure is computed by convolving the mask images with binarized depth maps, which are thresholded differently in each region according to the minimum height of the 3D points in the region and the length of the gripper. Our method does not assume any 3-D model of objects, thus applicable to general objects. Our representation of the gripper model using the two mask images is also applicable to general grippers, such as multi-finger and vacuum grippers. We apply our method to bin picking of piled objects using a robot arm and demonstrate fast pick-and-place operations for various industrial objects.",
"We present a practical vision-based robotic bin-picking system that performs detection and three-dimensional pose estimation of objects in an unstructured bin using a novel camera design, picks up parts from the bin, and performs error detection and pose correction while the part is in the gripper. Two main innovations enable our system to achieve real-time robust and accurate operation. First, we use a multi-flash camera that extracts robust depth edges. Second, we introduce an efficient shape-matching algorithm called fast directional chamfer matching (FDCM), which is used to reliably detect objects and estimate their poses. FDCM improves the accuracy of chamfer matching by including edge orientation. It also achieves massive improvements in matching speed using line-segment approximations of edges, a three-dimensional distance transform, and directional integral images. We empirically show that these speedups, combined with the use of bounds in the spatial and hypothesis domains, give the algorithm sub...",
"This paper proposes a method for bin-picking for objects without assuming the precise geometrical model of objects. We consider the case where the shape of objects are not uniform but are similarly approximated by cylinders. By using the point cloud of a single object, we extract the probabilistic properties with respect to the difference between an object and a cylinder and consider applying the probabilistic properties to the pick-and-place motion planner of an object stacked on a table. By using the probabilistic properties, we can also realize the contact state where a finger maintain contact with the target object while avoiding contact with other objects. We further consider approximating the region occupied by fingers by a rectangular parallelepiped. The pick-and-place motion is planned by using a set of regions in combination with the probabilistic properties. Finally, the effectiveness of the proposed method is confirmed by some numerical examples and experimental result."
]
} |
1603.06317 | 2952929338 | Robots that autonomously manipulate objects within warehouses have the potential to shorten the package delivery time and improve the efficiency of the e-commerce industry. In this paper, we present a robotic system that is capable of both picking and placing general objects in warehouse scenarios. Given a target object, the robot autonomously detects it from a shelf or a table and estimates its full 6D pose. With this pose information, the robot picks the object using its gripper, and then places it into a container or at a specified location. We describe our pick-and-place system in detail while highlighting our design principles for the warehouse settings, including the perception method that leverages knowledge about its workspace, three grippers designed to handle a large variety of different objects in terms of shape, weight and material, and grasp planning in cluttered scenarios. We also present extensive experiments to evaluate the performance of our picking system and demonstrate that the robot is competent to accomplish various tasks in warehouse settings, such as picking a target item from a tight space, grasping different objects from the shelf, and performing pick-and-place tasks on the table. | The perception problem in the warehouse setting is an instance of general object detection, segmentation and pose estimation, which is widely researched in computer vision. But this perception problem also has its own characteristics: it typically uses multi-modal vision (not only RGB images) and is tightly coupled with the subsequent grasping movement @cite_26 . To accurately manipulate the target object, our perception methods need to output the object's full 6D pose. Traditional computer vision methods usually output bounding boxes with highly likely object locations on the input RGB images.
These representations are difficult to use in the warehouse picking context, as bounding boxes without depth information are of little use for grasping and manipulation. Recently, there has been some work on creating RGB-D datasets @cite_8 @cite_4 for improving object detection and pose estimation in warehouse environments. Specifically, the multi-class segmentation used in @cite_21 helped the team produce the winning entry to the APC 2015. This method outputs shape information to indicate the location of the target object. Although shape information is sufficient to pick items in some situations using vacuum grippers, it is inadequate for dexterous grippers to accurately manipulate objects. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_21",
"@cite_8"
],
"mid": [
"2963457196",
"2221752211",
"2301578624",
"2005756025"
],
"abstract": [
"Abstract The motivation of this paper is to develop an intelligent robot assembly system using multi-modal vision for next-generation industrial assembly. The system includes two phases where in the first phase human beings demonstrate assembly to robots and in the second phase robots detect objects, plan grasps, and assemble objects following human demonstration using AI searching. A notorious difficulty to implement such a system is the bad precision of 3D visual detection. This paper presents multi-modal approaches to overcome the difficulty: It uses AR markers in the teaching phase to detect human operation, and uses point clouds and geometric constraints in the robot execution phase to avoid unexpected occlusion and noises. The paper presents several experiments to examine the precision and correctness of the approaches. It demonstrates the applicability of the approaches by integrating them with graph model-based motion planning, and by executing the results on industrial robots in real-world scenarios.",
"An important logistics application of robotics involves manipulators that pick-and-place objects placed in warehouse shelves. A critical aspect of this task corresponds to detecting the pose of a known object in the shelf using visual data. Solving this problem can be assisted by the use of an RGBD sensor, which also provides depth information beyond visual data. Nevertheless, it remains a challenging problem since multiple issues need to be addressed, such as low illumination inside shelves, clutter, texture-less and reflective objects as well as the limitations of depth sensors. This letter provides a new rich dataset for advancing the state-of-the-art in RGBD-based 3D object pose estimation, which is focused on the challenges that arise when solving warehouse pick-and-place tasks. The publicly available dataset includes thousands of images and corresponding ground truth data for the objects used during the first Amazon Picking Challenge at different poses and clutter conditions. Each image is accompanied with ground truth information to assist in the evaluation of algorithms for object detection. To show the utility of the dataset, a recent algorithm for RGBD-based pose estimation is evaluated in this letter. Given the measured performance of the algorithm on the dataset, this letter shows how it is possible to devise modifications and improvements to increase the accuracy of pose estimation algorithms. This process can be easily applied to a variety of different methodologies for object pose detection and improve performance in the domain of warehouse pick-and-place.",
"We present a method for multi-class segmentation from RGB-D data in a realistic warehouse picking setting. The method computes pixel-wise probabilities and combines them to find a coherent object segmentation. It reliably segments objects in cluttered scenarios, even when objects are translucent, reflective, highly deformable, have fuzzy surfaces, or consist of loosely coupled components. The robust performance results from the exploitation of problem structure inherent to the warehouse setting. The proposed method proved its capabilities as part of our winning entry to the 2015 Amazon Picking Challenge. We present a detailed experimental analysis of the contribution of different information sources, compare our method to standard segmentation techniques, and assess possible extensions that further enhance the algorithm's capabilities. We release our software and data sets as open source.",
"The state of the art in computer vision has rapidly advanced over the past decade largely aided by shared image datasets. However, most of these datasets tend to consist of assorted collections of images from the web that do not include 3D information or pose information. Furthermore, they target the problem of object category recognition—whereas solving the problem of object instance recognition might be sufficient for many robotic tasks. To address these issues, we present a highquality, large-scale dataset of 3D object instances, with accurate calibration information for every image. We anticipate that “solving” this dataset will effectively remove many perceptionrelated problems for mobile, sensing-based robots. The contributions of this work consist of: (1) BigBIRD, a dataset of 100 objects (and growing), composed of, for each object, 600 3D point clouds and 600 high-resolution (12 MP) images spanning all views, (2) a method for jointly calibrating a multi-camera system, (3) details of our data collection system, which collects all required data for a single object in under 6 minutes with minimal human effort, and (4) multiple software components (made available in open source), used to automate multi-sensor calibration and the data collection process. All code and data are available at http: rll.eecs.berkeley.edu bigbird."
]
} |
1603.06068 | 2309017829 | Being based on Web technologies, Linked Data is distributed and decentralised in its nature. Hence, for the purpose of finding relevant Linked Data on the Web, search indices play an important role. Also for avoiding network communication overhead and latency, applications rely on indices or caches over Linked Data. These indices and caches are based on local copies of the original data and, thereby, introduce redundancy. Furthermore, as changes at the original Linked Data sources are not automatically propagated to the local copies, there is a risk of having inaccurate indices and caches due to outdated information. In this paper I discuss and compare methods for measuring the accuracy of indices. I will present different measures which have been used in related work and evaluate their advantages and disadvantages from a theoretic point of view as well as from a practical point of view by analysing their behaviour on real world data in an empirical experiment. | In recent years, various index models over LOD have been proposed. Many of them focus on specific aspects of the data or are dedicated to supporting application-specific tasks. When looking at the RDF basis of LOD, one also has to consider the work on indices for RDF triple stores, such as Hexastore @cite_0 or RDF3X @cite_1 . These indices are intended for optimising access to a single, centrally managed data storage solution. In this case, accuracy of the index is not an issue, as all changes to the data are under the control of the storage solution and are reflected in the index immediately. | {
"cite_N": [
"@cite_0",
"@cite_1"
],
"mid": [
"2135577024",
"2000656232"
],
"abstract": [
"Despite the intense interest towards realizing the Semantic Web vision, most existing RDF data management schemes are constrained in terms of efficiency and scalability. Still, the growing popularity of the RDF format arguably calls for an effort to offset these drawbacks. Viewed from a relational-database perspective, these constraints are derived from the very nature of the RDF data model, which is based on a triple format. Recent research has attempted to address these constraints using a vertical-partitioning approach, in which separate two-column tables are constructed for each property. However, as we show, this approach suffers from similar scalability drawbacks on queries that are not bound by RDF property value. In this paper, we propose an RDF storage scheme that uses the triple nature of RDF as an asset. This scheme enhances the vertical partitioning idea and takes it to its logical conclusion. RDF data is indexed in six possible ways, one for each possible ordering of the three RDF elements. Each instance of an RDF element is associated with two vectors; each such vector gathers elements of one of the other types, along with lists of the third-type resources attached to each vector element. Hence, a sextuple-indexing scheme emerges. This format allows for quick and scalable general-purpose query processing; it confers significant advantages (up to five orders of magnitude) compared to previous approaches for RDF data management, at the price of a worst-case five-fold increase in index space. We experimentally document the advantages of our approach on real-world and synthetic data sets with practical queries.",
"RDF is a data model for schema-free structured information that is gaining momentum in the context of Semantic-Web data, life sciences, and also Web 2.0 platforms. The \"pay-as-you-go\" nature of RDF and the flexible pattern-matching capabilities of its query language SPARQL entail efficiency and scalability challenges for complex queries including long join paths. This paper presents the RDF-3X engine, an implementation of SPARQL that achieves excellent performance by pursuing a RISC-style architecture with streamlined indexing and query processing. The physical design is identical for all RDF-3X databases regardless of their workloads, and completely eliminates the need for index tuning by exhaustive indexes for all permutations of subject-property-object triples and their binary and unary projections. These indexes are highly compressed, and the query processor can aggressively leverage fast merge joins with excellent performance of processor caches. The query optimizer is able to choose optimal join orders even for complex queries, with a cost model that includes statistical synopses for entire join paths. Although RDF-3X is optimized for queries, it also provides good support for efficient online updates by means of a staging architecture: direct updates to the main database indexes are deferred, and instead applied to compact differential indexes which are later merged into the main indexes in a batched manner. Experimental studies with several large-scale datasets with more than 50 million RDF triples and benchmark queries that include pattern matching, manyway star-joins, and long path-joins demonstrate that RDF-3X can outperform the previously best alternatives by one or two orders of magnitude."
]
} |
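The sextuple-indexing idea described in the Hexastore abstract above — one index per ordering of (subject, predicate, object) — can be sketched in a few lines. The function name and the nested-dict layout are illustrative assumptions for clarity; the actual Hexastore design uses compact vector-based storage rather than Python dicts.

```python
from itertools import permutations

def build_hexastore(triples):
    # One index per ordering of (s)ubject, (p)redicate, (o)bject:
    # spo, sop, pso, pos, osp, ops — six indexes in total.
    indexes = {perm: {} for perm in permutations("spo")}
    for s, p, o in triples:
        vals = {"s": s, "p": p, "o": o}
        for perm, idx in indexes.items():
            k1, k2, k3 = (vals[c] for c in perm)
            # Two-level nesting: first key -> second key -> set of third values
            idx.setdefault(k1, {}).setdefault(k2, set()).add(k3)
    return indexes
```

With such a structure, any triple pattern with one or two bound positions can be answered by picking the matching permutation, e.g. all objects for a bound (subject, predicate) pair come from the `("s", "p", "o")` index.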
1603.06068 | 2309017829 | Being based on Web technologies, Linked Data is distributed and decentralised in its nature. Hence, for the purpose of finding relevant Linked Data on the Web, search indices play an important role. Also for avoiding network communication overhead and latency, applications rely on indices or caches over Linked Data. These indices and caches are based on local copies of the original data and, thereby, introduce redundancy. Furthermore, as changes at the original Linked Data sources are not automatically propagated to the local copies, there is a risk of having inaccurate indices and caches due to outdated information. In this paper I discuss and compare methods for measuring the accuracy of indices. I will present different measures which have been used in related work and evaluate their advantages and disadvantages from a theoretic point of view as well as from a practical point of view by analysing their behaviour on real world data in an empirical experiment. | More specific to LOD are indices for optimising federated queries @cite_16 , on-demand queries on the Web @cite_2 or looking up data sources relevant to particular schema patterns @cite_3 . However, most of these approaches do not deal with index accuracy either. Their focus is more on how to implement or make use of the index in specific scenarios. | {
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_2"
],
"mid": [
"1993786250",
"1966915805",
"47437698"
],
"abstract": [
"With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.",
"We present SchemEX, an approach and tool for a stream-based indexing and schema extraction of Linked Open Data (LOD) at web-scale. The schema index provided by SchemEX can be used to locate distributed data sources in the LOD cloud. It serves typical LOD information needs such as finding sources that contain instances of one specific data type, of a given set of data types (so-called type clusters), or of instances in type clusters that are connected by one or more common properties (so-called equivalence classes). The entire process of extracting the schema from triples and constructing an index is designed to have linear runtime complexity. Thus, the schema index can be computed on-the-fly while the triples are crawled and provided as a stream by a linked data spider. To demonstrate the web-scalability of our approach, we have computed a SchemEX index over the Billion Triples Challenge (BTC) dataset 2011 consisting of 2,170 million triples. In addition, we have computed the SchemEX index on a dataset with 11 million triples. We use this smaller dataset for conducting a detailed qualitative analysis. We are capable of locating relevant data sources with recall between 71 and 98 and a precision between 74 and 100 at a window size of 100 K triples observed in the stream and depending on the complexity of the query, i.e. if one wants to find specific data types, type clusters or equivalence classes.",
"In this paper we analyse the sensitivity of twelve prototypical Linked Data index models towards evolving data. Thus, we consider the reliability and accuracy of results obtained from an index in scenarios where the original data has changed after having been indexed. Our analysis is based on empirical observations over real world data covering a time span of more than one year. The quality of the index models is evaluated w.r.t. their ability to give reliable estimations of the distribution of the indexed data. To this end we use metrics such as perplexity, cross-entropy and Kullback-Leibler divergence. Our experiments show that all considered index models are affected by the evolution of data, but to different degrees and in different ways. We also make the interesting observation that index models based on schema information seem to be relatively stable for estimating densities even if the schema elements diverge a lot."
]
} |
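The index-accuracy measures named in the abstract above (cross-entropy, Kullback-Leibler divergence between the distribution the index assumed and the current data distribution) can be sketched as follows. The function name and the smoothing constant `eps` are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q): how much the current data distribution p diverges
    # from the distribution q captured by a (possibly stale) index.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()  # normalise to probability vectors
    # eps guards against log(0) for empty bins
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

A divergence near zero indicates the index still describes the data well; larger values signal that the indexed copy has drifted from the source.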
1603.06036 | 2949561619 | Fractal analysis has been widely used in computer vision, especially in texture image processing and texture analysis. The key concept of fractal-based image model is the fractal dimension, which is invariant to bi-Lipschitz transformation of image, and thus capable of representing intrinsic structural information of image robustly. However, the invariance of fractal dimension generally does not hold after filtering, which limits the application of fractal-based image model. In this paper, we propose a novel fractal dimension invariant filtering (FDIF) method, extending the invariance of fractal dimension to filtering operations. Utilizing the notion of local self-similarity, we first develop a local fractal model for images. By adding a nonlinear post-processing step behind anisotropic filter banks, we demonstrate that the proposed filtering method is capable of preserving the local invariance of the fractal dimension of image. Meanwhile, we show that the FDIF method can be re-instantiated approximately via a CNN-based architecture, where the convolution layer extracts anisotropic structure of image and the nonlinear layer enhances the structure via preserving local fractal dimension of image. The proposed filtering method provides us with a novel geometric interpretation of CNN-based image model. Focusing on a challenging image processing task --- detecting complicated curves from the texture-like images, the proposed method obtains superior results to the state-of-art approaches. | Fractal-based image model has been widely used to solve many problems of computer vision, including, texture analysis @cite_27 , bio-medical image processing @cite_40 , and image quality assessment @cite_19 . 
The local fractal analysis method in @cite_26 and the spectrum of fractal dimensions in @cite_33 @cite_16 exploit the bi-Lipschitz invariance of fractal dimension for texture classification, yielding features that are robust to deformation and scale changes of textures. Because the local self-similarity of images is ubiquitous both within and across scales @cite_28 @cite_11 , natural images can also be modeled as fractals locally @cite_15 @cite_44 . Recently, the fractal model of natural images has been applied to image super-resolution @cite_48 @cite_32 , where local fractal analysis is used to enhance the image gradient adaptively. In @cite_40 , a fractal-based dissimilarity measure is proposed to analyze MRI images. However, because the invariance of fractal dimension does not hold after filtering, it is difficult to merge fractal analysis into other image processing methods. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_48",
"@cite_32",
"@cite_44",
"@cite_19",
"@cite_40",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2146622769",
"",
"2534320940",
"2039539938",
"",
"2158270878",
"",
"",
"",
"2078206416",
"2014801693",
"1976416062"
],
"abstract": [
"We address the problem of developing discriminative, yet invariant, features for texture classification. Texture variations due to changes in scale are amongst the hardest to handle. One of the most successful methods of dealing with such variations is based on choosing interest points and selecting their characteristic scales [ PAMI 2005]. However, selecting a characteristic scale can be unstable for many textures. Furthermore, the reliance on an interest point detector and the inability to evaluate features densely can be serious limitations. Fractals present a mathematically well founded alternative to dealing with the problem of scale. However, they have not become popular as texture features due to their lack of discriminative power. This is primarily because: (a) fractal based classification methods have avoided statistical characterisations of textures (which is essential for accurate analysis) by using global features; and (b) fractal dimension features are unable to distinguish between key texture primitives such as edges, corners and uniform regions. In this paper, we overcome these drawbacks and develop local fractal features that are evaluated densely. The features are robust as they do not depend on choosing interest points or characteristic scales. Furthermore, it is shown that the local fractal dimension is invariant to local bi-Lipschitz transformations whereas its extension is able to correctly distinguish between fundamental texture primitives. Textures are characterised statistically by modelling the full joint PDF of these features. This allows us to develop a texture classification framework which is discriminative, robust and achieves state-of-the-art performance as compared to affine invariant and fractal based methods.",
"",
"Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.",
"In this paper, we propose a single image super-resolution and enhancement algorithm using local fractal analysis. If we treat the pixels of a natural image as a fractal set, the image gradient can then be regarded as a measure of the fractal set. According to the scale invariance (a special case of bi-Lipschitz invariance) feature of fractal dimension, we will be able to estimate the gradient of a high-resolution image from that of a low-resolution one. Moreover, the high-resolution image can be further enhanced by preserving the local fractal length of gradient during the up-sampling process. We show that a regularization term based on the scale invariance of fractal dimension and length can be effective in recovering details of the high-resolution image. Analysis is provided on the relation and difference among the proposed approach and some other state of the art interpolation methods. Experimental results show that the proposed method has superior super-resolution and enhancement results as compared to other competitors.",
"",
"This paper addresses the problems of 1) representing natural shapes such as mountains, trees, and clouds, and 2) computing their description from image data. To solve these problems, we must be able to relate natural surfaces to their images; this requires a good model of natural surface shapes. Fractal functions are a good choice for modeling 3-D natural surfaces because 1) many physical processes produce a fractal surface shape, 2) fractals are widely used as a graphics tool for generating natural-looking shapes, and 3) a survey of natural imagery has shown that the 3-D fractal surface model, transformed by the image formation process, furnishes an accurate description of both textured and shaded image regions. The 3-D fractal model provides a characterization of 3-D surfaces and their images for which the appropriateness of the model is verifiable. Furthermore, this characterization is stable over transformations of scale and linear transforms of intensity. The 3-D fractal model has been successfully applied to the problems of 1) texture segmentation and classification, 2) estimation of 3-D shape information, and 3) distinguishing between perceptually smooth'' and perceptually textured'' surfaces in the scene.",
"",
"",
"",
"\"...a blend of erudition (fascinating and sometimes obscure historical minutiae abound), popularization (mathematical rigor is relegated to appendices) and exposition (the reader need have little knowledge of the fields involved) ...and the illustrations include many superb examples of computer graphics that are works of art in their own right.\" Nature",
"Image texture provides a rich visual description of the surfaces in the scene. Many texture signatures based on various statistical descriptions and various local measurements have been developed. Existing signatures, in general, are not invariant to 3D geometric transformations, which is a serious limitation for many applications. In this paper we introduce a new texture signature, called the multifractal spectrum (MFS). The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. It provides an efficient framework combining global spatial invariance and local robust measurements. Intuitively, the MFS could be viewed as a \"better histogram\" with greater robustness to various environmental changes and the advantage of capturing some geometrical distribution information encoded in the texture. Experiments demonstrate that the MFS codes the essential structure of textures with very low dimension, and thus represents an useful tool for texture classification.",
"We propose a new high-quality and efficient single-image upscaling technique that extends existing example-based super-resolution frameworks. In our approach we do not rely on an external example database or use the whole input image as a source for example patches. Instead, we follow a local self-similarity assumption on natural images and extract patches from extremely localized regions in the input image. This allows us to reduce considerably the nearest-patch search time without compromising quality in most images. Tests, that we perform and report, show that the local self-similarity assumption holds better for small scaling factors where there are more example patches of greater relevance. We implement these small scalings using dedicated novel nondyadic filter banks, that we derive based on principles that model the upscaling process. Moreover, the new filters are nearly biorthogonal and hence produce high-resolution images that are highly consistent with the input image without solving implicit back-projection equations. The local and explicit nature of our algorithm makes it simple, efficient, and allows a trivial parallel implementation on a GPU. We demonstrate the new method ability to produce high-quality resolution enhancement, its application to video sequences with no algorithmic modification, and its efficiency to perform real-time enhancement of low-resolution video standard into recent high-definition formats."
]
} |
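A minimal box-counting estimator of fractal dimension, the key quantity in the fractal-based models surveyed above, can be sketched as follows. The function name, the choice of box sizes, and the binary-mask input are illustrative assumptions; the cited papers use more refined local, measure-based formulations.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    # mask: 2-D boolean array marking the point set (e.g., edge pixels).
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Count boxes containing at least one set pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Fractal dimension is the negative slope of log(count) vs log(box size);
    # assumes every box size yields a nonzero count.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

Sanity checks match intuition: a filled region behaves like a 2-D set and a thin line like a 1-D set.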
1603.06036 | 2949561619 | Fractal analysis has been widely used in computer vision, especially in texture image processing and texture analysis. The key concept of fractal-based image model is the fractal dimension, which is invariant to bi-Lipschitz transformation of image, and thus capable of representing intrinsic structural information of image robustly. However, the invariance of fractal dimension generally does not hold after filtering, which limits the application of fractal-based image model. In this paper, we propose a novel fractal dimension invariant filtering (FDIF) method, extending the invariance of fractal dimension to filtering operations. Utilizing the notion of local self-similarity, we first develop a local fractal model for images. By adding a nonlinear post-processing step behind anisotropic filter banks, we demonstrate that the proposed filtering method is capable of preserving the local invariance of the fractal dimension of image. Meanwhile, we show that the FDIF method can be re-instantiated approximately via a CNN-based architecture, where the convolution layer extracts anisotropic structure of image and the nonlinear layer enhances the structure via preserving local fractal dimension of image. The proposed filtering method provides us with a novel geometric interpretation of CNN-based image model. Focusing on a challenging image processing task --- detecting complicated curves from the texture-like images, the proposed method obtains superior results to the state-of-art approaches. | CNNs have been widely used to extract visual features from images, which have many successful applications. In these years, this useful tool has been introduced into many low-and middle-level vision problems, e.g., image reconstruction @cite_25 @cite_13 , super-resolution @cite_37 , dynamic texture synthesis @cite_20 , and contour detection @cite_22 @cite_3 . Currently, the physical meanings of different CNN modules are not fully comprehended. 
For example, the role of the nonlinear layer of a CNN, i.e., the rectified linear unit (ReLU), and of its output is often unclear. Many attempts have been made to comprehend CNNs in depth. Several existing feature extraction methods have been proven to be equivalent to deep CNNs, such as deformable part models in @cite_45 and random forests in @cite_51 . A pre-trained deep learning model called the scattering convolution network (SCN) is proposed in @cite_18 @cite_2 @cite_23 . This model consists of hierarchical wavelet transformations and translation-invariant operators, explaining deep learning from the viewpoint of signal processing. However, none of these methods discusses a geometrical explanation of CNNs from the viewpoint of fractal analysis. | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_22",
"@cite_3",
"@cite_45",
"@cite_23",
"@cite_2",
"@cite_51",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"54257720",
"1994906459",
"",
"2102605133",
"1960289438",
"2133257461",
"2072072671",
"1732796048",
"2037642501",
"2146337213",
"18669060"
],
"abstract": [
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"This paper constructs translation-invariant operators on L 2 .R d , which are Lipschitz-continuous to the action of diffeomorphisms. A scattering propagator is a path-ordered product of nonlinear and noncommuting operators, each of which computes the modulus of a wavelet transform. A local integration defines a windowed scattering transform, which is proved to be Lipschitz-continuous to the action of C 2 diffeomorphisms. As the window size increases, it converges to a wavelet scattering transform that is translation invariant. Scattering coefficients also provide representations of stationary processes. Expected values depend upon high-order moments and can discriminate processes having the same power spectrum. Scattering operators are extended on L 2 .G , where G is a compact Lie group, and are invariant under the action of G. Combining a scattering on L 2 .R d and on L 2 .SO.d defines a translation- and rotation-invariant scattering on L 2 .R d . © 2012 Wiley Periodicals, Inc.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Deformable part models (DPMs) and convolutional neural networks (CNNs) are two widely used tools for visual recognition. They are typically viewed as distinct approaches: DPMs are graphical models (Markov random fields), while CNNs are “black-box” non-linear classifiers. In this paper, we show that a DPM can be formulated as a CNN, thus providing a synthesis of the two ideas. Our construction involves unrolling the DPM inference algorithm and mapping each step to an equivalent CNN layer. From this perspective, it is natural to replace the standard image features used in DPMs with a learned feature extractor. We call the resulting model a DeepPyramid DPM and experimentally validate it on PASCAL VOC object detection. We find that DeepPyramid DPMs significantly outperform DPMs based on histograms of oriented gradients features (HOG) and slightly outperforms a comparable version of the recently introduced R-CNN detection system, while running significantly faster.",
"Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or \"deep,\" structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (\"contour\") features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex \"corner\" features matches well with the results from the Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features.",
"A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, visual object recognition involves the unknown object position, orientation, and scale in object recognition while speech recognition involves the unknown voice pronunciation, pitch, and speed. Recently, a new breed of deep learning algorithms have emerged for high-nuisance inference tasks that routinely yield pattern recognition systems with near- or super-human capabilities. But a fundamental question remains: Why do they work? Intuitions abound, but a coherent framework for understanding, analyzing, and synthesizing deep learning architectures has remained elusive. We answer this question by developing a new probabilistic framework for deep learning based on the Deep Rendering Model: a generative probabilistic model that explicitly captures latent nuisance variation. By relaxing the generative model to a discriminative one, we can recover two of the current leading deep learning systems, deep convolutional neural networks and random decision forests, providing insights into their successes and shortcomings, as well as a principled route to their improvement.",
"Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well.",
"We present a novel approach to low-level vision problems that combines sparse coding and deep networks pre-trained with denoising auto-encoder (DA). We propose an alternative training scheme that successfully adapts DA, originally designed for unsupervised feature learning, to the tasks of image denoising and blind inpainting. Our method's performance in the image denoising task is comparable to that of KSVD which is a widely used sparse coding technique. More importantly, in blind image inpainting task, the proposed method provides solutions to some complex problems that have not been tackled before. Specifically, we can automatically remove complex patterns like superimposed text from an image, rather than simple patterns like pixels missing at random. Moreover, the proposed method does not need the information regarding the region that requires inpainting to be given a priori. Experimental results demonstrate the effectiveness of the proposed method in the tasks of image denoising and blind inpainting. We also show that our new training scheme for DA is more effective and can improve the performance of unsupervised feature learning.",
"Videos always exhibit various pattern motions, which can be modeled according to dynamics between adjacent frames. Previous methods based on linear dynamic system can model dynamic textures but have limited capacity of representing sophisticated nonlinear dynamics. Inspired by the nonlinear expression power of deep autoencoders, we propose a novel model named dynencoder which has an autoencoder at the bottom and a variant of it at the top (named as dynpredictor). It generates hidden states from raw pixel inputs via the autoencoder and then encodes the dynamic of state transition over time via the dynpredictor. Deep dynencoder can be constructed by proper stacking strategy and trained by layer-wise pre-training and joint fine-tuning. Experiments verify that our model can describe sophisticated video dynamics and synthesize endless video texture sequences with high visual quality. We also design classification and clustering methods based on our model and demonstrate the efficacy of them on traffic scene classification and motion segmentation."
]
} |
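The "anisotropic filter bank followed by a nonlinearity" pattern that this discussion of CNN layers keeps returning to can be illustrated with a naive sketch. The kernel, test image, and function names are illustrative assumptions; real CNNs learn their filters from data rather than fixing them by hand.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: the standard CNN nonlinearity.
    return np.maximum(x, 0.0)

def conv2d_valid(image, kernel):
    # Naive 2-D 'valid' cross-correlation, as used in CNN conv layers.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-crafted anisotropic (oriented) edge filter: responds strongly
# to vertical bright-to-dark transitions, weakly elsewhere.
vertical_edge = np.array([[1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0],
                          [1.0, 0.0, -1.0]])
```

Applying `relu(conv2d_valid(image, vertical_edge))` keeps only positive filter responses, i.e., edges of one polarity — the kind of structure-extraction step the geometric interpretation above associates with the convolution-plus-nonlinearity pair.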
1603.06346 | 2302591229 | Dynamically adaptive multi-core architectures have been proposed as an effective solution to optimize performance for peak power constrained processors. In processors, the micro-architectural parameters or voltage frequency of each core can be changed at run-time, thus providing a range of power performance operating points for each core. In this paper, we propose Thread Progress Equalization (TPEq), a run-time mechanism for power constrained performance maximization of multithreaded applications running on dynamically adaptive multicore processors. Compared to existing approaches, TPEq (i) identifies and addresses two primary sources of inter-thread heterogeneity in multithreaded applications, (ii) determines the optimal core configurations in polynomial time with respect to the number of cores and configurations, and (iii) requires no modifications in the user-level source code. Our experimental evaluations demonstrate that TPEq outperforms state-of-the-art run-time power performance optimization techniques proposed in literature for dynamically adaptive multicores by up to 23 percent. | Dynamic power and resource management of multi-core processors is an issue of critical importance. @cite_46 proposed the notion of single-ISA heterogeneous architectures to maximize power efficiency while addressing temporal and spatial application variations. Their focus was primarily on multiprogrammed workloads. A number of papers have proposed scalable thread scheduling and mapping techniques for such workloads @cite_36 @cite_33 @cite_37 @cite_6 @cite_38 . Others have focused on leveraging asymmetry to increase the performance of multithreaded applications by identifying and accelerating critical sections @cite_20 @cite_4 @cite_0 @cite_10 @cite_24 . A more recent work by Craeynest et
al. @cite_22 proposes to use fairness-aware equal-progress scheduling on heterogeneous multi-cores, but it is unclear how this technique can be extended to optimal power-constrained performance maximization for adaptive multi-cores, which is the focus of this work. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_6",
"@cite_0",
"@cite_24",
"@cite_46",
"@cite_10",
"@cite_20"
],
"mid": [
"",
"",
"2085598418",
"",
"2009171545",
"2005838647",
"",
"2161309452",
"",
"2112085716",
"2020011734",
"2117169191"
],
"abstract": [
"",
"",
"Analyzing multi-threaded programs is quite challenging, but is necessary to obtain good multicore performance while saving energy. Due to synchronization, certain threads make others wait, because they hold a lock or have yet to reach a barrier. We call these critical threads, i.e., threads whose performance is determinative of program performance as a whole. Identifying these threads can reveal numerous optimization opportunities, for the software developer and for hardware. In this paper, we propose a new metric for assessing thread criticality, which combines both how much time a thread is performing useful work and how many co-running threads are waiting. We show how thread criticality can be calculated online with modest hardware additions and with low overhead. We use our metric to create criticality stacks that break total execution time into each thread's criticality component, allowing for easy visual analysis of parallel imbalance. To validate our criticality metric, and demonstrate it is better than previous metrics, we scale the frequency of the most critical thread and show it achieves the largest performance improvement. We then demonstrate the broad applicability of criticality stacks by using them to perform three types of optimizations: (1) program analysis to remove parallel bottlenecks, (2) dynamically identifying the most critical thread and accelerating it using frequency scaling to improve performance, and (3) showing that accelerating only the most critical thread allows for targeted energy reduction.",
"",
"Single-ISA heterogeneous multi-cores consisting of small (e.g., in-order) and big (e.g., out-of-order) cores dramatically improve energy- and power-efficiency by scheduling workloads on the most appropriate core type. A significant body of recent work has focused on improving system throughput through scheduling. However, none of the prior work has looked into fairness. Yet, guaranteeing that all threads make equal progress on heterogeneous multi-cores is of utmost importance for both multi-threaded and multi-program workloads to improve performance and quality-of-service. Furthermore, modern operating systems affinitize workloads to cores (pinned scheduling) which dramatically affects fairness on heterogeneous multi-cores. In this paper, we propose fairness-aware scheduling for single-ISA heterogeneous multi-cores, and explore two flavors for doing so. Equal-time scheduling runs each thread or workload on each core type for an equal fraction of the time, whereas equal-progress scheduling strives at getting equal amounts of work done on each core type. Our experimental results demonstrate an average 14% (and up to 25%) performance improvement over pinned scheduling through fairness-aware scheduling for homogeneous multi-threaded workloads; equal-progress scheduling improves performance by 32% on average for heterogeneous multi-threaded workloads. Further, we report dramatic improvements in fairness over prior scheduling proposals for multi-program workloads, while achieving system throughput comparable to throughput-optimized scheduling, and an average 21% improvement in throughput over pinned scheduling.",
"Recent research advocates asymmetric multi-core architectures, where cores in the same processor can have different performance. These architectures support single-threaded performance and multithreaded throughput at lower costs (e.g., die size and power). However, they also pose unique challenges to operating systems, which traditionally assume homogeneous hardware. This paper presents AMPS, an operating system scheduler that efficiently supports both SMP- and NUMA-style performance-asymmetric architectures. AMPS contains three components: asymmetry-aware load balancing, faster-core-first scheduling, and NUMA-aware migration. We have implemented AMPS in Linux kernel 2.6.16 and used CPU clock modulation to emulate performance asymmetry on an SMP and NUMA system. For various workloads, we show that AMPS achieves a median speedup of 1.16 with a maximum of 1.44 over stock Linux on the SMP, and a median of 1.07 with a maximum of 2.61 on the NUMA system. Our results also show that AMPS improves fairness and repeatability of application performance measurements.",
"",
"Performance of multithreaded applications is limited by a variety of bottlenecks, e.g. critical sections, barriers and slow pipeline stages. These bottlenecks serialize execution, waste valuable execution cycles, and limit scalability of applications. This paper proposes Bottleneck Identification and Scheduling in Multithreaded Applications (BIS), a cooperative software-hardware mechanism to identify and accelerate the most critical bottlenecks. BIS identifies which bottlenecks are likely to reduce performance by measuring the number of cycles threads have to wait for each bottleneck, and accelerates those bottlenecks using one or more fast cores on an Asymmetric Chip Multi-Processor (ACMP). Unlike previous work that targets specific bottlenecks, BIS can identify and accelerate bottlenecks regardless of their type. We compare BIS to four previous approaches and show that it outperforms the best of them by 15% on average. BIS' performance improvement increases as the number of cores and the number of fast cores in the system increase.",
"",
"This paper proposes and evaluates single-ISA heterogeneous multi-core architectures as a mechanism to reduce processor power dissipation. Our design incorporates heterogeneous cores representing different points in the power/performance design space; during an application's execution, system software dynamically chooses the most appropriate core to meet specific performance and power requirements. Our evaluation of this architecture shows significant energy benefits. For an objective function that optimizes for energy efficiency with a tight performance threshold, for 14 SPEC benchmarks, our results indicate a 39% average energy reduction while only sacrificing 3% in performance. An objective function that optimizes for energy-delay with looser performance bounds achieves, on average, nearly a factor of three improvements in energy-delay product while sacrificing only 22% in performance. Energy savings are substantially more than chip-wide voltage/frequency scaling.",
"Asymmetric Chip Multiprocessors (ACMPs) are becoming a reality. ACMPs can speed up parallel applications if they can identify and accelerate code segments that are critical for performance. Proposals already exist for using coarse-grained thread scheduling and fine-grained bottleneck acceleration. Unfortunately, there have been no proposals offered thus far to decide which code segments to accelerate in cases where both coarse-grained thread scheduling and fine-grained bottleneck acceleration could have value. This paper proposes Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs (UBA), a cooperative software/hardware mechanism for identifying and accelerating the most likely critical code segments from a set of multithreaded applications running on an ACMP. The key idea is a new Utility of Acceleration metric that quantifies the performance benefit of accelerating a bottleneck or a thread by taking into account both the criticality and the expected speedup. UBA outperforms the best of two state-of-the-art mechanisms by 11% for single application workloads and by 7% for two-application workloads on an ACMP with 52 small cores and 3 large cores.",
"Asymmetric (or Heterogeneous) Multiprocessors are becoming popular in the current era of multi-cores due to their power efficiency and potential performance and energy efficiency. However, scheduling of multithreaded applications in Asymmetric Multiprocessors is still a challenging problem. Scheduling algorithms for Asymmetric Multiprocessors must not only be aware of asymmetry in processor performance, but have to consider the characteristics of application threads also. In this paper, we propose a new scheduling policy, Age based scheduling, that assigns a thread with a larger remaining execution time to a fast core. Age based scheduling predicts the remaining execution time of threads based on their age, i.e., when the threads were created. These predictions are based on the insight that most threads that are created together tend to have similar execution durations. Using Age based scheduling, we improve the overall performance of several important multithreaded applications including Parsec and asymmetric benchmarks from Splash-II and Omp-SCR. Our evaluations show that Age based scheduling improves performance up to 37% compared to the state-of-the-art Asymmetric Multiprocessor scheduling policy and on average by 10.4% for the Parsec benchmarks. Our results also show that the Age based scheduling policy with profiling improves the average performance by 13.2% for the Parsec benchmarks."
]
} |
1603.06346 | 2302591229 | Dynamically adaptive multi-core architectures have been proposed as an effective solution to optimize performance for peak power constrained processors. In such processors, the micro-architectural parameters or voltage/frequency of each core can be changed at run-time, thus providing a range of power/performance operating points for each core. In this paper, we propose Thread Progress Equalization (TPEq), a run-time mechanism for power constrained performance maximization of multithreaded applications running on dynamically adaptive multicore processors. Compared to existing approaches, TPEq (i) identifies and addresses two primary sources of inter-thread heterogeneity in multithreaded applications, (ii) determines the optimal core configurations in polynomial time with respect to the number of cores and configurations, and (iii) requires no modifications in the user-level source code. Our experimental evaluations demonstrate that TPEq outperforms state-of-the-art run-time power/performance optimization techniques proposed in literature for dynamically adaptive multicores by up to 23 percent. | The work on DVFS-based dynamic adaptation of multi-core processors has made use of the sum-IPS/Watt @cite_34 or MaxBIPS @cite_42 objectives, and different optimization algorithms including distributed optimization @cite_14 @cite_27 and control theory @cite_23 @cite_44. @cite_21 present a machine learning based approach based on offline workload characterization (and online prediction) but perform DVFS adaptation at a coarse time granularity of 100 billion uops. Recently, Godycki @ have proposed reconfigurable power distribution networks to enable fast, fine-grained, per-core voltage scaling and use this to reactively (as opposed to TPEq's proactive approach) slow down stalled threads and redistribute power to working threads.
Also, unlike TPEq, this technique requires programmer-inserted hints to determine the remaining work for each thread, and uses a heuristic approach to decide the voltage level of each core. | {
"cite_N": [
"@cite_14",
"@cite_42",
"@cite_21",
"@cite_44",
"@cite_27",
"@cite_23",
"@cite_34"
],
"mid": [
"2101020117",
"2117299787",
"2027177485",
"",
"",
"2134026160",
"2081379617"
],
"abstract": [
"A growing challenge in embedded system design is coping with increasing power densities resulting from packing more and more transistors onto a small die area, which in turn transform into thermal hotspots. In the current late silicon era silicon structures have become more susceptible to transient faults and aging effects resulting from these thermal hotspots. In this paper we present an agent-based power distribution approach (TAPE) which aims to balance the power consumption of a multi many-core architecture in a pro-active manner. By further taking the system's thermal state into consideration when distributing the power throughout the chip, TAPE is able to noticeably reduce the peak temperature. In our simulation we provide a fair comparison with the state-of-the-art approaches HRTM [19] and PDTM [9] using the MiBench benchmark suite [18]. When running multiple applications simultaneously on a multi many-core architecture, we are able to achieve an 11.23% decrease in peak temperature compared to the approach that uses no thermal management [14]. At the same time we reduce the execution time (i.e. we increase the performance of the applications) by 44.2% and reduce the energy consumption by 44.4% compared to PDTM [9]. We also show that our approach exhibits higher scalability, requiring 11.9 times less communication overhead in an architecture with 96 cores compared to the state-of-the-art approaches.",
"Chip-level power and thermal implications will continue to rule as one of the primary design constraints and performance limiters. The gap between average and peak power actually widens with increased levels of core integration. As such, if per-core control of power levels (modes) is possible, a global power manager should be able to dynamically set the modes suitably. This would be done in tune with the workload characteristics, in order to always maintain a chip-level power that is below the specified budget. Furthermore, this should be possible without significant degradation of chip-level throughput performance. We analyze and validate this concept in detail in this paper. We assume a per-core DVFS (dynamic voltage and frequency scaling) knob to be available to such a conceptual global power manager. We evaluate several different policies for global multi-core power management. In this analysis, we consider various different objectives such as prioritization and optimized throughput. Overall, our results show that in the context of a workload comprised of SPEC benchmark threads, our best architected policies can come within 1% of the performance of an ideal oracle, while meeting a given chip-level power budget. Furthermore, we show that these global dynamic management policies perform significantly better than static management, even if static scheduling is given oracular knowledge.",
"The ability to cap peak power consumption is a desirable feature in modern data centers for energy budgeting, cost management, and efficient power delivery. Dynamic voltage and frequency scaling (DVFS) is a traditional control knob in the tradeoff between server power and performance. Multi-core processors and the parallel applications that take advantage of them introduce new possibilities for control, wherein workload threads are packed onto a variable number of cores and idle cores enter low-power sleep states. This paper proposes Pack & Cap, a control technique designed to make optimal DVFS and thread packing control decisions in order to maximize performance within a power budget. In order to capture the workload dependence of the performance-power Pareto frontier, a multinomial logistic regression (MLR) classifier is built using a large volume of performance counter, temperature, and power characterization data. When queried during runtime, the classifier is capable of accurately selecting the optimal operating point. We implement and validate this method on a real quad-core system running the PARSEC parallel benchmark suite. When varying the power budget during runtime, Pack & Cap meets power constraints 82% of the time even in the absence of a power measuring device. The addition of thread packing to DVFS as a control knob increases the range of feasible power constraints by an average of 21% when compared to DVFS alone and reduces workload energy consumption by an average of 51.6% compared to existing control techniques that achieve the same power range.",
"",
"",
"Optimizing the performance of a multi-core microprocessor within a power budget has recently received a lot of attention. However, most existing solutions are centralized and cannot scale well with the rapidly increasing level of core integration. While a few recent studies propose power control algorithms for many-core architectures, those solutions assume that the workload of every core is independent and therefore cannot effectively allocate power based on thread criticality to accelerate multi-threaded parallel applications, which are expected to be the primary workloads of many-core architectures. This paper presents a scalable power control solution for many-core microprocessors that is specifically designed to handle realistic workloads, i.e., a mixed group of single-threaded and multi-threaded applications. Our solution features a three-layer design. First, we adopt control theory to precisely control the power of the entire chip to its chip-level budget by adjusting the aggregated frequency of all the cores on the chip. Second, we dynamically group cores running the same applications and then partition the chip-level aggregated frequency quota among different groups for optimized overall microprocessor performance. Finally, we partition the group-level frequency quota among the cores in each group based on the measured thread criticality for shorter application completion time. As a result, our solution can optimize the microprocessor performance while precisely limiting the chip-level power consumption below the desired budget. Empirical results on a 12-core hardware testbed show that our control solution can provide precise power control, as well as 17% and 11% better application performance than two state-of-the-art solutions, on average, for mixed PARSEC and SPEC benchmarks.
Furthermore, our extensive simulation results for 32, 64, and 128 cores, as well as overhead analysis for up to 4,096 cores, demonstrate that our solution is highly scalable to many-core architectures.",
"Fine-grained dynamic voltage/frequency scaling (DVFS) demonstrates great promise for improving the energy-efficiency of chip-multiprocessors (CMPs), which have emerged as a popular way for designers to exploit growing transistor budgets. We examine the tradeoffs involved in the choice of both DVFS control scheme and method by which the processor is partitioned into voltage/frequency islands (VFIs). We simulate real multithreaded commercial and scientific workloads, demonstrating the large real-world potential of DVFS for CMPs. Contrary to the conventional wisdom, we find that the benefits of per-core DVFS are not necessarily large enough to overcome the complexity of having many independent VFIs per chip."
]
} |
1603.06067 | 2305932946 | We present a novel method for jointly learning compositional and non-compositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function. The scoring function is used to quantify the level of compositionality of each phrase, and the parameters of the function are jointly optimized with the objective for learning phrase embeddings. In experiments, we apply the adaptive joint learning method to the task of learning embeddings of transitive verb phrases, and show that the compositionality scores have strong correlation with human ratings for verb-object compositionality, substantially outperforming the previous state of the art. Moreover, our embeddings improve upon the previous best model on a transitive verb disambiguation task. We also show that a simple ensemble technique further improves the results for both tasks. | Learning embeddings of words and phrases has been widely studied, and the phrase embeddings have proven effective in many language processing tasks, such as machine translation @cite_25 @cite_20 , sentiment analysis and semantic textual similarity @cite_2 . Most of the phrase embeddings are constructed by word-level information via various kinds of composition functions like long short-term memory @cite_0 recurrent neural networks. Such composition functions should be powerful enough to efficiently encode information about all the words into the phrase embeddings. By simultaneously considering the compositionality of the phrases, our method would be helpful in saving the composition models from having to be powerful enough to perfectly encode the non-compositional phrases. As a first step towards this purpose, in this paper we have shown the effectiveness of our method on the task of learning verb phrase embeddings. | {
"cite_N": [
"@cite_0",
"@cite_25",
"@cite_20",
"@cite_2"
],
"mid": [
"",
"2165496929",
"2949888546",
"2104246439"
],
"abstract": [
"",
"We introduce a novel compositional language model that works on Predicate-Argument Structures (PASs). Our model jointly learns word representations and their composition functions using bag-of-words and dependency-based contexts. Unlike previous word-sequence-based models, our PAS-based model composes arguments into predicates by using the category information from the PAS. This enables our model to capture long-range dependencies between words and to better handle constructs such as verb-object and subject-verb-object relations. We verify this experimentally using two phrase similarity datasets and achieve results comparable to or higher than the previous best results. Our system achieves these results without the need for pretrained word vectors and using a much smaller training corpus; despite this, for the subject-verb-object dataset our model improves upon the state of the art by as much as ∼10% in relative performance.",
"Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.",
"Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)."
]
} |
1603.06067 | 2305932946 | We present a novel method for jointly learning compositional and non-compositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function. The scoring function is used to quantify the level of compositionality of each phrase, and the parameters of the function are jointly optimized with the objective for learning phrase embeddings. In experiments, we apply the adaptive joint learning method to the task of learning embeddings of transitive verb phrases, and show that the compositionality scores have strong correlation with human ratings for verb-object compositionality, substantially outperforming the previous state of the art. Moreover, our embeddings improve upon the previous best model on a transitive verb disambiguation task. We also show that a simple ensemble technique further improves the results for both tasks. | Many studies have focused on detecting the compositionality of a variety of phrases @cite_1 , including the ones on verb phrases @cite_21 @cite_29 and compound nouns @cite_8 @cite_14 . Compared to statistical feature-based methods @cite_4 @cite_28 , recent methods use word and phrase embeddings @cite_17 @cite_13 . The embedding-based methods assume that word embeddings are given in advance and as a post-processing step, learn or simply employ composition functions to compute phrase embeddings. In other words, there is no distinction between compositional and non-compositional phrases. further proposed to incorporate latent annotations (binary labels) for the compositionality of the phrases. However, binary judgments cannot consider numerical scores of the compositionality. By contrast, our method adaptively weights the compositional and non-compositional embeddings using the compositionality scoring function. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_13",
"@cite_17"
],
"mid": [
"25517462",
"1487697660",
"2176363684",
"1976632482",
"2091477393",
"",
"",
"2250977525",
"2251976333"
],
"abstract": [
"A multiword is compositional if its meaning can be expressed in terms of the meaning of its constituents. In this paper, we collect and analyse the compositionality judgments for a range of compound nouns using Mechanical Turk. Unlike existing compositionality datasets, our dataset has judgments on the contribution of constituent words as well as judgments for the phrase as a whole. We use this dataset to study the relation between the judgments at constituent level to that for the whole phrase. We then evaluate two different types of distributional models for compositionality detection – constituent based models and composition function based models. Both the models show competitive performance though the composition function based models perform slightly better. In both types, additive models perform better than their multiplicative counterparts.",
"In this paper we explore the use of selectional preferences for detecting noncompositional verb-object combinations. To characterise the arguments in a given grammatical relationship we experiment with three models of selectional preference. Two use WordNet and one uses the entries from a distributional thesaurus as classes for representation. In previous work on selectional preference acquisition, the classes used for representation are selected according to the coverage of argument tokens rather than being selected according to the coverage of argument types. In our distributional thesaurus models and one of the methods using WordNet we select classes for representing the preferences by virtue of the number of argument types that they cover, and then only tokens under these classes which are representative of the argument head data are used to estimate the probability distribution for the selectional preference model. We demonstrate a highly significant correlation between measures which use these ‘type-based’ selectional preferences and compositionality judgements from a data set used in previous research. The type-based models perform better than the models which use tokens for selecting the classes. Furthermore, the models which use the automatically acquired thesaurus entries produced the best results. The correlation for the thesaurus models is stronger than any of the individual features used in previous research on the same dataset.",
"Scarcity of multiword expression data sets raises a fundamental challenge to evaluating the systems that deal with these linguistic structures. In this work we attempt to address this problem for a subclass of multiword expressions by producing a large data set annotated by experts and validated by common statistical measures. We present a set of 1048 noun-noun compounds annotated as non-compositional, compositional, conventionalized and not conventionalized. We build this data set following common trends in previous work while trying to address some of the well known issues such as small number of annotated instances, quality of the annotations, and lack of availability of true negative instances.",
"Measuring the relative compositionality of Multi-word Expressions (MWEs) is crucial to Natural Language Processing. Various collocation based measures have been proposed to compute the relative compositionality of MWEs. In this paper, we define novel measures (both collocation based and context based measures) to measure the relative compositionality of MWEs of V-N type. We show that the correlation of these features with the human ranking is much superior to the correlation of the traditional features with the human ranking. We then integrate the proposed features and the traditional features using a SVM based ranking function to rank the collocations of V-N type based on their relative compositionality. We then show that the correlation between the ranks computed by the SVM based ranking function and human ranking is significantly better than the correlation between ranking of individual features and human ranking.",
"We investigate the use of an automatically acquired thesaurus for measures designed to indicate the compositionality of candidate multiword verbs, specifically English phrasal verbs identified automatically using a robust parser. We examine various measures using the nearest neighbours of the phrasal verb, and in some cases the neighbours of the simplex counterpart and show that some of these correlate significantly with human rankings of compositionality on the test set. We also show that whilst the compositionality judgements correlate with some statistics commonly used for extracting multiwords, the relationship is not as strong as that using the automatically constructed thesaurus.",
"",
"",
"Non-compositionality of multiword expressions is an intriguing problem that can be the source of error in a variety of NLP tasks such as language generation, machine translation and word sense disambiguation. We present methods of non-compositionality detection for English noun compounds using the unsupervised learning of a semantic composition function. Compounds which are not well modeled by the learned semantic composition function are considered noncompositional. We explore a range of distributional vector-space models for semantic composition, empirically evaluate these models, and propose additional methods which improve results further. We show that a complex function such as polynomial projection can learn semantic composition and identify non-compositionality in an unsupervised way, beating all other baselines ranging from simple to complex. We show that enforcing sparsity is a useful regularizer in learning complex composition functions. We show further improvements by training a decomposition function in addition to the composition function. Finally, we propose an EM algorithm over latent compositionality annotations that also improves the performance.",
"We present a novel unsupervised approach to detecting the compositionality of multi-word expressions. We compute the compositionality of a phrase through substituting the constituent words with their “neighbours” in a semantic vector space and averaging over the distance between the original phrase and the substituted neighbour phrases. Several methods of obtaining neighbours are presented. The results are compared to existing supervised results and achieve state-of-the-art performance on a verb-object dataset of human compositionality ratings."
]
} |
1603.06035 | 2310459088 | Learning the "blocking" structure is a central challenge for high dimensional data (e.g., gene expression data). Recently, a sparse singular value decomposition (SVD) has been used as a biclustering tool to achieve this goal. However, this model ignores the structural information between variables (e.g., gene interaction graph). Although typical graph-regularized norm can incorporate such prior graph information to get accurate discovery and better interpretability, it fails to consider the opposite effect of variables with different signs. Motivated by the development of sparse coding and graph-regularized norm, we propose a novel sparse graph-regularized SVD as a powerful biclustering tool for analyzing high-dimensional data. The key of this method is to impose two penalties including a novel graph-regularized norm ( @math ) and @math -norm ( @math ) on singular vectors to induce structural sparsity and enhance interpretability. We design an efficient Alternating Iterative Sparse Projection (AISP) algorithm to solve it. Finally, we apply our method and related ones to simulated and real data to show its efficiency in capturing natural blocking structures. | (2) Graph-regularized SVD. Graph-regularized norm has been used in many different techniques such as nonnegative matrix factorization @cite_26 . However, to our knowledge, there is yet no study to incorporate graph-regularized norm into the sparse SVD framework. We believe that sparse graph-regularized SVD is a promising tool as explored in other problems @cite_10 @cite_3 @cite_13 to enforce the smoothness of variable coefficients. In addition, @math is widely used to enforce the structural sparsity at the inter-group level @cite_15 . We can also consider it as a penalty function of SVD in future studies. | {
"cite_N": [
"@cite_26",
"@cite_10",
"@cite_3",
"@cite_15",
"@cite_13"
],
"mid": [
"2108119513",
"1544449255",
"2949935374",
"1970554427",
"2140245639"
],
"abstract": [
"Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems.",
"In many applications, the data, such as web pages and research papers, contain relation (link) structure among entities in addition to textual content information. Matrix factorization (MF) methods, such as latent semantic indexing (LSI), have been successfully used to map either content information or relation information into a lower-dimensional latent space for subsequent processing. However, how to simultaneously model both the relation information and the content information effectively with an MF framework is still an open research problem. In this paper, we propose a novel MF method called relation regularized matrix factorization (RRMF) for relational data analysis. By using relation information to regularize the content MF procedure, RRMF seamlessly integrates both the relation information and the content information into a principled framework. We propose a linear-time learning algorithm with convergence guarantee to learn the parameters of RRMF. Extensive experiments on real data sets show that RRMF can achieve state-of-the-art performance.",
"We present an extension of sparse PCA, or sparse dictionary learning, where the sparsity patterns of all dictionary elements are structured and constrained to belong to a prespecified set of shapes. This is based on a structured regularization recently introduced by [1]. While classical sparse priors only deal with , the regularization we use encodes higher-order information about the data. We propose an efficient and simple optimization procedure to solve this problem. Experiments with two practical tasks, face recognition and the study of the dynamics of a protein complex, demonstrate the benefits of the proposed structured approach over unstructured approaches.",
"We propose a new penalty function which, when used as regularization for empirical risk minimization procedures, leads to sparse estimators. The support of the sparse vector is typically a union of potentially overlapping groups of co-variates defined a priori, or a set of covariates which tend to be connected to each other when a graph of covariates is given. We study theoretical properties of the estimator, and illustrate its behavior on simulated and breast cancer gene expression data.",
"Sparse coding has received an increasing amount of interest in recent years. It is an unsupervised learning algorithm, which finds a basis set capturing high-level semantics in the data and learns sparse coordinates in terms of the basis set. Originally applied to modeling the human visual cortex, sparse coding has been shown useful for many applications. However, most of the existing approaches to sparse coding fail to consider the geometrical structure of the data space. In many real applications, the data is more likely to reside on a low-dimensional submanifold embedded in the high-dimensional ambient space. It has been shown that the geometrical information of the data is important for discrimination. In this paper, we propose a graph based algorithm, called graph regularized sparse coding, to learn the sparse representations that explicitly take into account the local manifold structure of the data. By using graph Laplacian as a smooth operator, the obtained sparse representations vary smoothly along the geodesics of the data manifold. The extensive experimental results on image classification and clustering have demonstrated the effectiveness of our proposed algorithm."
]
} |
1603.06035 | 2310459088 | Learning the "blocking" structure is a central challenge for high dimensional data (e.g., gene expression data). Recently, a sparse singular value decomposition (SVD) has been used as a biclustering tool to achieve this goal. However, this model ignores the structural information between variables (e.g., gene interaction graph). Although typical graph-regularized norm can incorporate such prior graph information to get accurate discovery and better interpretability, it fails to consider the opposite effect of variables with different signs. Motivated by the development of sparse coding and graph-regularized norm, we propose a novel sparse graph-regularized SVD as a powerful biclustering tool for analyzing high-dimensional data. The key of this method is to impose two penalties including a novel graph-regularized norm ( @math ) and @math -norm ( @math ) on singular vectors to induce structural sparsity and enhance interpretability. We design an efficient Alternating Iterative Sparse Projection (AISP) algorithm to solve it. Finally, we apply our method and related ones to simulated and real data to show its efficiency in capturing natural blocking structures. | (3) The relationship between SVD and PCA. As we all know, principal component analysis (PCA) can be efficiently solved by using SVD. However, the identified non-sparse principal components can sometimes be difficult to interpret. To solve it, recent studies have developed several different sparse PCA models @cite_16 @cite_6 @cite_12 @cite_11 . However, it is difficult to develop effective algorithms for solving these sparse PCA models. There is a class of commonly used methods based on regularized SVD for solving them @cite_12 . Moreover, Witten proposed another model based on regularized SVD to ensure the orthogonality of the left singular vectors. | {
"cite_N": [
"@cite_16",
"@cite_12",
"@cite_6",
"@cite_11"
],
"mid": [
"2949962875",
"2044809283",
"2101780216",
"2098290597"
],
"abstract": [
"In sparse principal component analysis we are given noisy observations of a low-rank matrix of dimension @math and seek to reconstruct it under additional sparsity assumptions. In particular, we assume here each of the principal components @math has at most @math non-zero entries. We are particularly interested in the high dimensional regime wherein @math is comparable to, or even much larger than @math . In an influential paper, johnstone2004sparse introduced a simple algorithm that estimates the support of the principal vectors @math by the largest entries in the diagonal of the empirical covariance. This method can be shown to identify the correct support with high probability if @math , and to fail with high probability if @math for two constants @math . Despite a considerable amount of work over the last ten years, no practical algorithm exists with provably better support recovery guarantees. Here we analyze a covariance thresholding algorithm that was recently proposed by KrauthgamerSPCA . On the basis of numerical simulations (for the rank-one case), these authors conjectured that covariance thresholding correctly recover the support with high probability for @math (assuming @math of the same order as @math ). We prove this conjecture, and in fact establish a more general guarantee including higher-rank as well as @math much smaller than @math . Recent lower bounds berthet2013computational, ma2015sum suggest that no polynomial time algorithm can do significantly better. The key technical component of our analysis develops new bounds on the norm of kernel random matrices, in regimes that were not considered before.",
"Principal component analysis (PCA) is a widely used tool for data analysis and dimension reduction in applications throughout science and engineering. However, the principal components (PCs) can sometimes be difficult to interpret, because they are linear combinations of all the original variables. To facilitate interpretation, sparse PCA produces modified PCs with sparse loadings, i.e. loadings with very few non-zero elements. In this paper, we propose a new sparse PCA method, namely sparse PCA via regularized SVD (sPCA-rSVD). We use the connection of PCA with singular value decomposition (SVD) of the data matrix and extract the PCs through solving a low rank matrix approximation problem. Regularization penalties are introduced to the corresponding minimization problem to promote sparsity in PC loadings. An efficient iterative algorithm is proposed for computation. Two tuning parameter selection methods are discussed. Some theoretical results are established to justify the use of sPCA-rSVD when only the data covariance matrix is available. In addition, we give a modified definition of variance explained by the sparse PCs. The sPCA-rSVD provides a uniform treatment of both classical multivariate data and high-dimension-low-sample-size (HDLSS) data. Further understanding of sPCA-rSVD and some existing alternatives is gained through simulation studies and real data examples, which suggests that sPCA-rSVD provides competitive results.",
"In this paper, we study the estimation of the k-dimensional sparse principal sub-space of covariance matrix Σ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank-k, and attains a √s n statistical rate of convergence with s being the subspace sparsity level and n the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that, another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.",
"SUMMARY We present a penalized matrix decomposition (PMD), a new framework for computing a rank-K approximation for a matrix. We approximate the matrix X as ˆ X = � K=1 dkukv T , where dk, uk, and"
]
} |
1603.06159 | 2306875213 | We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form @math . Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss a Polyak's special class of nonconvex problems for which SAGA converges at a linear rate to the global optimum. Finally, we analyze the practically valuable regularized and minibatch variants of SAGA. To our knowledge, this paper presents the first analysis of fast convergence for an incremental aggregated gradient method for nonconvex problems. | A concise survey of incremental gradient methods is @cite_19 . An accessible analysis of stochastic convex optimization ( @math ) is @cite_15 . Classically, stems from the seminal work @cite_0 , and has since witnessed many developments @cite_20 , including parallel and distributed variants @cite_5 @cite_9 @cite_23 , though non-asymptotic convergence analysis is limited to convex setups. Faster rates for convex problems in @math are attained by variance reduced stochastic methods, e.g., @cite_13 @cite_22 @cite_14 @cite_6 @cite_21 . Linear convergence of stochastic dual coordinate ascent when @math ( @math ) may be nonconvex but @math is strongly convex is studied in @cite_17 . Lower bounds for convex finite-sum problems are studied in @cite_4 . | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"1791038712",
"2952664667",
"2107438106",
"2952033860",
"2962798535",
"1939652453",
"1994616650",
"2134130436",
"2138243089",
"1603765807",
"1992208280",
"2135482703",
"",
""
],
"abstract": [
"We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1 k^ 1 2 ) to O(1 k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1 k) to a linear convergence rate of the form O(p^k) for p 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.",
"This paper presents a lower bound for optimizing a finite sum of @math functions, where each function is @math -smooth and the sum is @math -strongly convex. We show that no algorithm can reach an error @math in minimizing all functions from this class in fewer than @math iterations, where @math is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, then we further contrast these complexity results with those for optimal first-order methods to directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and identify machine learning scenarios where the new methods help computationally.",
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to the development of gradient-based distributed optimization algorithms where a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. In application to distributed optimization, we develop procedures that overcome communication bottlenecks and synchronization requirements. We show @math -node architectures whose optimization error in stochastic problems---in spite of asynchronous delays---scales asymptotically as @math after @math iterations. This rate is known to be optimal for a distributed system with @math nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a statistical machine learning task.",
"We study optimization algorithms based on variance reduction for stochastic gradient descent (SGD). Remarkable recent progress has been made in this direction through development of algorithms like SAG, SVRG, SAGA. These algorithms have been shown to outperform SGD, both theoretically and empirically. However, asynchronous versions of these algorithms—a crucial requirement for modern large-scale applications—have not been studied. We bridge this gap by presenting a unifying framework for many variance reduction techniques. Subsequently, we propose an asynchronous algorithm grounded in our framework, and prove its fast convergence. An important consequence of our general approach is that it yields asynchronous versions of variance reduction algorithms such as SVRG and SAGA as a byproduct. Our method achieves near linear speedup in sparse settings common to machine learning. We demonstrate the empirical performance of our method through a concrete realization of asynchronous SVRG.",
"Stochastic Gradient Descent (SGD) has become popular for solving large scale supervised machine learning optimization problems such as SVM, due to their strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA) showing that this class of methods enjoy strong theoretical guarantees that are comparable or better than SGD. This analysis justifies the effectiveness of SDCA for practical applications.",
"Let M(x) denote the expected value at level x of the response to a certain experiment. M(x) is assumed to be a monotone function of x but is unknown to the experimenter, and it is desired to find the solution x = θ of the equation M(x) = α, where a is a given constant. We give a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability.",
"We survey incremental methods for minimizing a sum P m=1 fi(x) consisting of a large number of convex component functions fi. Our methods consist of iterations applied to single components, and have proved very effective in practice. We introduce a unified algorithmic framework for a variety of such methods, some involving gradient and subgradient iterations, which are known, and some involving combinations of subgradient and proximal methods, which are new and offer greater flexibility in exploiting the special structure of fi. We provide an analysis of the convergence and rate of convergence properties of these methods, including the advantages offered by randomization in the selection of components. We also survey applications in inference machine learning, signal processing, and large-scale and distributed optimization.",
"Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.",
"",
"In this paper we consider optimization problems where the objective function is given in a form of the expectation. A basic difficulty of solving such stochastic optimization problems is that the involved multidimensional integrals (expectations) cannot be computed with high accuracy. The aim of this paper is to compare two computational approaches based on Monte Carlo sampling techniques, namely, the stochastic approximation (SA) and the sample average approximation (SAA) methods. Both approaches, the SA and SAA methods, have a long history. Current opinion is that the SAA method can efficiently use a specific (say, linear) structure of the considered problem, while the SA approach is a crude subgradient method, which often performs poorly in practice. We intend to demonstrate that a properly modified SA approach can be competitive and even significantly outperform the SAA method for a certain class of convex stochastic problems. We extend the analysis to the case of convex-concave stochastic saddle point problems and present (in our opinion highly encouraging) results of numerical experiments.",
"In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.",
"",
""
]
} |
1603.06159 | 2306875213 | We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form @math . Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss a Polyak's special class of nonconvex problems for which SAGA converges at a linear rate to the global optimum. Finally, we analyze the practically valuable regularized and minibatch variants of SAGA. To our knowledge, this paper presents the first analysis of fast convergence for an incremental aggregated gradient method for nonconvex problems. | For nonconvex nonsmooth problems the first incremental proximal-splitting method is in @cite_16 , though only asymptotic convergence is studied. Hong @cite_8 studies convergence of a distributed nonconvex incremental ADMM algorithm. The first work to present non-asymptotic convergence rates for is @cite_3 ; this work presents an @math iteration bound for to satisfy approximate stationarity @math , and their convergence criterion is motivated by the gradient descent analysis of Nesterov @cite_7 . The first analysis for nonconvex variance reduced stochastic gradient is due to @cite_10 , who apply it to the specific problem of principal component analysis (PCA). | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_16",
"@cite_10"
],
"mid": [
"2124541940",
"104095019",
"2963470657",
"2162361955",
"1871665132"
],
"abstract": [
"It was in the middle of the 1980s, when the seminal paper by Kar markar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear op timization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical pre diction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and direc tions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly develop ing field, which got the name \"polynomial-time interior-point methods\", such a justification was obligatory. Afteralmost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].",
"The alternating direction method of multipliers (ADMM) has been popular for solving many signal processing problems, convex or nonconvex. In this paper, we study an asynchronous implementation of the ADMM for solving a nonconvex nonsmooth optimization problem, whose objective is the sum of a number of component functions. The proposed algorithm allows the problem to be solved in a distributed, asynchronous and incremental manner. First, the component functions can be distributed to different computing nodes, who perform the updates asynchronously without coordinating with each other. Two sources of asynchrony are covered by our algorithm: one is caused by the heterogeneity of the computational nodes, and the other arises from unreliable communication links. Second, the algorithm can be viewed as implementing an incremental algorithm where at each step the (possibly delayed) gradients of only a subset of component functions are update d. We show that when certain bounds are put on the level of asynchrony, the proposed algorithm converges to the set of stationary solutions (resp. optimal solutions) for the nonconvex (resp. convex) problem. To the best of our knowledge, the proposed ADMM implementation can tolerate the highest degree of asynchrony, among all known asynchronous variants of the ADMM. Moreover, it is the first ADMM implementation that can deal with nonconvexity and asynchrony at the same time.",
"In this paper, we introduce a new stochastic approximation type algorithm, namely, the randomized stochastic gradient (RSG) method, for solving an important class of nonlinear (possibly nonconvex) stochastic programming problems. We establish the complexity of this method for computing an approximate stationary point of a nonlinear programming problem. We also show that this method possesses a nearly optimal rate of convergence if the problem is convex. We discuss a variant of the algorithm which consists of applying a postoptimization phase to evaluate a short list of solutions generated by several independent runs of the RSG method, and we show that such modification allows us to improve significantly the large-deviation properties of the algorithm. These methods are then specialized for solving a class of simulation-based optimization problems in which only stochastic zeroth-order information is available.",
"We study a class of large-scale, nonsmooth, and nonconvex optimization problems. In particular, we focus on nonconvex problems with composite objectives. This class includes the extensively studied class of convex composite objective problems as a subclass. To solve composite nonconvex problems we introduce a powerful new framework based on asymptotically nonvanishing errors, avoiding the common stronger assumption of vanishing errors. Within our new framework we derive both batch and incremental proximal splitting algorithms. To our knowledge, our work is first to develop and analyze incremental nonconvex proximal-splitting algorithms, even if we were to disregard the ability to handle nonvanishing errors. We illustrate one instance of our general framework by showing an application to large-scale nonsmooth matrix factorization.",
"We describe and analyze a simple algorithm for principal component analysis and singular value decomposition, VR-PCA, which uses computationally cheap stochastic iterations, yet converges exponentially fast to the optimal solution. In contrast, existing algorithms suffer either from slow convergence, or computationally intensive iterations whose runtime scales with the data size. The algorithm builds on a recent variance-reduced stochastic gradient technique, which was previously analyzed for strongly convex optimization, whereas here we apply it to an inherently non-convex problem, using a very different analysis."
]
} |
1603.05631 | 2952134811 | Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations. | Unsupervised learning of visual representation is one of the most challenging problems in computer vision. There are two primary approaches to unsupervised learning. The first is the discriminative approach where we use auxiliary tasks such that ground truth can be generated without labeling. Some examples of these auxiliary tasks include predicting: the relative location of two patches @cite_43 , ego-motion in videos @cite_58 @cite_53 , physical signals @cite_11 @cite_20 @cite_7 . | {
"cite_N": [
"@cite_7",
"@cite_53",
"@cite_43",
"@cite_58",
"@cite_20",
"@cite_11"
],
"mid": [
"2338684808",
"2198618282",
"343636949",
"2951590555",
"2949098821",
""
],
"abstract": [
"What is the right supervisory signal to train visual representations? Current approaches in computer vision use category labels from datasets such as ImageNet to train ConvNets. However, in case of biological agents, visual representation learning does not require millions of semantic labels. We argue that biological agents use physical interactions with the world to learn visual representations unlike current vision systems which just use passive observations (images and videos downloaded from web). For example, babies push objects, poke them, put them in their mouth and throw them to learn representations. Towards this goal, we build one of the first systems on a Baxter platform that pushes, pokes, grasps and observes objects in a tabletop environment. It uses four different types of physical interactions to collect more than 130K datapoints, with each datapoint providing supervision to a shared ConvNet architecture allowing us to learn visual representations. We show the quality of learned representations by observing neuron activations and performing nearest neighbor retrieval on this learned representation. Quantitatively, we evaluate our learned ConvNet on image classification tasks and show improvements compared to learning without external data. Finally, on the task of instance retrieval, our network outperforms the ImageNet network on recall@1 by 3%.",
"Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e, they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework [19] and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping.",
""
]
} |
1603.05631 | 2952134811 | Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations. | A more common approach to unsupervised learning is to use a generative framework. Two types of generative frameworks have been used in the past. Non-parametric approaches perform matching of an image or patch with the database for tasks such as texture synthesis @cite_64 or super-resolution @cite_40 . In this paper, we are interested in developing a parametric model of images. One common approach is to learn a low-dimensional representation which can be used to reconstruct an image. Some examples include the deep auto-encoder @cite_60 @cite_47 or Restricted Boltzmann machines (RBMs) @cite_48 @cite_29 @cite_12 @cite_44 @cite_30 . However, in most of the above scenarios it is hard to generate new images since sampling in latent space is not an easy task. The recently proposed Variational auto-encoders (VAE) @cite_50 @cite_36 tackles this problem by generating images with variational sampling approach. However, these approaches are restricted to simple datasets such as MNIST. 
To generate interpretable images with richer information, the VAE is extended to be conditioned on captions @cite_31 and graphics code @cite_42 . Besides RBMs and auto-encoders, there are also many novel generative models in recent literature @cite_9 @cite_32 @cite_0 @cite_15 . For example, @cite_9 proposed to use CNNs to generate chairs. | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_64",
"@cite_60",
"@cite_36",
"@cite_48",
"@cite_29",
"@cite_42",
"@cite_9",
"@cite_32",
"@cite_15",
"@cite_44",
"@cite_0",
"@cite_40",
"@cite_50",
"@cite_47",
"@cite_12"
],
"mid": [
"2158164339",
"2155292833",
"2116013899",
"2110798204",
"1850742715",
"2161000554",
"2134653808",
"",
"1893585201",
"2263714001",
"2953318193",
"2130325614",
"2953250761",
"",
"",
"2950789693",
"2100495367"
],
"abstract": [
"We propose a non-linear generative model for human motion data that uses an undirected model with binary latent variables and real-valued \"visible\" variables that represent joint angles. The latent and visible variables at each time step receive directed connections from the visible variables at the last few time-steps. Such an architecture makes on-line inference efficient and allows us to use a simple approximate learning procedure. After training, the model finds a single set of parameters that simultaneously capture several different kinds of motion. We demonstrate the power of our approach by synthesizing various motion sequences and by performing on-line filling in of data lost during motion capture.",
"Motivated by the recent progress in generative models, we introduce a model that generates images from natural language descriptions. The proposed model iteratively draws patches on a canvas, while attending to the relevant words in the description. After training on Microsoft COCO, we compare our model with several baseline generative models on image generation and retrieval tasks. We demonstrate that our model produces higher quality samples than other approaches and generates images with novel scene compositions corresponding to previously unseen captions in the dataset.",
"A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.",
"Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allowing them to compactly represent highly non-linear and highly-varying functions. However, until recently it was not clear how to train such deep networks, since gradient-based optimization starting from random initialization appears to often get stuck in poor solutions. recently introduced a greedy layer-wise unsupervised learning algorithm for Deep Belief Networks (DBN), a generative model with many layers of hidden causal variables. In the context of the above optimization problem, we study this algorithm empirically and explore variants to better understand its success and extend it to cases where the inputs are continuous or where the structure of the input distribution is not revealing enough about the variable to be predicted in a supervised task. Our experiments also confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"Deep belief nets have been successful in modeling handwritten characters, but it has proved more difficult to apply them to real images. The problem lies in the restricted Boltzmann machine (RBM) which is used as a module for learning deep belief nets one layer at a time. The Gaussian-Binary RBMs that have been used to model real-valued data are not a good way to model the covariance structure of natural images. We propose a factored 3-way RBM that uses the states of its hidden units to represent abnormalities in the local covariance structure of an image. This provides a probabilistic framework for the widely used simple/complex cell architecture. Our model learns binary features that work very well for object recognition on the “tiny images” data set. Even better features are obtained by then using standard binary RBM’s to learn a deeper model.",
"We describe an efficient learning procedure for multilayer generative models that combine the best aspects of Markov random fields and deep, directed belief nets. The generative models can be learned one layer at a time and when learning is complete they have a very fast inference procedure for computing a good approximation to the posterior distribution in all of the hidden layers. Each hidden layer has its own MRF whose energy function is modulated by the top-down directed connections from the layer above. To generate from the model, each layer in turn must settle to equilibrium given its top-down input. We show that this type of model is good at capturing the statistics of patches of natural images.",
"",
"We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.",
"We present a convolutional network capable of generating images of a previously unseen object from arbitrary viewpoints given a single image of this object. The input to the network is a single image and the desired new viewpoint; the output is a view of the object from this desired viewpoint. The network is trained on renderings of synthetic 3D models. It learns an implicit 3D representation of the object class, which allows it to transfer shape knowledge from training instances to a new object instance. Beside the color image, the network can also generate the depth map of an object from arbitrary viewpoints. This allows us to predict 3D point clouds from a single image, which can be fused into a surface mesh. We experimented with cars and chairs. Even though the network is trained on artificial data, it generalizes well to objects in natural images without any modifications.",
"Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.",
"There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks. Scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model which scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique which shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multi-dimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.",
"",
"",
"We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting with these learned features, we trained our network to obtain 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a leap of 70% relative improvement over the previous state-of-the-art.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1603.05631 | 2952134811 | Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations. | In this work, we build our model based on the Generative Adversarial Networks (GANs) framework proposed by @cite_21 . This framework was extended by @cite_22 to generate images. Specifically, they proposed to use a Laplacian pyramid of adversarial networks to generate images in a coarse to fine scheme. However, training these networks is still tricky and unstable. Therefore, an extension DCGAN @cite_39 proposed good practices for training adversarial networks and demonstrated promising results in generating images. There are more extensions include using conditional variables @cite_57 @cite_59 @cite_1 . For instance, @cite_59 introduced to predict future video frames conditioned on the previous frames. In this paper, we further simplify the image generation process by factoring out the generation of 3D structure and style. | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_1",
"@cite_39",
"@cite_57",
"@cite_59"
],
"mid": [
"2951523806",
"2099471712",
"2949933669",
"2173520492",
"2125389028",
"2248556341"
],
"abstract": [
"In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach. Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"(2015) showed that optimizing pixels to match features in a convolutional network with respect to reference image features is a way to render images of high visual quality. We show that unrolling this gradient-based optimization yields a recurrent computation that creates images by incrementally adding onto a visual \"canvas\". We propose a recurrent generative model inspired by this view, and show that it can be trained using adversarial training to generate very good image samples. We also propose a way to quantitatively compare adversarial networks by having the generators and discriminators of these networks compete against each other.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.",
"Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a very studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from the knowledge of the next frames of videos, that does not require the complexity of tracking every pixel trajectories. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset"
]
} |
1603.05631 | 2952134811 | Current generative frameworks use end-to-end learning and generate images by sampling from uniform noise distribution. However, these approaches ignore the most basic principle of image formation: images are product of: (a) Structure: the underlying 3D model; (b) Style: the texture mapped onto structure. In this paper, we factorize the image generation process and propose Style and Structure Generative Adversarial Network (S^2-GAN). Our S^2-GAN has two components: the Structure-GAN generates a surface normal map; the Style-GAN takes the surface normal map as input and generates the 2D image. Apart from a real vs. generated loss function, we use an additional loss with computed surface normals from generated images. The two GANs are first trained independently, and then merged together via joint learning. We show our S^2-GAN model is interpretable, generates more realistic images and can be used to learn unsupervised RGBD representations. | In order to train our @math -GAN we combine adversarial loss with 3D surface normal prediction loss @cite_38 @cite_41 @cite_33 @cite_25 to provide extra constraints during learning. This is also related to the idea of combining multiple losses for better generative modeling @cite_65 @cite_63 @cite_6 . For example, @cite_65 proposed an adversarial auto-encoder which takes the adversarial loss as an extra constraint for the latent code during training the auto-encoder. Finally, the idea of factorizing image into two separate phenomena has been well studied in @cite_49 @cite_62 @cite_2 @cite_13 , which motivates us to decompose the generative process to structure and style. We use the RGBD data from NYUv2 to factorize and learn a @math -GAN model. | {
"cite_N": [
"@cite_38",
"@cite_62",
"@cite_33",
"@cite_41",
"@cite_65",
"@cite_2",
"@cite_6",
"@cite_63",
"@cite_49",
"@cite_13",
"@cite_25"
],
"mid": [
"2952623155",
"",
"2146814781",
"2951713345",
"",
"2259631822",
"2259643685",
"2202109488",
"",
"",
"337610345"
],
"abstract": [
"In the past few years, convolutional neural nets (CNN) have shown incredible promise for learning visual representations. In this paper, we use CNNs for the task of predicting surface normals from a single image. But what is the right architecture we should use? We propose to build upon the decades of hard work in 3D scene understanding, to design new CNN architecture for the task of surface normal estimation. We show by incorporating several constraints (man-made, manhattan world) and meaningful intermediate representations (room layout, edge labels) in the architecture leads to state of the art performance on surface normal estimation. We also show that our network is quite robust and show state of the art results on other datasets as well without any fine-tuning.",
"",
"What primitives should we use to infer the rich 3D world behind an image? We argue that these primitives should be both visually discriminative and geometrically informative and we present a technique for discovering such primitives. We demonstrate the utility of our primitives by using them to infer 3D surface normals given a single image. Our technique substantially outperforms the state-of-the-art and shows improved cross-dataset performance.",
"In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling. We use a multiscale convolutional network that is able to adapt easily to each task using only small modifications, regressing from the input image to the output map directly. Our method progressively refines predictions using a sequence of scales, and captures many image details without any superpixels or low-level segmentation. We achieve state-of-the-art performance on benchmarks for all three tasks.",
"",
"Do we really need 3D labels in order to learn how to predict 3D? In this paper, we show that one can learn a mapping from appearance to 3D properties without ever seeing a single explicit 3D label. Rather than use explicit supervision, we use the regularity of indoor scenes to learn the mapping in a completely unsupervised manner. We demonstrate this on both a standard 3D scene understanding dataset as well as Internet images for which 3D is unavailable, precluding supervised learning. Despite never seeing a 3D label, our method produces competitive results.",
"Image-generating machine learning models are typically trained with loss functions based on distance in the image space. This often leads to over-smoothed results. We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric better reflects perceptual similarity of images and thus leads to better results. We show three applications: autoencoder training, a modification of a variational autoencoder, and inversion of deep convolutional networks. In all cases, the generated images look sharp and resemble natural images.",
"We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder with a generative adversarial network we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.",
"",
"",
"In this work we propose the method for a rather unexplored problem of computer vision - discriminatively trained dense surface normal estimation from a single image. Our method combines contextual and segment-based cues and builds a regressor in a boosting framework by transforming the problem into the regression of coefficients of a local coding. We apply our method to two challenging data sets containing images of man-made environments, the indoor NYU2 data set and the outdoor KITTI data set. Our surface normal predictor achieves results better than initially expected, significantly outperforming state-of-the-art."
]
} |
1603.05614 | 2300836882 | Submodular maximization problems belong to the family of combinatorial optimization problems and enjoy wide applications. In this paper, we focus on the problem of maximizing a monotone submodular function subject to a @math -knapsack constraint, for which we propose a streaming algorithm that achieves a @math -approximation of the optimal value, while it only needs one single pass through the dataset without storing all the data in the memory. In our experiments, we extensively evaluate the effectiveness of our proposed algorithm via two applications: news recommendation and scientific literature recommendation. It is observed that the proposed streaming algorithm achieves both execution speedup and memory saving by several orders of magnitude, compared with existing approaches. | Further, the authors in @cite_4 dealt with the case when @math and each entry of @math can take any positive values. Maximizing a monotone submodular function under a single knapsack constraint is also called a budgeted submodular maximization problem. This problem is also NP-hard, and the authors in @cite_0 suggested a greedy algorithm, which produces a @math -approximation of the optimal value with @math computation complexity. Specifically, it first enumerates all the subsets of cardinalities at most three, then greedily adds the elements with maximum marginal values per weight to every subset starting with three elements, and finally outputs the suboptimal subset. Although the solution has a @math -approximation guarantee, the @math computation cost prevents this greedy algorithm from being widely used in practice. Hence some modified versions of the greedy algorithm have been developed. The authors in @cite_4 applied it to document summarization with a @math performance guarantee. 
In @cite_9 , the so-called cost effective forward (CEF) algorithm for outbreak detection was proposed, which produces a solution with a @math -approximation guarantee and requires only @math computation complexity, where @math is the knapsack budget when @math . | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_4"
],
"mid": [
"2033885045",
"",
"1962684803"
],
"abstract": [
"In this paper, we obtain an (1-e^-^1)-approximation algorithm for maximizing a nondecreasing submodular set function subject to a knapsack constraint. This algorithm requires O(n^5) function value computations.",
"",
"We treat the text summarization problem as maximizing a submodular function under a budget constraint. We show, both theoretically and empirically, a modified greedy algorithm can efficiently solve the budgeted submodular maximization problem near-optimally, and we derive new approximation bounds in doing so. Experiments on DUC'04 task show that our approach is superior to the best-performing method from the DUC'04 evaluation on ROUGE-1 scores."
]
} |
1603.05544 | 2949615858 | SGD is the widely adopted method to train CNN. Conceptually it approximates the population with a randomly sampled batch; then it evenly trains batches by conducting a gradient update on every batch in an epoch. In this paper, we demonstrate that Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo Random Sampling differentiate batches in training, which then affects learning speeds on them. Because of this, the unbiased treatment of batches involved in SGD creates improper load balancing. To address this issue, we present Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training effort according to learning statuses on batches. Specifically, ISGD leverages techniques in Statistical Process Control to identify an undertrained batch. Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic plus a conservative constraint, to accelerate the training on the batch while avoiding drastic parameter changes. Extensive experiments on a variety of datasets demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is 21.05% faster than SGD to reach 56% top1 accuracy under exactly the same experiment setup. We also extend ISGD to work on multiGPU or heterogeneous distributed systems based on data parallelism, enabling the batch size to be the key to scalability. Then we present a study of the ISGD batch size with respect to the learning rate, parallelism, synchronization cost, system saturation and scalability. We conclude the optimal ISGD batch size is machine dependent. Various experiments on a multiGPU system validate our claim. In particular, ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4 NVIDIA TITAN X at the batch size of 1536. | The stochastic sampling in SGD introduces gradient variance, which slows down the convergence rate @cite_32 . This problem motivates researchers to apply various variance reduction techniques to SGD to improve the convergence rate.
Stochastic Variance Reduced Gradient (SVRG) @cite_22 keeps the network's historical parameters and gradients to explicitly reduce the variance of the update rule, but the authors indicate that SVRG only works well for the fine-tuning of non-convex neural networks. The authors in @cite_23 explore control variates on SGD, while Zhao and Zhang @cite_28 explore importance sampling. These variance reduction techniques, however, are rarely used in large-scale neural networks, as they consume huge amounts of RAM to store the intermediate variables. ISGD adjusts to the negative effect of gradient variance, and since it does not construct auxiliary variables, it is much more memory efficient and practical than the variance reduction methods. | {
"cite_N": [
"@cite_28",
"@cite_22",
"@cite_32",
"@cite_23"
],
"mid": [
"1512309675",
"2107438106",
"",
"2145832734"
],
"abstract": [
"Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Gradient Descent (prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a rather high variance, which negatively affects the convergence of the underlying optimization procedure. In this paper we study stochastic optimization with importance sampling, which improves the convergence rate by reducing the stochastic variance. Specifically, we study prox-SGD (actually, stochastic mirror descent) with importance sampling and prox-SDCA with importance sampling. For prox-SGD, instead of adopting uniform sampling throughout the training process, the proposed algorithm employs importance sampling to minimize the variance of the stochastic gradient. For prox-SDCA, the proposed importance sampling scheme aims to achieve higher expected dual value at each dual coordinate ascent step. We provide extensive theoretical analysis to show that the convergence rates with the proposed importance sampling methods can be significantly improved under suitable conditions both for prox-SGD and for prox-SDCA. Experiments are provided to verify the theoretical analysis.",
"Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.",
"",
"Stochastic gradient optimization is a class of widely used algorithms for training machine learning models. To optimize an objective, it uses the noisy gradient computed from the random data samples instead of the true gradient computed from the entire dataset. However, when the variance of the noisy gradient is large, the algorithm might spend much time bouncing around, leading to slower convergence and worse performance. In this paper, we develop a general approach of using control variate for variance reduction in stochastic gradient. Data statistics such as low-order moments (pre-computed or estimated online) is used to form the control variate. We demonstrate how to construct the control variate for two practical problems using stochastic gradient optimization. One is convex—the MAP estimation for logistic regression, and the other is non-convex—stochastic variational inference for latent Dirichlet allocation. On both problems, our approach shows faster convergence and better performance than the classical approach."
]
} |
1603.05544 | 2949615858 | SGD is the widely adopted method to train CNN. Conceptually it approximates the population with a randomly sampled batch; then it evenly trains batches by conducting a gradient update on every batch in an epoch. In this paper, we demonstrate that Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo Random Sampling differentiate batches in training, which then affects learning speeds on them. Because of this, the unbiased treatment of batches involved in SGD creates improper load balancing. To address this issue, we present Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training effort according to learning statuses on batches. Specifically, ISGD leverages techniques in Statistical Process Control to identify an undertrained batch. Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic plus a conservative constraint, to accelerate the training on the batch while avoiding drastic parameter changes. Extensive experiments on a variety of datasets demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is 21.05% faster than SGD to reach 56% top1 accuracy under exactly the same experiment setup. We also extend ISGD to work on multiGPU or heterogeneous distributed systems based on data parallelism, enabling the batch size to be the key to scalability. Then we present a study of the ISGD batch size with respect to the learning rate, parallelism, synchronization cost, system saturation and scalability. We conclude the optimal ISGD batch size is machine dependent. Various experiments on a multiGPU system validate our claim. In particular, ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4 NVIDIA TITAN X at the batch size of 1536. | @cite_33 is a widely recognized heuristic to boost SGD. SGD oscillates across the narrow ravine as the gradient always points to the other side instead of along the ravine toward the optimum.
As a result, it tends to bounce around, leading to slow convergence. damps oscillations in directions of high curvature by combining gradients with opposite signs, and it builds up speed toward a direction that is consistent with the previously accumulated gradients @cite_2 . The update rule of is similar to @cite_30 , but the slightly different mechanism for building the velocity results in important behavioral differences. Momentum steps in the direction of the accumulated gradient plus the current gradient. In contrast, steps along the previously accumulated gradient, then measures the gradient before making a correction. This prevents the update from descending too fast, thereby increasing the responsiveness. ISGD is fundamentally different from these approaches in that it considers the training dynamics on batches. ISGD rebalances the training effort across batches, while and leverage curvature tricks. Therefore, the inconsistent training is expected to be compatible with both methods. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_2"
],
"mid": [
"2252143850",
"2061863621",
"104184427"
],
"abstract": [
"Recurrent Neural Networks (RNNs) are powerful sequence models that were believed to be difficult to train, and as a result they were rarely used in machine learning applications. This thesis presents methods that overcome the difficulty of training RNNs, and applications of RNNs to challenging problems. We first describe a new probabilistic sequence model that combines Restricted Boltzmann Machines and RNNs. The new model is more powerful than similar models while being less difficult to train. Next, we present a new variant of the Hessian-free (HF) optimizer and show that it can train RNNs on tasks that have extreme long-range temporal dependencies, which were previously considered to be impossibly hard. We then apply HF to character-level language modelling and get excellent results. We also apply HF to optimal control and obtain RNN control laws that can successfully operate under conditions of delayed feedback and unknown disturbances. Finally, we describe a random parameter initialization scheme that allows gradient descent with momentum to train RNNs on problems with long-term dependencies. This directly contradicts widespread beliefs about the inability of first-order methods to do so, and suggests that previous attempts at training RNNs failed partly due to flaws in the random initialization.",
"We consider an incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions. This method uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made. We show that if the gradients of the functions are bounded and Lipschitz continuous over a certain level set, then every cluster point of the iterates generated by the method is a stationary point. In addition, if the gradient of the functions have a certain growth property, then the method is either linearly convergent in some sense or the stepsizes are bounded away from zero. The new stepsize rule is much in the spirit of heuristic learning rules used in practice for training neural networks via backpropagation. As such, the new stepsize rule may suggest improvements on existing learning rules. Finally, extension of the method and the convergence results to constrained minimization is discussed, as are some implementation issues and numerical experience.",
"Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods."
]
} |
1603.05544 | 2949615858 | SGD is the widely adopted method to train CNN. Conceptually it approximates the population with a randomly sampled batch; then it evenly trains batches by conducting a gradient update on every batch in an epoch. In this paper, we demonstrate that Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo Random Sampling differentiate batches in training, which then affects learning speeds on them. Because of this, the unbiased treatment of batches involved in SGD creates improper load balancing. To address this issue, we present Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training effort according to learning statuses on batches. Specifically, ISGD leverages techniques in Statistical Process Control to identify an undertrained batch. Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic plus a conservative constraint, to accelerate the training on the batch while avoiding drastic parameter changes. Extensive experiments on a variety of datasets demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is 21.05% faster than SGD to reach 56% top1 accuracy under exactly the same experiment setup. We also extend ISGD to work on multiGPU or heterogeneous distributed systems based on data parallelism, enabling the batch size to be the key to scalability. Then we present a study of the ISGD batch size with respect to the learning rate, parallelism, synchronization cost, system saturation and scalability. We conclude the optimal ISGD batch size is machine dependent. Various experiments on a multiGPU system validate our claim. In particular, ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4 NVIDIA TITAN X at the batch size of 1536. | @cite_20 adapts the learning rate to the parameters, performing larger updates for infrequent parameters and smaller updates for frequent parameters. It accumulates the squared gradients in the denominator, which will eventually shrink the learning rate drastically.
Subsequently, and have been developed to resolve the issue. These adaptive learning rate approaches adjust the extent of parameter updates w.r.t. the parameter's update frequency to increase the robustness of training, while ISGD adjusts the frequency of a batch's gradient updates w.r.t. the loss to improve the training efficiency. From this perspective, ISGD is different from the adaptive learning rate approaches. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2146502635"
],
"abstract": [
"We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. We give several efficient algorithms for empirical risk minimization problems with common and important regularization functions and domain constraints. We experimentally study our theoretical analysis and show that adaptive subgradient methods outperform state-of-the-art, yet non-adaptive, subgradient algorithms."
]
} |
1603.05310 | 2300609662 | In this paper, we propose a novel framework for dynamical analysis of human actions from 3D motion capture data using topological data analysis. We model human actions using the topological features of the attractor of the dynamical system. We reconstruct the phase-space of time series corresponding to actions using time-delay embedding, and compute the persistent homology of the phase-space reconstruction. In order to better represent the topological properties of the phase-space, we incorporate the temporal adjacency information when computing the homology groups. The persistence of these homology groups encoded using persistence diagrams are used as features for the actions. Our experiments with action recognition using these features demonstrate that the proposed approach outperforms other baseline methods. | In section , we introduce the theoretical concepts of phase-space reconstruction and persistent homology. The feature which encodes the temporal evolution information in the persistence diagrams will be introduced in section . In section , we present our experimental results on the motion capture dataset @cite_28 . | {
"cite_N": [
"@cite_28"
],
"mid": [
"2116931983"
],
"abstract": [
"The paper introduces an action recognition framework that uses concepts from the theory of chaotic systems to model and analyze nonlinear dynamics of human actions. Trajectories of reference joints are used as the representation of the non-linear dynamical system that is generating the action. Each trajectory is then used to reconstruct a phase space of appropriate dimension by employing a delay-embedding scheme. The properties of the reconstructed phase space are captured in terms of dynamical and metric invariants that include Lyapunov exponent, correlation integral and correlation dimension. Finally, the action is represented by a feature vector which is a combination of these invariants over all the reference trajectories. Our contributions in this paper include :1) investigation of the appropriateness of theory of chaotic systems for human action modelling and recognition, 2) a new set of features to characterize nonlinear dynamics of human actions, 3) experimental validation of the feasibility and potential merits of carrying out action recognition using methods from theory of chaotic systems."
]
} |
1603.04882 | 2298460580 | We propose an approach to reduce the bias of ridge regression and regularization kernel network. When applied to a single data set the new algorithms have comparable learning performance with the original ones. When applied to incremental learning with block wise streaming data the new algorithms are more efficient due to bias reduction. Both theoretical characterizations and simulation studies are used to verify the effectiveness of these new algorithms. | The idea of bias correction has a long history in statistics. For instance, bias correction to maximum likelihood estimation dates back at least to the 1950s @cite_39 , and a variety of methods were proposed later on; see e.g. @cite_29 @cite_5 @cite_19 @cite_23 . Bias reduction for kernel density estimators was studied in @cite_35 @cite_37 @cite_42 @cite_7 . Bias correction for nonparametric estimation was studied in @cite_38 @cite_26 @cite_24 .
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_38",
"@cite_26",
"@cite_7",
"@cite_29",
"@cite_42",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_5"
],
"mid": [
"2008572210",
"",
"2047314038",
"",
"",
"2070256050",
"",
"2076237237",
"",
"2078129874",
"2164374634",
"2017441956"
],
"abstract": [
"A class of density estimates using a superposition of kernels where the kernel parameter can depend on the nearest neighbor distances is studied by the use of simulated data. Their performance using several measures of error is superior to that of the usual Parzen estimators. A tentative solution is given to the problem of calibrating the kernel peakedness when faced with a finite sample set.",
"",
"SUMMARY A major difficulty in understanding the properties of variable bandwidth methods (Breiman, Meisel & Purcell, 1977; Abramson, 1982) is that extremely lengthy and complex algebra is needed to assess the influence of bias. Indeed, the complexity is so great that it has forced investigators to use computer algebraic manipulation to determine formulae for bias. In this note we completely eliminate these algebraic obstacles by presenting a simple easy-to-use formula which gives explicitly the bias of a variable bandwidth estimator, to arbitrarily high order, in very general problems.",
"",
"",
"The bias correction to the maximum likelihood extimates of the parameters for logistic discrimination is examined under mixture and separate sampling schemes. An existing adjustment developed under mixture sampling and based on higher derivatives of the log likelihood is modified slightly for use under separate sampling. The effect of this bias correction on the sampling properties of the misclassification rates and of the estimated posterior probabilities is discussed.",
"",
"",
"",
"SUMMARY It is shown how, in regular parametric problems, the first-order term is removed from the asymptotic bias of maximum likelihood estimates by a suitable modification of the score function. In exponential families with canonical parameterization the effect is to penalize the likelihood by the Jeffreys invariant prior. In binomial logistic models, Poisson log linear models and certain other generalized linear models, the Jeffreys prior penalty function can be imposed in standard regression software using a scheme of iterative adjustments to the data.",
"We analyze the finite-sample behavior of three second-order bias-corrected alternatives to the maximum likelihood estimator of the parameters that index the beta distribution. The three finite-sample corrections we consider are the conventional second-order bias corrected estimator ( ., 1997), the alternative approach introduced by Firth (1993) and the bootstrap bias correction. We present numerical results comparing the performance of these estimators for thirty-six different values of the parameter vector. Our results reveal that analytical bias corrections considerably outperform numerical bias corrections obtained from bootstrapping schemes.",
"This paper provides an expression for bias of the maximum likelihood logistic regression estimates for use with small sample sizes. This bias correction is based on an expansion of the maximum likelihood equation. Simulation results show these corrections to be highly effective in small samples."
]
} |
1603.04992 | 2949634581 | A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state of the art supervised methods for single view depth estimation. | In this work we have proposed a geometry-inspired unsupervised setup for visual learning, in particular addressing the problem of single view depth estimation. Our main objective was to address the downsides of training deep networks with large amounts of labeled data. Another body of work which attempts to address this issue is the set of methods like @cite_27 @cite_33 @cite_4 which rely mainly on generating synthetic or semi-synthetic training data with the aim to mimic the real world and use it to train a deep network in a fashion.
For example, in @cite_33 , a CNN is used to discriminate a set of surrogate classes, where the data for each class is generated automatically from unlabeled images. The network thus learned is shown to perform well on the task of image classification. Handa @cite_27 learn a network for semantic segmentation using synthetic data of indoor scenes and show that the network can generalize well to real-world scenes. Similarly, @cite_4 employs a CNN to learn local image descriptors, where the correspondences between patches are obtained using a multi-view stereo algorithm. | {
"cite_N": [
"@cite_27",
"@cite_4",
"@cite_33"
],
"mid": [
"2283234189",
"2219193941",
"2148349024"
],
"abstract": [
"Scene understanding is a prerequisite to many high level tasks for any automated intelligent machine operating in real world environments. Recent attempts with supervised learning have shown promise in this direction but also highlighted the need for enormous quantity of supervised data --- performance increases in proportion to the amount of data used. However, this quickly becomes prohibitive when considering the manual labour needed to collect such data. In this work, we focus our attention on depth based semantic per-pixel labelling as a scene understanding problem and show the potential of computer graphics to generate virtually unlimited labelled data from synthetic 3D scenes. By carefully synthesizing training data with appropriate noise models we show comparable performance to state-of-the-art RGBD systems on NYUv2 dataset despite using only depth data as input and set a benchmark on depth-based segmentation on SUN RGB-D dataset. Additionally, we offer a route to generating synthesized frame or video data, and understanding of different factors influencing performance gains.",
"Recent innovations in training deep convolutional neural network (ConvNet) models have motivated the design of new methods to automatically learn local image descriptors. The latest deep ConvNets proposed for this task consist of a siamese network that is trained by penalising misclassification of pairs of local image patches. Current results from machine learning show that replacing this siamese by a triplet network can improve the classification accuracy in several problems, but this has yet to be demonstrated for local image descriptor learning. Moreover, current siamese and triplet networks have been trained with stochastic gradient descent that computes the gradient from individual pairs or triplets of local image patches, which can make them prone to overfitting. In this paper, we first propose the use of triplet networks for the problem of local image descriptor learning. Furthermore, we also propose the use of a global loss that minimises the overall classification error in the training set, which can improve the generalisation capability of the model. Using the UBC benchmark dataset for comparing local image descriptors, we show that the triplet network produces a more accurate embedding than the siamese network in terms of the UBC dataset errors. Moreover, we also demonstrate that a combination of the triplet and global losses produces the best embedding in the field, using this triplet network. Finally, we also show that the use of the central-surround siamese network trained with the global loss produces the best result of the field on the UBC dataset. Pre-trained models are available online at this https URL",
"Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101)."
]
} |
1603.04992 | 2949634581 | A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation. | Recently, many methods have used CNNs to learn good visual features for matching patches sampled from stereo datasets like KITTI @cite_13 @cite_1 , and match these features while doing classical stereo to achieve state-of-the-art depth estimation. These methods are reliant on local matching and lose global information about the scene; furthermore, they use ground truth. But their success is already an indicator that a joint visual learning and depth estimation approach like ours could be extended at test time to use a pair of images. | {
"cite_N": [
"@cite_1",
"@cite_13"
],
"mid": [
"2214868166",
"2144041313"
],
"abstract": [
"This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"We present a method for extracting depth information from a rectified image pair. We train a convolutional neural network to predict how well two image patches match and use it to compute the stereo matching cost. The cost is refined by cross-based cost aggregation and semiglobal matching, followed by a left-right consistency check to eliminate errors in the occluded regions. Our stereo method achieves an error rate of 2.61 on the KITTI stereo dataset and is currently (August 2014) the top performing method on this dataset."
]
} |
1603.04992 | 2949634581 | A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation. | There have been a few recent works that approach the problem of novel view synthesis with CNNs @cite_32 @cite_23 . Deep stereo @cite_23 uses a large set of posed images to learn a CNN that can interpolate between the set of input views that are separated by a wide baseline. A concurrent work with ours, @cite_32 addresses the problem of generating 3D stereo pairs from 2D images. It employs a CNN to infer a soft disparity map from a single view image, which in turn is used to render the second view. 
Although these methods generate depth-like maps as an intermediate step in the pipeline, their goal is to generate new views; hence they do not evaluate the computed depth maps. | {
"cite_N": [
"@cite_32",
"@cite_23"
],
"mid": [
"2949407715",
"2952809312"
],
"abstract": [
"As 3D movie viewing becomes mainstream and Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks for automatically converting 2D videos and images to stereoscopic 3D format. In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations.",
"Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision, but their use in graphics problems has been limited. In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches which consist of multiple complex stages of processing, each of which require careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. To verify our method we show that it can convincingly reproduce known test views from nearby imagery. Additionally we show images rendered from novel viewpoints. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery."
]
} |
1603.04992 | 2949634581 | A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manually labelled data. In this work we propose an unsupervised framework to learn a deep convolutional neural network for single view depth prediction, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photometric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives comparable performance to that of the state-of-the-art supervised methods for single view depth estimation. | Using camera motion as the supervisory information for visual learning is also explored in works like @cite_19 @cite_7 , which directly regress over the 6DOF camera poses to learn a deep network that performs well on various visual tasks. In contrast to that work, we train our CNN for the more generic task of synthesizing images and achieve state-of-the-art single view depth estimation. It will be of immense interest to evaluate the quality of the features learned with our framework on other semantic scene understanding tasks. | {
"cite_N": [
"@cite_19",
"@cite_7"
],
"mid": [
"2951590555",
"2305401973"
],
"abstract": [
"The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.",
"This work presents an unsupervised learning based approach to the ubiquitous computer vision problem of image matching. We start from the insight that the problem of frame interpolation implicitly solves for inter-frame correspondences. This permits the application of analysis-by-synthesis: we first train and apply a Convolutional Neural Network for frame interpolation, then obtain correspondences by inverting the learned CNN. The key benefit behind this strategy is that the CNN for frame interpolation can be trained in an unsupervised manner by exploiting the temporal coherence that is naturally contained in real-world video sequences. The present model therefore learns image matching by simply “watching videos”. Besides a promise to be more generally applicable, the presented approach achieves surprising performance comparable to traditional empirically designed methods."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | Actions are performed in the context of objects. This coupling provides a complementary cue to recognize actions. @cite_39 leveraged object information to classify fine-grained activities. Yao and Fei-Fei @cite_24 @cite_3 have presented a spatial model between human pose and objects for activity recognition. Some approaches also used low-level bag-of-features models to learn the spatial relationship between objects and activities from a single third-person image @cite_16 . 
Conversely, the activity can provide a functional cue to recognize objects @cite_0 @cite_28 @cite_53 . Such a model becomes even more powerful when incorporating cues of how the object is physically manipulated @cite_17 @cite_27 @cite_36 . In addition, object affordance can be learned by simulating human motion in 3D space @cite_4 @cite_43 . Furthermore, @cite_37 proposed to recognize activities from a third-person robot's view using people detections. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_28",
"@cite_36",
"@cite_53",
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_43",
"@cite_16",
"@cite_17"
],
"mid": [
"",
"2030358157",
"2153690652",
"2081293863",
"2149173366",
"2046589395",
"2155983176",
"2050964073",
"1711926650",
"2032293070",
"2215238295",
"2158234032",
"2106833577"
],
"abstract": [
"",
"We present an approach which exploits the coupling between human actions and scene geometry to use human pose as a cue for single-view 3D scene understanding. Our method builds upon recent advances in still-image pose estimation to extract functional and geometric constraints on the scene. These constraints are then used to improve single-view 3D scene understanding approaches. The proposed method is validated on monocular time-lapse sequences from YouTube and still images of indoor scenes gathered from the Internet. We demonstrate that observing people performing different actions can significantly improve estimates of 3D scene geometry.",
"Unsupervised categorization of objects is a fundamental problem in computer vision. While appearance-based methods have become popular recently, other important cues like functionality are largely neglected. Motivated by psychological studies giving evidence that human demonstration has a facilitative effect on categorization in infancy, we propose an approach for object categorization from depth video streams. To this end, we have developed a method for capturing human motion in real-time. The captured data is then used to temporally segment the depth streams into actions. The set of segmented actions are then categorized in an un-supervised manner, through a novel descriptor for motion capture data that is robust to subject variations. Furthermore, we automatically localize the object that is manipulated within a video segment, and categorize it using the corresponding action. For evaluation, we have recorded a dataset that comprises depth data with registered video sequences for 6 subjects, 13 action classes, and 174 object manipulations.",
"In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.",
"This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.",
"Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses.",
"We propose an approach to activity recognition based on detecting and analyzing the sequence of objects that are being manipulated by the user. In domains such as cooking, where many activities involve similar actions, object-use information can be a valuable cue. In order for this approach to scale to many activities and objects, however, it is necessary to minimize the amount of human-labeled data that is required for modeling. We describe a method for automatically acquiring object models from video without any explicit human supervision. Our approach leverages sparse and noisy readings from RFID tagged objects, along with common-sense knowledge about which objects are likely to be used during a given activity, to bootstrap the learning process. We present a dynamic Bayesian network model which combines RFID and video data to jointly infer the most likely activity and object labels. We demonstrate that our approach can achieve activity recognition rates of more than 80 on a real-world dataset consisting of 16 household activities involving 33 objects with significant background clutter. We show that the combination of visual object recognition with RFID data is significantly more effective than the RFID sensor alone. Our work demonstrates that it is possible to automatically learn object models from video of household activities and employ these models for activity recognition, without requiring any explicit human labeling.",
"Psychologists have proposed that many human-object interaction activities form unique classes of scenes. Recognizing these scenes is important for many social functions. To enable a computer to do this is however a challenging task. Take people-playing-musical-instrument (PPMI) as an example; to distinguish a person playing violin from a person just holding a violin requires subtle distinction of characteristic image features and feature arrangements that differentiate these two scenes. Most of the existing image representation methods are either too coarse (e.g. BoW) or too sparse (e.g. constellation models) for performing this task. In this paper, we propose a new image feature representation called “grouplet”. The grouplet captures the structured information of an image by encoding a number of discriminative visual features and their spatial configurations. Using a dataset of 7 different PPMI activities, we show that grouplets are more effective in classifying and detecting human-object interactions than other state-of-the-art methods. In particular, our method can make a robust distinction between humans playing the instruments and humans co-occurring with the instruments without playing.",
"Existing methods for video scene analysis are primarily concerned with learning motion patterns or models for anomaly detection. We present a novel form of video scene analysis where scene element categories such as roads, parking areas, sidewalks and entrances, can be segmented and categorized based on the behaviors of moving objects in and around them. We view the problem from the perspective of categorical object recognition, and present an approach for unsupervised learning of functional scene element categories. Our approach identifies functional regions with similar behaviors in the same scene and or across scenes, by clustering histograms based on a trajectory-level, behavioral codebook. Experiments are conducted on two outdoor webcam video scenes with low frame rates and poor quality. Unsupervised classification results are presented for each scene independently, and also jointly where models learned on one scene are applied to the other.",
"We present a human-centric paradigm for scene understanding. Our approach goes beyond estimating 3D scene geometry and predicts the \"workspace\" of a human which is represented by a data-driven vocabulary of human interactions. Our method builds upon the recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions between the two. This joint space can then be used to predict potential human poses and joint locations from a single image. In a way, this work revisits the principle of Gibsonian affor-dances, reinterpreting it for the modern, data-driven era.",
"The visual perception of object affordances has emerged as a useful ingredient for building powerful computer vision and robotic applications [31]. In this paper we introduce a novel approach to reason about liquid containability - the affordance of containing liquid. Our approach analyzes container objects based on two simple physical processes: the Fill and Transfer of liquid. First, it reasons about whether a given 3D object is a liquid container and its best filling direction. Second, it proposes directions to transfer its contained liquid to the outside while avoiding spillage. We compare our simplified model with a common fluid dynamics simulation and demonstrate that our algorithm makes human-like choices about the best directions to fill containers and transfer liquid from them. We apply our approach to reason about the containability of several real-world objects acquired using a consumer-grade depth camera.",
"We investigate a discriminatively trained model of person-object interactions for recognizing common human actions in still images. We build on the locally order-less spatial pyramid bag-of-features model, which was shown to perform extremely well on a range of object, scene and human action recognition tasks. We introduce three principal contributions. First, we replace the standard quantized local HOG SIFT features with stronger discriminatively trained body part and object detectors. Second, we introduce new person-object interaction features based on spatial co-occurrences of individual body parts and objects. Third, we address the combinatorial problem of a large number of possible interaction pairs and propose a discriminative selection procedure using a linear support vector machine (SVM) with a sparsity inducing regularizer. Learning of action-specific body part and object interactions bypasses the difficult problem of estimating the complete human body pose configuration. Benefits of the proposed model are shown on human action recognition in consumer photographs, outperforming the strong bag-of-features baseline.",
"Analysis of videos of human-object interactions involves understanding human movements, locating and recognizing objects and observing the effects of human movements on those objects. While each of these can be conducted independently, recognition improves when interactions between these elements are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which unifies the inference processes involved in object classification and localization, action understanding and perception of object reaction. Traditional approaches for object classification and action understanding have relied on shape features and movement analysis respectively. By placing object classification and localization in a video interpretation framework, we can localize and classify objects which are either hard to localize due to clutter or hard to recognize due to lack of discriminative features. Similarly, by applying context on human movements from the objects on which these movements impinge and the effects of these movements, we can segment and recognize actions which are either too subtle to perceive or too hard to recognize using motion features alone."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | The work in @cite_20 @cite_31 attempts to predict gaze from first-person images and use it for activity recognition. However, a person's gaze direction does not always correspond to action-objects but instead captures noisy eye movement patterns, which may not be useful for activity recognition. 
In the context of our problem, the camera wearer, who performed the task and can disambiguate conscious visual attention from subconscious gaze activity, provides per-pixel binary labels of the first-person images, which we then use to build our action-object model. | {
"cite_N": [
"@cite_31",
"@cite_20"
],
"mid": [
"2136668269",
"1947050545"
],
"abstract": [
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | The methods in @cite_44 @cite_20 perform object detection and activity recognition disjointly: first an object detector is applied to find all objects in the scene, and then those detections are used for activity recognition without necessarily knowing which objects the person may be interacting with. Furthermore, these methods employ a set of predefined object classes. 
However, many object categories can correspond to the same action, e.g., a TV and a mirror both afford the action of seeing, and thus an object-class-specific model may not be able to represent the action-objects accurately. | {
"cite_N": [
"@cite_44",
"@cite_20"
],
"mid": [
"1573991794",
"1947050545"
],
"abstract": [
"We propose a system for detecting bids for eye contact directed from a child to an adult who is wearing a point-of-view camera. The camera captures an egocentric view of the child-adult interaction from the adult's perspective. We detect and analyze the child's face in the egocentric video in order to automatically identify moments in which the child is trying to make eye contact with the adult. We present a learning-based method that couples a pose-dependent appearance model with a temporal Conditional Random Field (CRF). We present encouraging findings from an experimental evaluation using a newly collected dataset of 12 children. Our method outperforms state-of-the-art approaches and enables measuring gaze behavior in naturalistic social interactions.",
"We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera, captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power, by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | Some prior work focused specifically on handled object detection @cite_48 @cite_6 . However, the action-object detection task also requires detecting conscious visual interactions that do not necessarily involve hand manipulation (e.g. watching a TV). Furthermore, from a developmental point of view, conscious visual attention is itself one way for a person to interact with objects.
For instance, for babies who lack motor skills, conscious visual attention is the only cue that indicates their action-objects, and thus detecting only handled objects is not enough. | {
"cite_N": [
"@cite_48",
"@cite_6"
],
"mid": [
"2033639255",
"2412074020"
],
"abstract": [
"Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .",
"Our goal is to automate the understanding of natural hand-object manipulation by developing computer visionbased techniques. Our hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions. Specifically, we focus on recognizing hand grasp types, object attributes and actions from a single image within an unified model. First, we explore the contextual relationship between grasp types and object attributes, and show how that context can be used to boost the recognition of both grasp types and object attributes. Second, we propose to model actions with grasp types and object attributes based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions. Our proposed action model outperforms traditional appearance-based models which are not designed to take into account semantic constraints such as grasp types or object attributes. Experiment results on public egocentric activities datasets strongly support our hypothesis."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera, captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power, by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | We acknowledge that our defined concept of action-objects overlaps with several concepts from prior work, such as object-action complexes (OAC) @cite_1 , handled-objects @cite_48 @cite_6 , objects-in-action @cite_17 , or object affordances @cite_53 . However, we point out that these prior methods typically focus exclusively on physically manipulated objects that are specific to certain tasks (e.g. cooking).
Instead, the concept of action-objects requires detecting not only tactile but also conscious visual interactions with objects (e.g. watching a TV), without making any a priori assumptions about the task that the person will be performing, as is commonly done in prior work @cite_48 @cite_6 . | {
"cite_N": [
"@cite_48",
"@cite_53",
"@cite_1",
"@cite_6",
"@cite_17"
],
"mid": [
"2033639255",
"2149173366",
"2098910624",
"2412074020",
"2106833577"
],
"abstract": [
"Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .",
"This paper investigates object categorization according to function, i.e., learning the affordances of objects from human demonstration. Object affordances (functionality) are inferred from observations of humans using the objects in different types of actions. The intended application is learning from demonstration, in which a robot learns to employ objects in household tasks, from observing a human performing the same tasks with the objects. We present a method for categorizing manipulated objects and human manipulation actions in context of each other. The method is able to simultaneously segment and classify human hand actions, and detect and classify the objects involved in the action. This can serve as an initial step in a learning from demonstration method. Experiments show that the contextual information improves the classification of both objects and actions.",
"Lifelogging devices are spreading faster everyday. This growth can represent great benefits to develop methods for extraction of meaningful information about the user wearing the device and his her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. Given an egocentric video images sequence acquired by the camera, our algorithm uses both the appearance extracted by means of a convolutional neural network and an object refill methodology that allows to discover objects even in case of small amount of object appearance in the collection of images. An SVM filtering strategy is applied to deal with the great part of the False Positive object candidates found by most of the state of the art object detectors. We validate our method on a new egocentric dataset of 4912 daily images acquired by 4 persons as well as on both PASCAL 2012 and MSRC datasets. We obtain for all of them results that largely outperform the state of the art approach. We make public both the EDUB dataset and the algorithm code.",
"Our goal is to automate the understanding of natural hand-object manipulation by developing computer visionbased techniques. Our hypothesis is that it is necessary to model the grasp types of hands and the attributes of manipulated objects in order to accurately recognize manipulation actions. Specifically, we focus on recognizing hand grasp types, object attributes and actions from a single image within an unified model. First, we explore the contextual relationship between grasp types and object attributes, and show how that context can be used to boost the recognition of both grasp types and object attributes. Second, we propose to model actions with grasp types and object attributes based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions. Our proposed action model outperforms traditional appearance-based models which are not designed to take into account semantic constraints such as grasp types or object attributes. Experiment results on public egocentric activities datasets strongly support our hypothesis.",
"Analysis of videos of human-object interactions involves understanding human movements, locating and recognizing objects and observing the effects of human movements on those objects. While each of these can be conducted independently, recognition improves when interactions between these elements are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which unifies the inference processes involved in object classification and localization, action understanding and perception of object reaction. Traditional approaches for object classification and action understanding have relied on shape features and movement analysis respectively. By placing object classification and localization in a video interpretation framework, we can localize and classify objects which are either hard to localize due to clutter or hard to recognize due to lack of discriminative features. Similarly, by applying context on human movements from the objects on which these movements impinge and the effects of these movements, we can segment and recognize actions which are either too subtle to perceive or too hard to recognize using motion features alone."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera, captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power, by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | A task such as action-object detection or visual saliency prediction requires producing a dense probability output for every pixel. To achieve this goal, most prior first-person methods employed a set of hand-crafted features combined with a probabilistic or discriminative classifier. For instance, the work in @cite_49 uses a manually engineered set of egocentric features with a linear regression classifier to assign probabilities to each region in a segmented image.
The method in @cite_21 exploits a combination of geometric and egocentric cues and trains a random forest classifier to predict saliency in first-person images. The work in @cite_48 uses optical flow cues and Graph Cuts @cite_13 to compute handled-object segmentations, whereas @cite_38 employs a transductive SVM to compute foreground segmentation in an unsupervised manner. Finally, some prior work @cite_31 integrates a set of hand-crafted features in a graphical model to predict per-pixel probabilities of the camera wearer's gaze. | {
"cite_N": [
"@cite_38",
"@cite_48",
"@cite_21",
"@cite_49",
"@cite_31",
"@cite_13"
],
"mid": [
"2031688197",
"2033639255",
"2217598536",
"2071711566",
"2136668269",
"2143516773"
],
"abstract": [
"This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.",
"Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .",
"On a minute-to-minute basis people undergo numerous fluid interactions with objects that barely register on a conscious level. Recent neuroscientific research demonstrates that humans have a fixed size prior for salient objects. This suggests that a salient object in 3D undergoes a consistent transformation such that people's visual system perceives it with an approximately fixed size. This finding indicates that there exists a consistent egocentric object prior that can be characterized by shape, size, depth, and location in the first person view. In this paper, we develop an EgoObject Representation, which encodes these characteristics by incorporating shape, location, size and depth features from an egocentric RGBD image. We empirically show that this representation can accurately characterize the egocentric object prior by testing it on an egocentric RGBD dataset for three tasks: the 3D saliency detection, future saliency prediction, and interaction classification. This representation is evaluated on our new Egocentric RGBD Saliency dataset that includes various activities such as cooking, dining, and shopping. By using our EgoObject representation, we outperform previously proposed models for saliency detection (relative 30 improvement for 3D saliency detection task) on our dataset. Additionally, we demonstrate that this representation allows us to predict future salient objects based on the gaze cue and classify people's interactions with objects.",
"We present a video summarization approach for egocentric or \"wearable\" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.",
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera, captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power, by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | We note that the recent introduction of fully convolutional networks (FCNs) @cite_11 has led to remarkable results in a variety of structured prediction tasks such as edge detection @cite_25 @cite_47 @cite_34 and semantic image segmentation @cite_46 @cite_8 @cite_33 @cite_45 @cite_5 @cite_51 @cite_35 @cite_50 .
Following this line of work, a recent method @cite_10 used FCNs for joint object segmentation and activity recognition in first-person images, using a two-stream appearance and optical flow network with a multi-loss objective function. | {
"cite_N": [
"@cite_35",
"@cite_47",
"@cite_33",
"@cite_8",
"@cite_10",
"@cite_45",
"@cite_50",
"@cite_5",
"@cite_46",
"@cite_34",
"@cite_51",
"@cite_25",
"@cite_11"
],
"mid": [
"2962891704",
"",
"2113325037",
"",
"",
"",
"2962872526",
"2952637581",
"1923697677",
"2963183706",
"2949847866",
"1539790486",
"1903029394"
],
"abstract": [
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms averageand max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.",
"",
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.",
"",
"",
"",
"Deep convolutional neural networks (CNNs) are the backbone of state-of-art semantic image segmentation systems. Recent work has shown that complementing CNNs with fully-connected conditional random fields (CRFs) can significantly enhance their object localization accuracy, yet dense CRF inference is computationally expensive. We propose replacing the fully-connected CRF with domain transform (DT), a modern edge-preserving filtering method in which the amount of smoothing is controlled by a reference edge map. Domain transform filtering is several times faster than dense CRF inference and we show that it yields comparable semantic segmentation results, accurately capturing object boundaries. Importantly, our formulation allows learning the reference edge map from intermediate CNN features instead of using the image gradient magnitude as in standard DT filtering. This produces task-specific edges in an end-to-end trainable system optimizing the target semantic segmentation quality.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.",
"Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"",
"We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by classification network, and binary segmentation is subsequently performed for each identified label in segmentation network. The decoupled architecture enables us to learn classification and segmentation networks separately based on the training data with image-level and pixel-wise class labels, respectively. It facilitates to reduce search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches even with much less training images with strong annotations in PASCAL VOC dataset.",
"Most of the current boundary detection systems rely exclusively on low-level features, such as color and texture. However, perception studies suggest that humans employ object-level reasoning when judging if a particular pixel is a boundary. Inspired by this observation, in this work we show how to predict boundaries by exploiting object-level features from a pretrained object-classification network. Our method can be viewed as a \"High-for-Low\" approach where high-level object features inform the low-level boundary detection process. Our model achieves state-of-the-art performance on an established boundary detection benchmark and it is efficient to run. Additionally, we show that due to the semantic nature of our boundaries we can use them to aid a number of high-level vision tasks. We demonstrate that using our boundaries we improve the performance of state-of-the-art methods on the problems of semantic boundary labeling, semantic segmentation and object proposal generation. We can view this process as a \"Low-for-High'\" scheme, where low-level boundaries aid high-level vision tasks. Thus, our contributions include a boundary detection system that is accurate, efficient, generalizes well to multiple datasets, and is also shown to improve existing state-of-the-art high-level vision methods on three distinct tasks.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | We point out that these prior methods @cite_10 @cite_31 @cite_38 @cite_48 @cite_13 focus mainly on RGB or motion cues, which is very limiting for the action-object task. When interacting with an object, people typically position themselves at a certain distance and orientation relative to that object. Thus, 3D information plays an important role in the action-object detection task. 
Unlike prior work, we integrate such 3D cues into our model for more effective action-object detection. | {
"cite_N": [
"@cite_38",
"@cite_10",
"@cite_48",
"@cite_31",
"@cite_13"
],
"mid": [
"2031688197",
"",
"2033639255",
"2136668269",
"2143516773"
],
"abstract": [
"This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.",
"",
"Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .",
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy."
]
} |
1603.04908 | 2296893412 | Unlike traditional third-person cameras mounted on robots, a first-person camera captures a person's visual sensorimotor object interactions from up close. In this paper, we study the tight interplay between our momentary visual attention and motor action with objects from a first-person camera. We propose a concept of action-objects---the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. Action-objects may be task-dependent but since many tasks share common person-object spatial configurations, action-objects exhibit a characteristic 3D spatial distance and orientation with respect to the person. We design a predictive model that detects action-objects using EgoNet, a joint two-stream network that holistically integrates visual appearance (RGB) and 3D spatial layout (depth and height) cues to predict per-pixel likelihood of action-objects. Our network also incorporates a first-person coordinate embedding, which is designed to learn a spatial distribution of the action-objects in the first-person data. We demonstrate EgoNet's predictive power by showing that it consistently outperforms previous baseline approaches. Furthermore, EgoNet also exhibits a strong generalization ability, i.e., it predicts semantically meaningful objects in novel first-person datasets. Our method's ability to effectively detect action-objects could be used to improve robots' understanding of human-object interactions. | Additionally, the way a person positions himself during an interaction with an object affects where the object will be mapped in a first-person image. Prior methods @cite_49 @cite_21 assume that this will most likely be a center location in the image, which is a very general assumption. Instead, in this work, we introduce the first-person coordinate embedding to learn an action-object-specific spatial distribution. | {
"cite_N": [
"@cite_21",
"@cite_49"
],
"mid": [
"2217598536",
"2071711566"
],
"abstract": [
"On a minute-to-minute basis people undergo numerous fluid interactions with objects that barely register on a conscious level. Recent neuroscientific research demonstrates that humans have a fixed size prior for salient objects. This suggests that a salient object in 3D undergoes a consistent transformation such that people's visual system perceives it with an approximately fixed size. This finding indicates that there exists a consistent egocentric object prior that can be characterized by shape, size, depth, and location in the first person view. In this paper, we develop an EgoObject Representation, which encodes these characteristics by incorporating shape, location, size and depth features from an egocentric RGBD image. We empirically show that this representation can accurately characterize the egocentric object prior by testing it on an egocentric RGBD dataset for three tasks: the 3D saliency detection, future saliency prediction, and interaction classification. This representation is evaluated on our new Egocentric RGBD Saliency dataset that includes various activities such as cooking, dining, and shopping. By using our EgoObject representation, we outperform previously proposed models for saliency detection (relative 30 improvement for 3D saliency detection task) on our dataset. Additionally, we demonstrate that this representation allows us to predict future salient objects based on the gaze cue and classify people's interactions with objects.",
"We present a video summarization approach for egocentric or \"wearable\" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video--such as the nearness to hands, gaze, and frequency of occurrence--and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization."
]
} |
1603.04871 | 2298838696 | State-of-the-art results of semantic segmentation are established by Fully Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and pooling layers to gradually enlarge the receptive fields of neurons, resulting in an indirect way of modeling the distant contextual dependence. In this work, we advocate the use of spatially recurrent layers (i.e. ReNet layers) which directly capture global contexts and lead to improved feature representations. We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet (N-ReNet), which achieves competitive performance on Stanford Background dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including full-image receptive fields, end-to-end training, and efficient network execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%, 2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20 object classes. | Nonparametric methods have achieved remarkable performance in semantic segmentation @cite_29 @cite_27 @cite_38 @cite_17 @cite_31 . The core idea is retrieving similar patches from a database of fully annotated images, and transferring the labels from the annotated images to the query image. Specifically, the query image is matched against the annotated database using both holistic image representations as well as superpixels. Probabilistic graphical models (e.g. MRF, CRF) are then introduced to model the semantic context and obtain a spatially coherent semantic label map @cite_28 @cite_24 @cite_11 @cite_32 . Nonparametric methods divide the segmentation task into individual steps. Each step requires a careful design, and the entire process is not amenable to joint optimization. | {
"cite_N": [
"@cite_38",
"@cite_11",
"@cite_28",
"@cite_29",
"@cite_32",
"@cite_24",
"@cite_27",
"@cite_31",
"@cite_17"
],
"mid": [
"2158305599",
"2158097779",
"2535516436",
"",
"2110306576",
"2116877738",
"1542723449",
"2051179318",
"2154083146"
],
"abstract": [
"While there has been a lot of recent work on object recognition and image understanding, the focus has been on carefully establishing mathematical models for images, scenes, and objects. In this paper, we propose a novel, nonparametric approach for object recognition and scene parsing using a new technology we name label transfer. For an input image, our system first retrieves its nearest neighbors from a large database containing fully annotated images. Then, the system establishes dense correspondences between the input image and each of the nearest neighbors using the dense SIFT flow algorithm [28], which aligns two images based on local image structures. Finally, based on the dense scene correspondences obtained from SIFT flow, our system warps the existing annotations and integrates multiple cues in a Markov random field framework to segment and recognize the query image. Promising experimental results have been achieved by our nonparametric scene parsing system on challenging databases. Compared to existing object recognition approaches that require training classifiers or appearance models for each object category, our system is easy to implement, has few parameters, and embeds contextual information naturally in the retrieval alignment procedure.",
"When modeling structured outputs such as image segmentations, prediction can be improved by accurately modeling structure present in the labels. A key challenge is developing tractable models that are able to capture complex high level structure like shape. In this work, we study the learning of a general class of pattern-like high order potential, which we call Compositional High Order Pattern Potentials (CHOPPs). We show that CHOPPs include the linear deviation pattern potentials of [26] and also Restricted Boltzmann Machines (RBMs), we also establish the near equivalence of these two models. Experimentally, we show that performance is affected significantly by the degree of variability present in the datasets, and we define a quantitative variability measure to aid in studying this. We then improve CHOPPs performance in high variability datasets with two primary contributions: (a) developing a loss-sensitive joint learning procedure, so that internal pattern parameters can be learned in conjunction with other model potentials to minimize expected loss, and (b) learning an image-dependent mapping that encourages or inhibits patterns depending on image features. We also explore varying how multiple patterns are composed, and learning convolutional patterns. Quantitative results on challenging highly variable datasets show that the joint learning and image-dependent high order potentials can improve performance.",
"Most methods for object class segmentation are formulated as a labelling problem over a single choice of quantisation of an image space - pixels, segments or group of segments. It is well known that each quantisation has its fair share of pros and cons; and the existence of a common optimal quantisation level suitable for all object categories is highly unlikely. Motivated by this observation, we propose a hierarchical random field model, that allows integration of features computed at different levels of the quantisation hierarchy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalises much of the previous work based on pixels or segments. We evaluate its efficiency on some of the most challenging data-sets for object class segmentation, and show it obtains state-of-the-art results.",
"",
"This paper addresses the problem of exactly inferring the maximum a posteriori solutions of discrete multi-label MRFs or CRFs with higher order cliques. We present a framework to transform special classes of multi-label higher order functions to submodular second order Boolean functions (referred to as Fs 2), which can be minimized exactly using graph cuts and we characterize those classes. The basic idea is to use two or more Boolean variables to encode the states of a single multi-label variable. There are many ways in which this can be done and much interesting research lies in finding ways which are optimal or minimal in some sense. We study the space of possible encodings and find the ones that can transform the most general class of functions to Fs 2. Our main contributions are two-fold. First, we extend the subclass of submodular energy functions that can be minimized exactly using graph cuts. Second, we show how higher order potentials can be used to improve single view 3D reconstruction results. We believe that our work on exact minimization of higher order energy functions will lead to similar improvements in solutions of other labelling problems.",
"This paper proposes a novel framework for labelling problems which is able to combine multiple segmentations in a principled manner. Our method is based on higher order conditional random fields and uses potentials defined on sets of pixels (image segments) generated using unsupervised segmentation algorithms. These potentials enforce label consistency in image regions and can be seen as a strict generalization of the commonly used pairwise contrast sensitive smoothness potentials. The higher order potential functions used in our framework take the form of the robust Pn model. This enables the use of powerful graph cut based move making algorithms for performing inference in the framework [14 ]. We test our method on the problem of multi-class object segmentation by augmenting the conventional CRF used for object segmentation with higher order potentials defined on image regions. Experiments on challenging data sets show that integration of higher order potentials quantitatively and qualitatively improves results leading to much better definition of object boundaries. We believe that this method can be used to yield similar improvements for many other labelling problems.",
"This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art non-parametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem.",
"This paper proposes a non-parametric approach to scene parsing inspired by the work of Tighe and Lazebnik [22]. In their approach, a simple kNN scheme with multiple descriptor types is used to classify super-pixels. We add two novel mechanisms: (i) a principled and efficient method for learning per-descriptor weights that minimizes classification error, and (ii) a context-driven adaptation of the training set used for each query, which conditions on common classes (which are relatively easy to classify) to improve performance on rare ones. The first technique helps to remove extraneous descriptors that result from the imperfect distance metrics representations of each super-pixel. The second contribution re-balances the class frequencies, away from the highly-skewed distribution found in real-world scenes. Both methods give a significant performance boost over [22] and the overall system achieves state-of-the-art performance on the SIFT-Flow dataset.",
"This paper presents a nonparametric approach to semantic parsing using small patches and simple gradient, color and location features. We learn the relevance of individual feature channels at test time using a locally adaptive distance metric. To further improve the accuracy of the nonparametric approach, we examine the importance of the retrieval set used to compute the nearest neighbours using a novel semantic descriptor to retrieve better candidates. The approach is validated by experiments on several datasets used for semantic parsing demonstrating the superiority of the method compared to the state of art approaches."
]
} |
1603.04871 | 2298838696 | State-of-the-art results of semantic segmentation are established by Fully Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and pooling layers to gradually enlarge the receptive fields of neurons, resulting in an indirect way of modeling the distant contextual dependence. In this work, we advocate the use of spatially recurrent layers (i.e. ReNet layers) which directly capture global contexts and lead to improved feature representations. We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet (N-ReNet), which achieves competitive performance on Stanford Background dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including full-image receptive fields, end-to-end training, and efficient network execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%, 2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20 object classes. | Parametric methods have been dominated by FCN-based models, which can be classified into two lines. In the first line, the FCN takes as input bounding boxes that encompass image regions with high objectness @cite_19 , and outputs a segmentation mask for each bounding box. The final segmentation map is obtained by merging the individual masks of the bounding boxes @cite_4 @cite_5 @cite_47 @cite_37 @cite_12 @cite_36 . By contrast, in the second line, the whole image is directly fed into the segmentation net, and a complete segmentation mask is generated at once @cite_44 . Due to the pooling layers of CNNs, the output mask is usually not sufficiently sharp, and region boundaries are not clearly localized. An additional graphical model layer (e.g. MRFs and CRFs) is thus introduced to capture pixel interactions and respect region boundaries. 
The graphical model can either be applied as a separate post-processing step @cite_7 or be plugged into a deep neural net with joint optimization @cite_26 @cite_20 , both at the cost of extra computation. Besides FCN-based methods, @cite_33 propose to label the superpixels using zoom-out features, which include pixel-level, region-level and global features extracted from a deep neural network. | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_4",
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_44",
"@cite_19",
"@cite_5",
"@cite_47",
"@cite_20",
"@cite_12"
],
"mid": [
"",
"",
"",
"1938976761",
"2964288706",
"2183182206",
"2951277909",
"7746136",
"2949086864",
"",
"2102492119",
"2952637581"
],
"abstract": [
"",
"",
"",
"We introduce a purely feed-forward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6 average accuracy on the PASCAL VOC 2012 test set.",
"Abstract: Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.",
"Object detection performance, as measured on the canonical PASCAL VOC Challenge datasets, plateaued in the final years of the competition. The best-performing methods were complex ensemble systems that typically combined multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 50 percent relative to the previous best result on VOC 2012—achieving a mAP of 62.4 percent. Our approach combines two ideas: (1) one can apply high-capacity convolutional networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data are scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, boosts performance significantly. Since we combine region proposals with CNNs, we call the resulting model an R-CNN or Region-based Convolutional Network . Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"Recent leading approaches to semantic segmentation rely on deep convolutional networks trained with human-annotated, pixel-level segmentation masks. Such pixel-accurate supervision demands expensive labeling effort and limits the performance of deep networks that usually benefit from more training data. In this paper, we propose a method that achieves competitive accuracy but only requires easily obtained bounding box annotations. The basic idea is to iterate between automatically generating region proposals and training convolutional networks. These two steps gradually recover segmentation masks for improving the networks, and vise versa. Our method, called BoxSup, produces competitive results supervised by boxes only, on par with strong baselines fully supervised by masks under the same setting. By leveraging a large amount of bounding boxes, BoxSup further unleashes the power of deep convolutional networks and yields state-of-the-art results on PASCAL VOC 2012 and PASCAL-CONTEXT.",
"",
"Convolutional neural networks with many layers have recently been shown to achieve excellent results on many high-level tasks such as image classification, object detection and more recently also semantic segmentation. Particularly for semantic segmentation, a two-stage procedure is often employed. Hereby, convolutional networks are trained to provide good local pixel-wise features for the second step being traditionally a more global graphical model. In this work we unify this two-stage process into a single joint training algorithm. We demonstrate our method on the semantic image segmentation task and show encouraging results on the challenging PASCAL VOC 2012 dataset.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network."
]
} |
1603.04871 | 2298838696 | State-of-the-art results of semantic segmentation are established by Fully Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and pooling layers to gradually enlarge the receptive fields of neurons, resulting in an indirect way of modeling the distant contextual dependence. In this work, we advocate the use of spatially recurrent layers (i.e. ReNet layers) which directly capture global contexts and lead to improved feature representations. We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet (N-ReNet), which achieves competitive performance on Stanford Background dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including full-image receptive fields, end-to-end training, and efficient network execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%, 2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20 object classes. | Exploiting recurrent neural networks for visual recognition is an active field of research. propose a cascaded structure consisting of alternating 2D Long Short Term Memory (LSTM) and convolutional layers, and report comparable results to state-of-the-art on both and datasets @cite_27 . develop the IRNN layer for object detection to generate features that are not limited to the bounding box of an object proposal @cite_9 . The recently proposed ReNet architecture is a scalable alternative to CNNs for image recognition @cite_30 . We build our models upon ReNet layers, to capture the global contexts as well as enjoy their property of efficient parallelization. In contrast to the IRNN layer where a naive ReLU RNN is implemented, we employ a sophisticated LSTM with various gating units to adaptively forget, memorize and expose the memory contents. Empirically, we observe better performance by the ReNet LSTM layer for our task. | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_27"
],
"mid": [
"1664573881",
"2951829713",
"1542723449"
],
"abstract": [
"In this paper, we propose a deep neural network architecture for object recognition based on recurrent neural networks. The proposed network, called ReNet, replaces the ubiquitous convolution+pooling layer of the deep convolutional neural network with four recurrent neural networks that sweep horizontally and vertically in both directions across the image. We evaluate the proposed ReNet on three widely-used benchmark datasets; MNIST, CIFAR-10 and SVHN. The result suggests that ReNet is a viable alternative to the deep convolutional neural network, and that further investigation is needed.",
"It is well known that contextual and multi-scale representations are important for accurate visual recognition. In this paper we present the Inside-Outside Net (ION), an object detector that exploits information both inside and outside the region of interest. Contextual information outside the region of interest is integrated using spatial recurrent neural networks. Inside, we use skip pooling to extract information at multiple scales and levels of abstraction. Through extensive experiments we evaluate the design space and provide readers with an overview of what tricks of the trade are important. ION improves state-of-the-art on PASCAL VOC 2012 object detection from 73.9 to 76.4 mAP. On the new and more challenging MS COCO dataset, we improve state-of-art-the from 19.7 to 33.1 mAP. In the 2015 MS COCO Detection Challenge, our ION model won the Best Student Entry and finished 3rd place overall. As intuition suggests, our detection results provide strong evidence that context and multi-scale representations improve small object detection.",
"This paper presents a simple and effective nonparametric approach to the problem of image parsing, or labeling image regions (in our case, superpixels produced by bottom-up segmentation) with their categories. This approach requires no training, and it can easily scale to datasets with tens of thousands of images and hundreds of labels. It works by scene-level matching with global image descriptors, followed by superpixel-level matching with local features and efficient Markov random field (MRF) optimization for incorporating neighborhood context. Our MRF setup can also compute a simultaneous labeling of image regions into semantic classes (e.g., tree, building, car) and geometric classes (sky, vertical, ground). Our system outperforms the state-of-the-art non-parametric method based on SIFT Flow on a dataset of 2,688 images and 33 labels. In addition, we report per-pixel rates on a larger dataset of 15,150 images and 170 labels. To our knowledge, this is the first complete evaluation of image parsing on a dataset of this size, and it establishes a new benchmark for the problem."
]
} |
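As a concrete illustration of the ReNet idea discussed in the row above (RNN sweeps across the image, horizontally and vertically in both directions, so every output position has a full-image receptive field), here is a minimal NumPy sketch. The single horizontal-then-vertical pass, the weight shapes, the random initialization, and the helper names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_sweep(seq, Wx, Wh, b):
    """Run an LSTM along seq of shape (T, d_in); gates stacked as [i, f, o, g]."""
    T = seq.shape[0]
    d = Wh.shape[0]
    h, c = np.zeros(d), np.zeros(d)
    out = np.empty((T, d))
    for t in range(T):
        z = seq[t] @ Wx + h @ Wh + b                    # (4d,) pre-activations
        i, f, o = (sigmoid(z[k * d:(k + 1) * d]) for k in range(3))
        g = np.tanh(z[3 * d:])
        c = f * c + i * g                               # forget / memorize
        h = o * np.tanh(c)                              # expose memory content
        out[t] = h
    return out

def bidir(seq, fwd_params, bwd_params):
    """Sweep in both directions and concatenate the hidden states."""
    fwd = lstm_sweep(seq, *fwd_params)
    bwd = lstm_sweep(seq[::-1], *bwd_params)[::-1]
    return np.concatenate([fwd, bwd], axis=-1)          # (T, 2d)

def renet_layer(x, d, rng):
    """x: (H, W, C) feature map -> (H, W, 2d) with a full-image receptive field."""
    H, W, C = x.shape
    def params(d_in):                                   # random illustrative weights
        return (0.1 * rng.standard_normal((d_in, 4 * d)),
                0.1 * rng.standard_normal((d, 4 * d)),
                np.zeros(4 * d))
    h_f, h_b = params(C), params(C)
    v_f, v_b = params(2 * d), params(2 * d)
    # Horizontal sweeps over every row, then vertical sweeps over every column;
    # after both passes, each output position has seen the entire input image.
    horiz = np.stack([bidir(x[i], h_f, h_b) for i in range(H)])                 # (H, W, 2d)
    vert = np.stack([bidir(horiz[:, j], v_f, v_b) for j in range(W)], axis=1)   # (H, W, 2d)
    return vert
```

Because the second (vertical) sweep consumes the output of the first, information propagates between any two pixels in at most two sweeps, which is the "direct" global-context modeling the row contrasts with the gradual receptive-field growth of cascaded convolution and pooling.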
1603.05060 | 2299409020 | Linear autoregressive models serve as basic representations of discrete time stochastic processes. Different attempts have been made to provide non-linear versions of the basic autoregressive process, including different versions based on kernel methods. Motivated by the powerful framework of Hilbert space embeddings of distributions, in this paper we apply this methodology for the kernel embedding of an autoregressive process of order @math . By doing so, we provide a non-linear version of an autoregressive process, that shows increased performance over the linear model in highly complex time series. We use the method proposed for one-step ahead forecasting of different time-series, and compare its performance against other non-linear methods. | In @cite_13 , the authors use kernel mean embeddings to provide one-step-ahead distribution prediction. In particular, distributions at any time @math are represented by kernel mean maps. A mean map at time @math can be obtained as a mean map at time @math , linearly transformed by a bounded linear operator. In fact, this corresponds to an ARH(1), where the functions in @math correspond to kernel mean embeddings. The distribution at time @math in the input space is approximated by a weighted sum of historic input samples. The weights in the approximation are computed from particular kernel expressions @cite_13 . Our method considers models of order @math , and our predictions are point estimates in contrast to @cite_13 . Also, we use embeddings of joint probability distributions, @math , instead of embeddings of marginal distributions, @math (mean maps). | {
"cite_N": [
"@cite_13"
],
"mid": [
"2949692310"
],
"abstract": [
"We study the problem of predicting the future, though only in the probabilistic sense of estimating a future state of a time-varying probability distribution. This is not only an interesting academic problem, but solving this extrapolation problem also has many practical application, e.g. for training classifiers that have to operate under time-varying conditions. Our main contribution is a method for predicting the next step of the time-varying distribution from a given sequence of sample sets from earlier time steps. For this we rely on two recent machine learning techniques: embedding probability distributions into a reproducing kernel Hilbert space, and learning operators by vector-valued regression. We illustrate the working principles and the practical usefulness of our method by experiments on synthetic and real data. We also highlight an exemplary application: training a classifier in a domain adaptation setting without having access to examples from the test time distribution at training time."
]
} |
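The general idea in this row — one-step-ahead point forecasts expressed as weighted sums over historic samples, with weights obtained from kernel expressions — can be illustrated with a generic kernelised AR(p) regression. This is a hedged sketch of that idea, not the exact estimator of the paper or of @cite_13; the RBF kernel, the lag order, and the ridge regularisation are all assumptions:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ar_forecast(series, p=3, gamma=1.0, lam=1e-4):
    """One-step-ahead point forecast from a kernelised AR(p) regression.

    Regresses x_t on its p lags with kernel ridge regression, so the
    forecast is a weighted combination of historic targets, with the
    weights alpha computed from kernel evaluations.
    """
    series = np.asarray(series, dtype=float)
    X = np.array([series[t - p:t] for t in range(p, len(series))])   # lag vectors
    y = series[p:]                                                   # next values
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y)             # ridge weights
    k_new = rbf(series[-p:][None, :], X, gamma)                      # kernel to history
    return float(k_new @ alpha)
```

On a smooth series such as a sinusoid the lag-to-next map is recovered almost exactly; the point of the non-linear formulation is that, unlike a linear AR(p) fit, the same estimator also handles series where that map is non-linear.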
1603.04982 | 2301605490 | A database-assisted TV white space network can achieve the goal of green cognitive communication by effectively reducing the energy consumption in cognitive communications. The success of such a novel network relies on a proper business model that provides incentives for all parties involved. In this paper, we propose an integrated spectrum and information market for a database-assisted TV white space network, where the geo-location database serves as both the spectrum market platform and the information market platform. We study the interactions among the database, the spectrum licensee, and unlicensed users by modelling the system as a three-stage sequential decision process. In Stage I, the database and the licensee negotiate regarding the commission for the licensee to use the spectrum market platform. In Stage II, the database and the licensee compete for selling information or channels to unlicensed users. In Stage III, unlicensed users determine whether they should buy exclusive usage right of licensed channels from the licensee or information regarding unlicensed channels from the database. Analyzing such a three-stage model is challenging due to the co-existence of both positive and negative network externalities in the information market. Despite this, we are able to characterize how the network externalities affect the equilibrium behaviors of all parties involved. We analytically show that in this integrated market, the licensee can never get a market share more than half. Our numerical results further show that the proposed integrated market can improve the network profit up to 87%, compared with a pure information market. | Most existing studies on green cognitive communications have aimed at addressing technical issues. For example, Hafeez and Elmirghani in @cite_32 presented a new licensed shared access spectrum sharing scheme to increase the energy efficiency in a network. Palicot in @cite_14 demonstrated how to achieve green radio communications by employing cognitive radio technology. Ji in @cite_19 proposed a platform to explore TV white space in order to achieve green communications in cognitive radio networks. Successful commercialization of new green cognitive technology, however, not only relies on sound engineering, but also depends on the proper design of a business model that provides sufficient incentives to the involved parties such as spectrum licensees and the network operators. The joint study of technology and business issues is relatively underexplored in the current green cognitive radio literature. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_32"
],
"mid": [
"1588756061",
"2059637032",
""
],
"abstract": [
"This paper considers the realisation of a broadcast out-of-band cognitive pilot channels (CPC) piggybacked on a Terrestrial Digital Multimedia Broadcasting (T-DMB) platform. The solution satisfies the requirements of coverage-based CPC schemes. The main goal of these is the enabling of the information transfer to mobile terminals of available knowledge of the wireless network environment, including available radio access networks, frequency bands, network policies and such like information. The proposed CPC solution is first contextualised within the range of CPC concepts, which have been and are being researched. Then, a three-layer ‘CPC over T-DMB’ system architectural design is set out. A prototype of the scheme is realised and implemented on a testbed, the design of which is outlined, and through which the key technological concepts are validated and their technical performance evaluated. The results of these system evaluations are presented and discussed. Copyright © 2013 John Wiley & Sons, Ltd.",
"In this paper, Cognitive Radio (CR) is proposed as an efficient technology to meet the Green communications concept. First of all, the concept of \"green communications\" is extended to the radio communications world. The main topics, described for example in the call for papers of the first Greencom09 Workshop include energy-efficient network, protocols, devices and energy management. But to reduce the global CO2 emission to protect our environment is not the sole way to address this green concept in wireless communications. The proper use and the optimal sharing of spectrum resources is also a very important topic. In this paper the electromagnetic waves can be considered as a pollution for other users and we shall also deal with this problem. Sustainable development should also address the human aspects both from the social and the health point of views. This paper demonstrates that CR may be a very good technology for dealing with the green radio communication, by describing in detail several examples.",
""
]
} |
1603.04982 | 2301605490 | A database-assisted TV white space network can achieve the goal of green cognitive communication by effectively reducing the energy consumption in cognitive communications. The success of such a novel network relies on a proper business model that provides incentives for all parties involved. In this paper, we propose an integrated spectrum and information market for a database-assisted TV white space network, where the geo-location database serves as both the spectrum market platform and the information market platform. We study the interactions among the database, the spectrum licensee, and unlicensed users by modelling the system as a three-stage sequential decision process. In Stage I, the database and the licensee negotiate regarding the commission for the licensee to use the spectrum market platform. In Stage II, the database and the licensee compete for selling information or channels to unlicensed users. In Stage III, unlicensed users determine whether they should buy exclusive usage right of licensed channels from the licensee or information regarding unlicensed channels from the database. Analyzing such a three-stage model is challenging due to the co-existence of both positive and negative network externalities in the information market. Despite this, we are able to characterize how the network externalities affect the equilibrium behaviors of all parties involved. We analytically show that in this integrated market, the licensee can never get a market share more than half. Our numerical results further show that the proposed integrated market can improve the network profit up to 87%, compared with a pure information market. | A common approach for studying market price competition is to model and analyze it as a non-cooperative game. For example, Niyato in @cite_28 proposed an iterative algorithm to achieve the Nash equilibrium in the competitive spectrum trading market. Min in @cite_7 studied two wireless service providers' pricing competition by considering spectrum heterogeneity. Zhu in @cite_12 studied pricing competition among macrocell service providers via a two-stage multi-leader-follower game. In the above literature, the market is assumed to be associated with the negative network externality or non-externality. Luo in @cite_33 studied the price competition in the information market of TV white space, where the information market is only associated with the positive network externality. In our work, the integrated market is associated with both the positive and negative network externality. Our numerical results show that the database benefits from the positive network externality, while the licensee benefits from the negative network externality. Furthermore, which commission charging scheme is better for the database or the licensee depends on what kind of network externality is dominant in the network. This makes our market analysis quite different from the above works. | {
"cite_N": [
"@cite_28",
"@cite_33",
"@cite_12",
"@cite_7"
],
"mid": [
"2139766131",
"2016276503",
"2094667486",
"2137663561"
],
"abstract": [
"We consider the problem of spectrum trading with multiple licensed users (i.e., primary users) selling spectrum opportunities to multiple unlicensed users (i.e., secondary users). The secondary users can adapt the spectrum buying behavior (i.e., evolve) by observing the variations in price and quality of spectrum offered by the different primary users or primary service providers. The primary users or primary service providers can adjust their behavior in selling the spectrum opportunities to secondary users to achieve the highest utility. In this paper, we model the evolution and the dynamic behavior of secondary users using the theory of evolutionary game. An algorithm for the implementation of the evolution process of a secondary user is also presented. To model the competition among the primary users, a noncooperative game is formulated where the Nash equilibrium is considered as the solution (in terms of size of offered spectrum to the secondary users and spectrum price). For a primary user, an iterative algorithm for strategy adaptation to achieve the solution is presented. The proposed game-theoretic framework for modeling the interactions among multiple primary users (or service providers) and multiple secondary users is used to investigate network dynamics under different system parameter settings and under system perturbation.",
"",
"Small cells overlaid with macrocells can increase the capacity of two-tier cellular wireless networks by offloading traffic from macrocells. To motivate the small cell service providers (SSPs) to open portion of the access opportunities to macro users (i.e., to operate in a hybrid access mode), we design an incentive mechanism in which the macrocell service provider (MSP) could pay to the SSPs. According to the price offered by the MSP, the SSPs decide on the open access ratio , which is the ratio of shared radio resource for macro users and the total amount of radio resource in a small cell. The users in this two-tier network can make service selection decisions dynamically according to the performance satisfaction level and cost, which again depend on the pricing and spectrum sharing between the MSP and SSPs. To model this dynamic interactive decision problem, we propose a hierarchical dynamic game framework. In the lower level, we formulate an evolutionary game to model and analyze the adaptive service selection of users. An evolutionary stable strategy (ESS) is considered to be the solution of this game. In the upper level, the MSP and SSPs sequentially determine the pricing strategy and the open access ratio, respectively, taking into account the distribution of dynamic service selection at the lower-level evolutionary game. A Stackelberg differential game is formulated where the MSP and SSPs act as the leader and followers, respectively. An open-loop Stackelberg equilibrium is considered to be the solution of this game. We also extend the hierarchical dynamic game framework and investigate the impact of information delays on the equilibrium solutions. Numerical results show the effectiveness and advantages of dynamic control of the open access ratio and pricing.",
"The dynamic spectrum market (DSM) is a key economic vehicle for realizing the opportunistic spectrum access that will mitigate the anticipated spectrum-scarcity problem. DSM allows legacy spectrum owners to lease their channels to unlicensed spectrum consumers (or secondary users) in order to increase their revenue and improve spectrum utilization. In DSM, determining the optimal spectrum leasing price is an important yet challenging problem that requires a comprehensive understanding of market participants' interests and interactions. In this paper, we study spectrum pricing competition in a duopoly DSM, where two wireless service providers (WSPs) lease spectrum access rights, and secondary users (SUs) purchase the spectrum use to maximize their utility. We identify two essential, but previously overlooked, properties of DSM: 1) heterogeneous spectrum resources at WSPs and 2) spectrum sharing among SUs. We demonstrate the impact of spectrum heterogeneity via an in-depth measurement study using a software-defined radio (SDR) testbed. We then study the impacts of spectrum heterogeneity on WSPs' optimal pricing and SUs' WSP selection strategies using a systematic three-step approach. First, we study how spectrum sharing among SUs subscribed to the same WSP affects the SUs' achievable utility. Then, we derive the SUs' optimal WSP selection strategy that maximizes their payoff, given the heterogeneous spectrum propagation characteristics and prices. We analyze how individual SU preferences affect market evolution and prove the market convergence to a mean-field limit, even though SUs make local decisions. Finally, given the market evolution, we formulate the WSPs' pricing strategies in a duopoly DSM as a noncooperative game and identify its Nash equilibrium points. We find that the equilibrium price and its uniqueness depend on the SUs' geographical density and spectrum propagation characteristics. 
Our analytical framework reveals the impact of spectrum heterogeneity in a real-world DSM, and can be used as a guideline for the WSPs' pricing strategies."
]
} |
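The iterative computation of a pricing Nash equilibrium mentioned for @cite_28 in the row above can be sketched with simultaneous best-response dynamics on a textbook duopoly with linear demand. The demand and cost parameters below are illustrative assumptions, not the model of any cited paper:

```python
def best_response(p_other, a=10.0, b=2.0, c=1.0, cost=1.0):
    """Maximiser of profit (p - cost) * (a - b*p + c*p_other) over own price p."""
    return (a + c * p_other + b * cost) / (2.0 * b)

def iterate_to_nash(p0=(0.0, 0.0), tol=1e-10, max_iter=1000, **demand):
    """Simultaneous best-response dynamics for the two competing sellers."""
    p1, p2 = p0
    for _ in range(max_iter):
        n1 = best_response(p2, **demand)
        n2 = best_response(p1, **demand)
        if abs(n1 - p1) < tol and abs(n2 - p2) < tol:
            break
        p1, p2 = n1, n2
    return p1, p2
```

For these parameters the symmetric equilibrium has the closed form p* = (a + b*cost) / (2b - c) = 4, and the iteration converges to it because each best response is a contraction (slope c/(2b) = 0.25 < 1).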
1603.04466 | 2302483559 | One goal of online social recommendation systems is to harness the wisdom of crowds in order to identify high quality content. Yet the sequential voting mechanisms that are commonly used by these systems are at odds with existing theoretical and empirical literature on optimal aggregation. This literature suggests that sequential voting will promote herding---the tendency for individuals to copy the decisions of others around them---and hence lead to suboptimal content recommendation. Is there a problem with our practice, or a problem with our theory? Previous attempts at answering this question have been limited by a lack of objective measurements of content quality. Quality is typically defined endogenously as the popularity of content in absence of social influence. The flaw of this metric is its presupposition that the preferences of the crowd are aligned with underlying quality. Domains in which content quality can be defined exogenously and measured objectively are thus needed in order to better assess the design choices of social recommendation systems. In this work, we look to the domain of education, where content quality can be measured via how well students are able to learn from the material presented to them. Through a behavioral experiment involving a simulated massive open online course (MOOC) run on Amazon Mechanical Turk, we show that sequential voting systems can surface better content than systems that elicit independent votes. | There are a number of avenues of research in computational social science related to our study. Our work is related to the large literature on identifying content quality, which primarily strives to develop automated techniques for quality prediction in online settings. One way to identify the quality of content is based on the contributor's reputation and past work @cite_20 @cite_12 . Specific content features can also be used, such as the inclusion of references to external resources, the length, the utility or the verifiability @cite_9 @cite_13 @cite_5 . Another growing related area involves popularity prediction @cite_2 @cite_28 @cite_14 , which also strives to quantify the effect of social influence on popularity @cite_7 @cite_15 @cite_26 . Other work has examined the impact of social influence on various online user behaviors more generally @cite_0 @cite_18 @cite_22 @cite_24 @cite_3 . Our work contributes to these areas by identifying the effect that social influence has on the discovery of high quality content in a domain where quality can be objectively defined. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_20",
"@cite_13",
"@cite_12"
],
"mid": [
"1976889376",
"1996263819",
"970275373",
"",
"2147453867",
"2070366435",
"2129251351",
"",
"",
"2059345393",
"2124587771",
"2087637255",
"",
"2007842529",
"2037933327",
"1599813212"
],
"abstract": [
"Previous studies on informational cascades have stressed the importance of informational social influences in decision-making. When people use the product evaluations of others to indicate product quality on the Internet, online herd behavior occurs. This work presents four studies examining herd behavior of online book purchasing. The first two studies addressed how two cues frequently found on the Internet, i.e., star ratings and sales volume, influence consumer online product choices. The last two studies investigated the relative effectiveness of different recommendation sources. The experimental results demonstrated that subjects use the product evaluations and choices of others as cues in making purchasing book decisions on the Internet bookstore. Additionally, recommendations of other consumers exerted a greater influence on subject choices than recommendations of an expert. Finally, recommendations from recommender system influenced online consumer choices more than those from website owners. The results and implications of this research are discussed.",
"On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.",
"In this paper we seek to understand the relationship between the online popularity of an article and its intrinsic quality. Prior experimental work suggests that the relationship between quality and popularity can be very distorted due to factors like social influence bias and inequality in visibility. We conduct a study of popularity on two different social news aggregators, Reddit and Hacker News. We define quality as the number of votes an article would have received if each article was shown, in a bias-free way, to an equal number of users. We propose a simple Poisson regression method to estimate this quality metric from time-series voting data. We validate our methods on data from Reddit and Hacker News, as well the experimental data from prior work. Using these estimates, we find that popularity on Reddit and Hacker News is a relatively strong reflection of intrinsic quality.",
"",
"Hit songs, books, and movies are many times more successful than average, suggesting that “the best” alternatives are qualitatively different from “the rest”; yet experts routinely fail to predict which products will succeed. We investigated this paradox experimentally, by creating an artificial “music market” in which 14,341 participants downloaded previously unknown songs either with or without knowledge of previous participants9 choices. Increasing the strength of social influence increased both inequality and unpredictability of success. Success was also only partly determined by quality: The best songs rarely did poorly, and the worst rarely did well, but any other result was possible.",
"We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors.",
"Yahoo Answers (YA) is a large and diverse question-answer forum, acting not only as a medium for sharing technical knowledge, but as a place where one can seek advice, gather opinions, and satisfy one's curiosity about a countless number of things. In this paper, we seek to understand YA's knowledge sharing and activity. We analyze the forum categories and cluster them according to content characteristics and patterns of interaction among the users. While interactions in some categories resemble expertise sharing forums, others incorporate discussion, everyday advice, and support. With such a diversity of categories in which one can participate, we find that some users focus narrowly on specific topics, while others participate across categories. This not only allows us to map related categories, but to characterize the entropy of the users' interests. We find that lower entropy correlates with receiving higher answer ratings, but only for categories where factual expertise is primarily sought after. We combine both user attributes and answer characteristics to predict, within a given category, whether a particular answer will be chosen as the best answer by the asker.",
"",
"",
"Seemingly similar individuals often experience drastically different success trajectories, with some repeatedly failing and others consistently succeeding. One explanation is preexisting variability along unobserved fitness dimensions that is revealed gradually through differential achievement. Alternatively, positive feedback operating on arbitrary initial advantages may increasingly set apart winners from losers, producing runaway inequality. To identify social feedback in human reward systems, we conducted randomized experiments by intervening in live social environments across the domains of funding, status, endorsement, and reputation. In each system we consistently found that early success bestowed upon arbitrarily selected recipients produced significant improvements in subsequent rates of success compared with the control group of nonrecipients. However, success exhibited decreasing marginal returns, with larger initial advantages failing to produce much further differentiation. These findings suggest a lesser degree of vulnerability of reward systems to incidental or fabricated advantages and a more modest role for cumulative advantage in the explanation of social inequality than previously thought.",
"We describe a general stochastic processes-based approach to modeling user-contributory web sites, where users create, rate and share content. These models describe aggregate measures of activity and how they arise from simple models of individual users. This approach provides a tractable method to understand user activity on the web site and how this activity depends on web site design choices, especially the choice of what information about other users' behaviors is shown to each user. We illustrate this modeling approach in the context of user-created content on the news rating site Digg.",
"This study examines the criteria questioners use to select the best answers in a social Q&A site (Yahoo! Answers) within the theoretical framework of relevance research. In Yahoo! Answers, the questioner selects the answer that best satisfies his or her question and leaves comments on it. Under the assumption that the comments reflect the reasons why questioners select particular answers as the best, this study analyzed 2,140 comments collected from Yahoo! Answers during December 2007. The content analysis identified 23 individual relevance criteria in six classes: Content, Cognitive, Utility, Information Sources, Extrinsic, and Socioemotional. A major finding is that the selection criteria used in a social Q&A site have considerable overlap with many relevance criteria uncovered in previous relevance studies, but that the scope of socio-emotional criteria has been expanded to include the social aspect of this environment. Another significant finding is that the relative importance of individual criteria varies according to topic categories. Socioemotional criteria are popular in discussion-oriented categories, content-oriented criteria in topic-oriented categories, and utility criteria in self-help categories. This study generalizes previous relevance studies to a new environment by going beyond an academic setting. © 2009 Wiley Periodicals, Inc. The authors contributed equally to this work.",
"",
"In this paper, we propose a user reputation model and apply it to a user-interactive question answering system. It combines the social network analysis approach and the user rating approach. Social network analysis is applied to analyze the impact of participant users' relations to their reputations. User rating is used to acquire direct judgment of a user's reputation based on other users' experiences with this user. Preliminary experiments show that the computed reputations based on our proposed reputation model can reflect the actual reputations of the simulated roles and therefore can fit in well with our user-interactive question answering system. Copyright © 2006 John Wiley & Sons, Ltd.",
"The purpose of this work is to identify potential evaluation criteria for interactive, analytical question-answering (QA) systems by analyzing evaluative comments made by users of such a system. Qualitative data collected from intelligence analysts during interviews and focus groups were analyzed to identify common themes related to performance, use, and usability. These data were collected as part of an intensive, three-day evaluation workshop of the High-Quality Interactive Question Answering (HITIQA) system. Inductive coding and memoing were used to identify and categorize these data. Results suggest potential evaluation criteria for interactive, analytical QA systems, which can be used to guide the development and design of future systems and evaluations. This work contributes to studies of QA systems, information seeking and use behaviors, and interactive searching. © 2007 Wiley Periodicals, Inc.",
"We address the problem of ranking question answerers according to their credibility, characterized here by the probability that a given question answerer (user) will be awarded a best answer on a question given the answerer’s question-answering history. This probability (represented by θ) is considered to be a hidden variable that can only be estimated statistically from specific observations associated with the user, namely the number b of best answers awarded, associated with the number n of questions answered. The more specific problem addressed is the potentially high degree of uncertainty associated with such credibility estimates when they are based on small numbers of answers. We address this problem by a kind of Bayesian smoothing. The credibility estimate will consist of a mixture of the overall population statistics and those of the specific user. The greater the number of questions asked, the greater will be the contribution of the specific user statistics relative to those of the overall population. We use the Predictive Stochastic Complexity (PSC) as an accuracy measure to evaluate several methods that can be used for the estimation. We compare our technique (Bayesian Smoothing (BS)) with maximum a posteriori (MAP) estimation, maximum likelihood (ML) estimation and Laplace smoothing."
]
} |
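The Bayesian smoothing estimator described in the last abstract of the row above (shrinking a user's raw best-answer rate b/n toward the population rate) can be sketched as follows; the `strength` parameter and the example numbers are illustrative assumptions, not values from the paper:

```python
def smoothed_credibility(b, n, pop_rate, strength):
    """Estimate a user's best-answer probability from b best answers out of
    n answered questions, shrunk toward the population rate.

    Equivalent to a Beta(strength * pop_rate, strength * (1 - pop_rate))
    prior on the per-answer success probability: with few answers the
    estimate stays near pop_rate; as n grows it approaches the raw b / n.
    """
    return (b + strength * pop_rate) / (n + strength)

# A user with 3/4 best answers vs. one with 300/400, population rate 20%:
few = smoothed_credibility(3, 4, 0.20, strength=20)     # pulled toward 0.20
many = smoothed_credibility(300, 400, 0.20, strength=20)  # close to 0.75
print(round(few, 3), round(many, 3))  # → 0.292 0.724
```

The mixture weight is implicit: the prior contributes `strength` pseudo-answers, so its influence fades exactly as the abstract describes once the user's own answer count dominates.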
1603.04466 | 2302483559 | One goal of online social recommendation systems is to harness the wisdom of crowds in order to identify high quality content. Yet the sequential voting mechanisms that are commonly used by these systems are at odds with existing theoretical and empirical literature on optimal aggregation. This literature suggests that sequential voting will promote herding---the tendency for individuals to copy the decisions of others around them---and hence lead to suboptimal content recommendation. Is there a problem with our practice, or a problem with our theory? Previous attempts at answering this question have been limited by a lack of objective measurements of content quality. Quality is typically defined endogenously as the popularity of content in absence of social influence. The flaw of this metric is its presupposition that the preferences of the crowd are aligned with underlying quality. Domains in which content quality can be defined exogenously and measured objectively are thus needed in order to better assess the design choices of social recommendation systems. In this work, we look to the domain of education, where content quality can be measured via how well students are able to learn from the material presented to them. Through a behavioral experiment involving a simulated massive open online course (MOOC) run on Amazon Mechanical Turk, we show that sequential voting systems can surface better content than systems that elicit independent votes. | There are many mechanisms used for social recommendation systems. Variations, both on the allowable user input and on the form of aggregation, exist. These variations have been studied widely in theory and practice, and an overview is outside the scope of this work (see, e.g., @cite_4 and references cited within). 
We select a specific form of user input (upvotes on ranked content) and the simplest and most common form of aggregation (sorting by number of upvotes) since these choices are prevalent in the education context. Most popular MOOC forums, including Coursera, MIT-X, Harvard-X, and Stanford Online, as well as Q&A websites such as Yahoo! Answers, Stack Overflow, Baidu Knows and Quora use variations of this type of sequential voting mechanism. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2189316604"
],
"abstract": [
"To deal with the huge amount of potentially interesting content on the web today, users seek the help of curators to recommend which content to consume. The two most common forms of curation are expert-based (the editor of a newspaper decides which articles to place on the front page), and algorithmic-based (a search algorithm determines the ranking of websites for a given query). In recent years, content aggregators which use explicit vote-based feedback to curate content for future users have grown exponentially in popularity. The goal of this paper is to provide a descriptive analysis of these crowdsourced curation mechanisms. In particular, we study crowd-curation mechanisms that rank articles according to a score which is a function of userfeedback. We precisely quantify the dynamics of which articles become popular in such these systems. While crowdcuration can be relatively eective for cardinal objectives like discovering and promoting content of high quality, they do not perform well for ordinal objectives such as nding the best articles. Our analysis suggests that user preferences and behavior are a far greater determinant of curation quality than the actual details of the curation mechanism. Finally, we show that certain shifts in user voting behavior can have positive impacts on these systems, suggesting that active moderation of user behavior is important for high quality curation in crowd-sourced systems."
]
} |
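A minimal simulation of the sequential, upvote-sorted mechanism discussed in the row above might look like the sketch below. The rank-dependent attention model and all parameter values are illustrative assumptions, not part of the paper's experimental design:

```python
import random

def sequential_votes(qualities, n_voters, attention=0.5, seed=0):
    """Each arriving voter scans items in order of current upvote count;
    the item at rank r is examined with probability attention**r and, if
    examined, upvoted with probability equal to its quality."""
    rng = random.Random(seed)
    votes = [0] * len(qualities)
    for _ in range(n_voters):
        order = sorted(range(len(votes)), key=lambda i: -votes[i])
        for rank, i in enumerate(order):
            if rng.random() < attention ** rank and rng.random() < qualities[i]:
                votes[i] += 1
    return votes

votes = sequential_votes([0.9, 0.5, 0.2], n_voters=500)
```

Because the display order feeds back into future votes, early random fluctuations can lock in a ranking, which is the herding effect the paper investigates.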
1603.04467 | 2271840356 | TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | The TensorFlow system shares some design characteristics with its predecessor system, DistBelief @cite_33 , and with later systems with similar designs like Project Adam @cite_16 and the Parameter Server project @cite_24 . Like DistBelief and Project Adam, TensorFlow allows computations to be spread out across many computational devices across many machines, and allows users to specify machine learning models using relatively high-level descriptions. Unlike DistBelief and Project Adam, though, the general-purpose dataflow graph model in TensorFlow is more flexible and more amenable to expressing a wider variety of machine learning models and optimization algorithms. 
It also permits a significant simplification by allowing the expression of stateful parameter nodes as variables, and variable update operations that are just additional nodes in the graph; in contrast, DistBelief, Project Adam and the Parameter Server systems all have whole separate parameter server subsystems devoted to communicating and updating parameter values. | {
"cite_N": [
"@cite_24",
"@cite_16",
"@cite_33"
],
"mid": [
"",
"1442374986",
"2168231600"
],
"abstract": [
"",
"Large deep neural network models have recently demonstrated state-of-the-art accuracy on hard visual recognition tasks. Unfortunately such models are extremely time consuming to train and require large amount of compute cycles. We describe the design and implementation of a distributed system called Adam comprised of commodity server machines to train such models that exhibits world-class performance, scaling and task accuracy on visual recognition tasks. Adam achieves high efficiency and scalability through whole system co-design that optimizes and balances workload computation and communication. We exploit asynchrony throughout the system to improve performance and show that it additionally improves the accuracy of trained models. Adam is significantly more efficient and scalable than was previously thought possible and used 30x fewer machines to train a large 2 billion connection model to 2x higher accuracy in comparable time on the ImageNet 22,000 category image classification task than the system that previously held the record for this benchmark. We also show that task accuracy improves with larger models. Our results provide compelling evidence that a distributed systems-driven approach to deep learning using current training algorithms is worth pursuing.",
"Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm."
]
} |
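The design point contrasted in the related-work text above — parameters as stateful variable nodes, with updates expressed as ordinary nodes in the same dataflow graph rather than through a separate parameter-server subsystem — can be illustrated with a toy graph. This is a sketch of the idea, not TensorFlow's actual API:

```python
class Node:
    """A node in a toy dataflow graph; running it runs its inputs first."""
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def run(self):
        return self.op(*(n.run() for n in self.inputs))

class Variable(Node):
    """Stateful parameter node: reading it yields the current value, and
    assign_add builds an ordinary graph node that mutates it in place."""
    def __init__(self, value):
        self.value = value
        self.inputs = ()

    def run(self):
        return self.value

    def assign_add(self, delta_node):
        def op(delta):
            self.value += delta
            return self.value
        return Node(op, (delta_node,))

w = Variable(1.0)
grad = Node(lambda: -0.5)        # stand-in for a computed gradient node
train_step = w.assign_add(grad)  # the update is just another graph node
train_step.run()
print(w.value)  # → 0.5
```

Running `train_step` repeatedly applies the update; no machinery outside the graph is needed to hold or communicate parameter state.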
1603.04467 | 2271840356 | TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | The Halide system @cite_17 for expressing image processing pipelines uses a similar intermediate representation to the TensorFlow dataflow graph. Unlike TensorFlow, though, the Halide system actually has higher-level knowledge of the semantics of its operations and uses this knowledge to generate highly optimized pieces of code that combine multiple operations, taking into account parallelism and locality. Halide runs the resulting computations only on a single machine, and not in a distributed setting. In future work we are hoping to extend TensorFlow with a similar cross-operation dynamic compilation framework. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2055312318"
],
"abstract": [
"Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers."
]
} |
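The cross-operation fusion that the row above credits to Halide can be illustrated on a two-stage pipeline: a naive evaluation materializes the intermediate stage into a buffer, while a fused schedule inlines the producer into the consumer's loop. This is a simplified sketch; Halide derives such code automatically from a separate schedule:

```python
def pipeline_unfused(xs):
    # Stage 1 materialized into a full intermediate buffer...
    blurred = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    # ...then stage 2 reads it back.
    return [2 * v for v in blurred]

def pipeline_fused(xs):
    # Producer inlined into the consumer: one pass, no intermediate buffer.
    return [2 * (a + b) / 2 for a, b in zip(xs, xs[1:])]

xs = [1.0, 3.0, 5.0, 7.0]
assert pipeline_unfused(xs) == pipeline_fused(xs) == [4.0, 8.0, 12.0]
```

The two versions compute identical results; the scheduling choice only trades memory traffic against redundant recomputation, which is exactly the tradeoff space the Halide paper models.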
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no ( @math 2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any @math ε>0. 
| In many applications the involved mobile agents are small and have to be produced at low cost in massive numbers. Consequently, in many papers, the computational power of mobile agents is assumed to be very limited and feasibility of some important distributed tasks for such collections of agents is investigated. For example @cite_8 introduced population protocols , modeling wireless sensor networks by extremely limited finite-state computational devices. The agents of population protocols move according to some mobility pattern totally out of their control and they interact randomly in pairs. This is called passive mobility , intended to model, e.g., some unstable environment, like a flow of water, chemical solution, human blood, wind or unpredictable mobility of agents' carriers (e.g. vehicles or flocks of birds). On the other hand, @cite_32 introduced anonymous, oblivious, asynchronous, mobile agents which cannot directly communicate, but they can occasionally observe the environment. Gathering and convergence @cite_17 @cite_33 @cite_10 @cite_34 , as well as pattern formation @cite_18 @cite_37 @cite_32 @cite_36 were studied for such agents. | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_32",
"@cite_34",
"@cite_10",
"@cite_17"
],
"mid": [
"1983586118",
"2010017329",
"1592126212",
"2706788079",
"2089118805",
"2044484214",
"119243154",
"2041571902",
"2097569568"
],
"abstract": [
"We study the computational power of a distributed system consisting of simple autonomous robots moving on the plane. The robots are endowed with visual perception but do not have any means of explicit communication with each other, and have no memory of the past. In the extensive literature it has been shown how such simple robots can form a single geometric pattern (e.g., a line, a circle, etc), however arbitrary, in spite of their obliviousness. This brings to the front the natural research question: what are the real computational limits imposed by the robots being oblivious? In particular, since obliviousness limits what can be remembered, under what conditions can oblivious robots form a series of geometric patterns? Notice that a series of patterns would create some form of memory in an otherwise memory-less system. In this paper we examine and answer this question showing that, under particular conditions, oblivious robot systems can indeed form series of geometric patterns starting from any arbitrary configuration. More precisely, we study the series of patterns that can be formed by robot systems under various restrictions such as anonymity, asynchrony and lack of common orientation. These results are the first strong indication that oblivious solutions may be obtained also for tasks that intuitively seem to require memory.",
"In this paper we study the problem of gathering a collection of identical oblivious mobile robots in the same location of the plane. Previous investigations have focused mostly on the unlimited visibility setting, where each robot can always see all the others regardless of their distance.In the more difficult and realistic setting where the robots have limited visibility, the existing algorithmic results are only for convergence (towards a common point, without ever reaching it) and only for semi-synchronous environments, where robots' movements are assumed to be performed instantaneously.In contrast, we study this problem in a totally asynchronous setting, where robots' actions, computations, and movements require a finite but otherwise unpredictable amount of time. We present a protocol that allows anonymous oblivious robots with limited visibility to gather in the same location in finite time, provided they have orientation (i.e., agreement on a coordinate system).Our result indicates that, with respect to gathering, orientation is at least as powerful as instantaneous movements.",
"Consider a set of n > 2 simple autonomous mobile robots (decentralized, asynchronous, no common coordinate system, no identities, no central coordination, no direct communication, no memory of the past, deterministic) moving freely in the plane and able to sense the positions of the other robots. We study the primitive task of gathering them at a point not fixed in advance (GATHERING PROBLEM). In the literature, most contributions are simulation-validated heuristics. The existing algorithmic contributions for such robots are limited to solutions for n ≤ 4 or for restricted sets of initial configurations of the robots. In this paper, we present the first algorithm that solves the GATHERING PROBLEM for any initial configuration of the robots.",
"The computational power of networks of small resource-limited mobile agents is explored. Two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size are defined. With a fairness condition on interactions, the concept of stable computation of a function or predicate is defined. Protocols are given that stably compute any predicate in the class definable by formulas of Presburger arithmetic, which includes Boolean combinations of threshold-k, majority, and equivalence modulo m. All stably computable predicates are shown to be in NL. Assuming uniform random sampling of interacting pairs yields the model of conjugating automata. Any counter machine with O (1) counters of capacity O (n) can be simulated with high probability by a conjugating automaton in a population of size n. All predicates computable with high probability in this model are shown to be in P; they can also be computed by a randomized logspace machine in exponential time. Several open problems and promising future directions are discussed.",
"In a system in which anonymous mobile robots repeatedly execute a “Look-Compute-Move” cycle, a robot is said to be oblivious if it has no memory to store its observations in the past, and hence its move depends only on the current observation. This paper considers the pattern formation problem in such a system, and shows that oblivious robots can form any pattern that non-oblivious robots can form, except that two oblivious robots cannot form a point while two non-oblivious robots can. Therefore, memory does not help in forming a pattern, except for the case in which two robots attempt to form a point. Related results on the pattern convergence problem are also presented.",
"In this note we make a minor correction to a scheme for robots to broadcast their private information. All major results of the paper [I. Suzuki and M. Yamashita, SIAM J. Comput., 28 (1999), pp. 1347-1363] hold with this correction.",
"Given a set of n mobile robots in the d-dimensional Euclidean space, the goal is to let them converge to a single not predefined point. The challenge is that the robots are limited in their capabilities. Robots can, upon activation, compute the positions of all other robots using an individual affine coordinate system. The robots are indistinguishable, oblivious and may have different affine coordinate systems. A very general discrete time model assumes that robots are activated in arbitrary order. Further, the computation of a new target point may happen much earlier than the movement, so that the movement is based on outdated information about other robot's positions. Time is measured as the number of rounds, where a round ends as soon as each robot has moved at least once. In [6], the Center of Gravity is considered as target function, convergence was proven, and the number of rounds needed for halving the diameter of the convex hull of the robot's positions was shown to be O(n^2) and Ω(n). We present an easy-to-check property of target functions that guarantee convergence and yields upper time bounds. This property intuitively says that when a robot computes a new target point, this point is significantly within the current axes aligned minimal box containing all robots. This property holds, e.g., for the above-mentioned target function, and improves the above O(n^2) to an asymptotically optimal O(n) upper bound. Our technique also yields a constant time bound for a target function that requires all robots having identical coordinate axes.",
"This paper considers the convergence problem in autonomous mobile robot systems. A natural algorithm for the problem requires the robots to move towards their center of gravity. This paper proves the correctness of the gravitational algorithm in the fully asynchronous model. It also analyzes its convergence rate and establishes its convergence in the presence of crash faults.",
"We present a distributed algorithm for converging autonomous mobile robots with limited visibility toward a single point. Each robot is an omnidirectional mobile processor that repeatedly: 1) observes the relative positions of those robots that are visible; 2) computes its new position based on the observation using the given algorithm; 3) moves to that position. The robots' visibility is limited so that two robots can see each other if and only if they are within distance V of each other and there are no other robots between them. Our algorithm is memoryless in the sense that the next position of a robot is determined entirely from the positions of the robots that it can see at that moment. The correctness of the algorithm is proved formally under an abstract model of the robot system in which: 1) each robot is represented by a point that does not obstruct the view of other robots; 2) the robots' motion is instantaneous; 3) there are no sensor and control error; 4) the issue of collision is ignored. The results of computer simulation under a more realistic model give convincing indication that the algorithm, if implemented on physical robots, will be robust against sensor and control error."
]
} |
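The center-of-gravity convergence results cited in the row above can be illustrated with a synchronous toy round: every robot moves a fraction of the way toward the centroid of all positions, and the diameter of the configuration shrinks geometrically. The step fraction `alpha` and the synchronous, full-visibility setting are simplifying assumptions; the cited algorithms handle asynchronous activation and, in some models, limited visibility:

```python
import math

def gravity_round(points, alpha=0.5):
    """One synchronous round: every robot moves fraction alpha of the way
    toward the center of gravity of all observed positions."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(x + alpha * (cx - x), y + alpha * (cy - y)) for x, y in points]

def diameter(points):
    return max(math.dist(p, q) for p in points for q in points)

pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
start = diameter(pts)  # 5.0
for _ in range(10):
    pts = gravity_round(pts)
print(diameter(pts) < 0.01 * start)  # → True
```

Each round scales every robot's offset from the (invariant) centroid by 1 - alpha, so the diameter contracts by that factor per round — the geometric convergence the cited analyses quantify.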
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no ( @math 2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any @math ε>0. 
| Apart from the feasibility questions for limited agents, the optimization problems related to the efficient usage of agents' resources have also been investigated. Energy management of (not necessarily mobile) computational devices has been a major concern in recent research papers (cf. @cite_38 ). Fundamental techniques proposed to reduce the power consumption of computer systems include power-down strategies (see @cite_38 @cite_7 @cite_25 ) and speed scaling (introduced in @cite_23 ). Several papers proposed centralized @cite_6 @cite_5 @cite_23 or distributed @cite_38 @cite_22 @cite_7 @cite_25 algorithms. However, most of this research on power efficiency concerned optimization of the overall power used. Similarly to our setting, assigning charges to system components so as to minimize the maximal charge has the flavor of another important optimization problem: load balancing (cf. @cite_16 ). | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_7",
"@cite_6",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_25"
],
"mid": [
"2014008637",
"1702576938",
"2133783059",
"2148250519",
"2099961254",
"2123115882",
"1512794094",
"2145398804"
],
"abstract": [
"",
"Computing energy efficient broadcast trees is one of the most prominent operations in wireless networks. For stations embedded in the Euclidean plane, the best analytic result known to date is a 6.33-approximation algorithm based on computing an Euclidean minimum spanning tree. We improve the analysis of this algorithm and show that its approximation ratio is 6, which matches a previously known lower bound for this algorithm.",
"We consider the problem of selecting threshold times to transition a device to low-power sleep states during an idle period. The two-state case in which there is a single active and a single sleep state is a continuous version of the ski-rental problem. We consider a generalized version in which there is more than one sleep state, each with its own power consumption rate and transition costs. We give an algorithm that, given a system, produces a deterministic strategy whose competitive ratio is arbitrarily close to optimal. We also give an algorithm to produce the optimal online strategy given a system and a probability distribution that generates the length of the idle period. We also give a simple algorithm that achieves a competitive ratio of 3 + 2√2 ≈ 5.828 for any system.",
"We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give a linear-time algorithm to compute all non-dominated solutions for the general uniprocessor problem and a fast arbitrarily-good approximation for multiprocessor problems when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting exact real arithmetic, including the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.",
"The energy usage of computer systems is becoming an important consideration, especially for battery-operated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give an off-line algorithm that computes, for any set of jobs, a minimum-energy schedule. We then consider some on-line algorithms and their competitive performance for the power function P(s)=s^p where p ≥ 2. It is shown that one natural heuristic, called the Average Rate heuristic, uses at most a constant times the minimum energy required. The analysis involves bounding the largest eigenvalue in matrices of a special type.",
"A cost aware metric for wireless networks based on remaining battery power at nodes was proposed for shortest-cost routing algorithms, assuming constant transmission power. Power-aware metrics, where transmission power depends on distance between nodes and corresponding shortest power algorithms were also proposed. We define a power-cost metric based on the combination of both node's lifetime and distance-based power metrics. We investigate some properties of power adjusted transmissions and show that, if additional nodes can be placed at desired locations between two nodes at distance d, the transmission power can be made linear in d as opposed to the d^α dependence for α ≥ 2. This provides basis for power, cost, and power-cost localized routing algorithms where nodes make routing decisions solely on the basis of location of their neighbors and destination. The power-aware routing algorithm attempts to minimize the total power needed to route a message between a source and a destination. The cost-aware routing algorithm is aimed at extending the battery's worst-case lifetime at each node. The combined power-cost localized routing algorithm attempts to minimize the total power needed and to avoid nodes with a short battery's remaining lifetime. We prove that the proposed localized power, cost, and power-cost efficient routing algorithms are loop-free and show their efficiency by experiments.",
"Competitive analysis of algorithms.- Self-organizing data structures.- Competitive analysis of paging.- Metrical task systems, the server problem and the work function algorithm.- Distributed paging.- Competitive analysis of distributed algorithms.- On-line packing and covering problems.- On-line load balancing.- On-line scheduling.- On-line searching and navigation.- On-line network routing.- On-line network optimization problems.- Coloring graphs on-line.- On-Line Algorithms in Machine Learning.- Competitive solutions for on-line financial problems.- On the performance of competitive algorithms in practice.- Competitive odds and ends.",
"This article examines two different mechanisms for saving power in battery-operated embedded systems. The first strategy is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing, and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of 2 of the optimal algorithm. We also give an online algorithm with a constant competitive ratio."
]
} |
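The two-state power-down result quoted in this record (the continuous ski-rental analogue, with competitive ratio 2) can be illustrated with a short sketch. This is not the cited paper's multi-state algorithm, just the standard break-even strategy; the names `T`, `r`, `beta` and the assumption of zero sleep power are my own illustrative choices:

```python
def online_cost(T, r, beta):
    """Energy used by the break-even power-down strategy.

    The device idles at power rate r; waking from sleep costs beta
    energy (sleep power is assumed to be zero). The strategy stays
    active until the energy spent idling equals beta, then sleeps.
    T is the (unknown in advance) length of the idle period.
    """
    t_star = beta / r                  # break-even threshold
    if T <= t_star:
        return r * T                   # idle period ended before sleeping
    return r * t_star + beta           # idled to the threshold, slept, woke

def offline_cost(T, r, beta):
    """Optimal offline energy: sleep immediately iff r * T > beta."""
    return min(r * T, beta)
```

For every idle length T the online cost is at most twice the offline optimum, matching the ratio-2 claim for the two-state case; the generalized multi-state schemes of the abstract refine this idea.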
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast, in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no (2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any ε > 0.
| In wireless sensor and ad hoc networks, power awareness has often been related to data communication via efficient routing protocols (e.g. @cite_22 @cite_5 ). However, in many applications of mobile agents (e.g. those involving actively mobile, physical agents) the agent's energy is mostly used for mobility rather than communication, since active moving often requires running mechanical components, while communication mostly involves (less energy-consuming) electronic devices. Consequently, in most tasks involving moving agents, like exploration, searching or pattern formation, the distance traveled is the main optimization criterion (cf. @cite_12 @cite_27 @cite_19 @cite_39 @cite_11 @cite_0 @cite_3 @cite_29 @cite_15 @cite_14 ). Single-agent exploration of an unknown environment has been studied for graphs, e.g. @cite_12 @cite_3 , or geometric terrains, @cite_39 @cite_0 . | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_29",
"@cite_3",
"@cite_39",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_5",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"105334049",
"1702576938",
"",
"2077944048",
"2019729116",
"2095383780",
"1989793404",
"1501957312",
"2123115882",
"2137561762",
"2008110774",
"2019755546"
],
"abstract": [
"We study the problem of exploring an unknown undirected connected graph. Beginning in some start vertex, a searcher must visit each node of the graph by traversing edges. Upon visiting a vertex for the first time, the searcher learns all incident edges and their respective traversal costs. The goal is to find a tour of minimum total cost. Kalyanasundaram and Pruhs (Constructing competitive tours from local information, Theoretical Computer Science 130, pp. 125-138, 1994) proposed a sophisticated generalization of a Depth First Search that is 16-competitive on planar graphs. While the algorithm is feasible on arbitrary graphs, the question whether it has constant competitive ratio in general has remained open. Our main result is an involved lower bound construction that answers this question negatively. On the positive side, we prove that the algorithm has constant competitive ratio on any class of graphs with bounded genus. Furthermore, we provide a constant competitive algorithm for general graphs with a bounded number of distinct weights.",
"Computing energy efficient broadcast trees is one of the most prominent operations in wireless networks. For stations embedded in the Euclidean plane, the best analytic result known to date is a 6.33-approximation algorithm based on computing an Euclidean minimum spanning tree. We improve the analysis of this algorithm and show that its approximation ratio is 6, which matches a previously known lower bound for this algorithm.",
"",
"We wish to explore all edges of an unknown directed, strongly connected graph. At each point, we have a map of all nodes and edges we have visited, we can recognize these nodes and edges if we see them again, and we know how many unexplored edges emanate from each node we have visited, but we cannot tell where each leads until we traverse it. We wish to minimize the ratio of the total number of edges traversed divided by the optimum number of traversals, had we known the graph. For Eulerian graphs, this ratio cannot be better than two, and two is achievable by a simple algorithm. In contrast, the ratio is unbounded when the deficiency of the graph (the number of edges that have to be added to make it Eulerian) is unbounded. Our main result is an algorithm that achieves a bounded ratio when the deficiency is bounded. © 1999 John Wiley & Sons, Inc. J Graph Theory 32: 265–297, 1999",
"In this paper we initiate a new area of study dealing with the best way to search a possibly unbounded region for an object. The model for our search algorithms is that we must pay costs proportional to the distance of the next probe position relative to our current position. This model is meant to give a realistic cost measure for a robot moving in the plane. We also examine the effect of decreasing the amount of a priori information given to search problems. Problems of this type are very simple analogues of non-trivial problems on searching an unbounded region, processing digitized images, and robot navigation. We show that for some simple search problems, knowing the general direction of the goal is much more informative than knowing the distance to the goal.",
"Consider a robot that has to travel from a start location @math to a target @math in an environment with opaque obstacles that lie in its way. The robot always knows its current absolute position and that of the target. It does not, however, know the positions and extents of the obstacles in advance; rather, it finds out about obstacles as it encounters them. We compare the distance walked by the robot in going from @math to @math to the length of the shortest (obstacle-free) path between @math and @math in the scene. We describe and analyze robot strategies that minimize this ratio for different kinds of scenes. In particular, we consider the cases of rectangular obstacles aligned with the axes, rectangular obstacles in more general orientations, and wider classes of convex bodies both in two and three dimensions. For many of these situations, our algorithms are optimal up to constant factors. We study scenes with nonconvex obstacles, which are related to the study of maze traversal. We also show scenes where randomized algorithms are provably better than deterministic algorithms.",
"We study how a mobile robot can learn an unknown environment in a piecemeal manner. The robot's goal is to learn a complete map of its environment, while satisfying the constraint that it must return every so often to its starting position (for refueling, say). The environment is modeled as an arbitrary, undirected graph, which is initially unknown to the robot. We assume that the robot can distinguish vertices and edges that it has already explored. We present a surprisingly efficient algorithm for piecemeal learning an unknown undirected graph G=(V, E) in which the robot explores every vertex and edge in the graph by traversing at most O(E+V^(1+o(1))) edges. This nearly linear algorithm improves on the best previous algorithm, in which the robot traverses at most O(E+V^2) edges. We also give an application of piecemeal learning to the problem of searching a graph for a “treasure.”",
"Search Theory is one of the original disciplines within the field of Operations Research. It deals with the problem faced by a Searcher who wishes to minimize the time required to find a hidden object, or “target. ” The Searcher chooses a path in the “search space” and finds the target when he is sufficiently close to it. Traditionally, the target is assumed to have no motives of its own regarding when it is found; it is simply stationary and hidden according to a known distribution (e. g. , oil), or its motion is determined stochastically by known rules (e. g. , a fox in a forest). The problems dealt with in this book assume, on the contrary, that the “target” is an independent player of equal status to the Searcher, who cares about when he is found. We consider two possible motives of the target, and divide the book accordingly. Book I considers the zero-sum game that results when the target (here called the Hider) does not want to be found. Such problems have been called Search Games (with the “zero-sum” qualifier understood). Book II considers the opposite motive of the target, namely, that he wants to be found. In this case the Searcher and the Hider can be thought of as a team of agents (simply called Player I and Player II) with identical aims, and the coordination problem they jointly face is called the Rendezvous Search Problem.",
"A cost aware metric for wireless networks based on remaining battery power at nodes was proposed for shortest-cost routing algorithms, assuming constant transmission power. Power-aware metrics, where transmission power depends on distance between nodes and corresponding shortest power algorithms were also proposed. We define a power-cost metric based on the combination of both node's lifetime and distance-based power metrics. We investigate some properties of power adjusted transmissions and show that, if additional nodes can be placed at desired locations between two nodes at distance d, the transmission power can be made linear in d as opposed to the d^α dependence for α ≥ 2. This provides basis for power, cost, and power-cost localized routing algorithms where nodes make routing decisions solely on the basis of location of their neighbors and destination. The power-aware routing algorithm attempts to minimize the total power needed to route a message between a source and a destination. The cost-aware routing algorithm is aimed at extending the battery's worst-case lifetime at each node. The combined power-cost localized routing algorithm attempts to minimize the total power needed and to avoid nodes with a short battery's remaining lifetime. We prove that the proposed localized power, cost, and power-cost efficient routing algorithms are loop-free and show their efficiency by experiments.",
"An n-node tree has to be explored by k mobile agents (robots), starting in its root. Every edge of the tree must be traversed by at least one robot, and exploration must be completed as fast as possible. Even when the tree is known in advance, scheduling optimal collective exploration turns out to be NP-hard. We investigate the problem of distributed collective exploration of unknown trees. Not surprisingly, communication between robots influences the time of exploration. Our main communication scenario is the following: robots can communicate by writing at the currently visited node previously acquired information, and reading information available at this node. We construct an exploration algorithm whose running time for any tree is only O(k/log k) larger than optimal exploration time with full knowledge of the tree. (We say that the algorithm has overhead O(k/log k).) On the other hand we show that, in order to get overhead sublinear in the number of robots, some communication is necessary. Indeed, we prove that if robots cannot communicate at all, then every distributed exploration algorithm works in time Ω(k) larger than optimal exploration time with full knowledge, for some trees.",
"We consider exploration problems where a robot has to construct a complete map of an unknown environment. We assume that the environment is modeled by a directed, strongly connected graph. The robot's task is to visit all nodes and edges of the graph using the minimum number R of edge traversals. Deng and Papadimitriou [ Proceedings of the 31st Symposium on the Foundations of Computer Science, 1990, pp. 356--361] showed an upper bound for R of d^(O(d)) m and Koutsoupias (reported by Deng and Papadimitriou) gave a lower bound of @math , where m is the number of edges in the graph and d is the minimum number of edges that have to be added to make the graph Eulerian. We give the first subexponential algorithm for this exploration problem, which achieves an upper bound of d^(O(log d)) m. We also show a matching lower bound of @math for our algorithm. Additionally, we give lower bounds of @math , respectively, @math for various other natural exploration algorithms.",
"We introduce a new learning problem: learning a graph by piecemeal search, in which the learner must return every so often to its starting point (for refueling, say). We present two linear-time piecemeal-search algorithms for learning city-block graphs: grid graphs with rectangular obstacles."
]
} |
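The unbounded-region search model quoted in this record (the searcher pays cost proportional to the distance moved and does not know where the target is) is classically handled on a line by a doubling strategy, whose total travel is at most 9 times the target distance. A simulation sketch, with function and variable names of my own choosing, assuming a target at distance at least 1:

```python
def doubling_search_distance(x):
    """Distance traveled by the doubling strategy on a line.

    The searcher starts at 0 and walks to +1, -2, +4, -8, ...,
    doubling the turning point each phase, until the leg it is
    walking sweeps over the target at position x (with |x| >= 1).
    """
    total = 0.0        # distance traveled so far
    pos = 0.0          # current position of the searcher
    bound = 1.0        # magnitude of the next turning point
    direction = 1      # side of the origin to explore next
    while True:
        turn = direction * bound
        lo, hi = min(pos, turn), max(pos, turn)
        if lo <= x <= hi:              # this leg sweeps over the target
            return total + abs(x - pos)
        total += abs(turn - pos)       # walk the full leg, then turn around
        pos = turn
        bound *= 2
        direction = -direction
```

For targets at distance at least 1, the traveled distance stays within the classical factor of 9 of |x|, which is the competitive ratio usually cited for this strategy.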
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast, in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no (2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any ε > 0.
| While a single agent cannot explore a graph of unknown size unless pebble (landmark) usage is permitted (see @cite_20 ), a pair of robots is able to explore and map a directed graph of maximal degree @math in @math time with high probability (cf. @cite_24 ). In the case of a team of collaborating mobile agents, the challenge is to balance the workload among the agents so that the time to achieve the required goal is minimized. However, this task is often hard (cf. @cite_13 ), even in the case of two agents in a tree @cite_21 . On the other hand, the authors of @cite_15 study the problem of agents exploring a tree, showing a @math competitive ratio of their distributed algorithm, provided that writing (and reading) at tree nodes is permitted. | {
"cite_N": [
"@cite_21",
"@cite_24",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2027056835",
"2154255883",
"2137561762",
"2143987816",
"2149888497"
],
"abstract": [
"Suppose two travelling salesmen must visit together all points (nodes) of a tree, and the objective is to minimize the maximal length of their tours. Home locations of the salesmen are given, and it is required to find optimal tours. For this NP-hard problem a heuristic with complexity O(n) is presented. The worst-case relative error for the heuristic performance is 1/3 for the case of equal home locations for both servers and 1/2 for the case of different home locations.",
"We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by wandering actively through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs.",
"An n-node tree has to be explored by k mobile agents (robots), starting in its root. Every edge of the tree must be traversed by at least one robot, and exploration must be completed as fast as possible. Even when the tree is known in advance, scheduling optimal collective exploration turns out to be NP-hard. We investigate the problem of distributed collective exploration of unknown trees. Not surprisingly, communication between robots influences the time of exploration. Our main communication scenario is the following: robots can communicate by writing at the currently visited node previously acquired information, and reading information available at this node. We construct an exploration algorithm whose running time for any tree is only O(k/log k) larger than optimal exploration time with full knowledge of the tree. (We say that the algorithm has overhead O(k/log k).) On the other hand we show that, in order to get overhead sublinear in the number of robots, some communication is necessary. Indeed, we prove that if robots cannot communicate at all, then every distributed exploration algorithm works in time Ω(k) larger than optimal exploration time with full knowledge, for some trees.",
"Several polynomial time approximation algorithms for some @math -complete routing problems are presented, and the worst-case ratios of the cost of the obtained route to that of an optimal are determined. A mixed-strategy heuristic with a bound of 9/5 is presented for the stacker-crane problem (a modified traveling salesman problem). A tour-splitting heuristic is given for k-person variants of the traveling salesman problem, the Chinese postman problem, and the stacker-crane problem, for which a minimax solution is sought. This heuristic has a bound of @math , where e is the bound for the corresponding 1-person algorithm.",
"Exploring and mapping an unknown environment is a fundamental problem that is studied in a variety of contexts. Many results have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions about the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. The edges emanating from each vertex are numbered from ‘1’ to ‘d’, but we do not assume that the vertices of G are labeled. Since the robot has no way of distinguishing between vertices, it has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a “pebble”—a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then Θ(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic."
]
} |
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast, in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no (2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any ε > 0.
| Assumptions similar to ours have been made in @cite_19 @cite_0 @cite_29 , where the mobile agents are constrained to travel a fixed distance while exploring an unknown graph @cite_19 @cite_0 or tree @cite_29 . In @cite_19 @cite_0 a mobile agent has to return to its home base to refuel (or recharge its battery), so that the same maximal distance may be traversed repeatedly. @cite_29 gives an 8-competitive distributed algorithm for a set of agents with the same amount of power exploring a tree, starting at the same node. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_29"
],
"mid": [
"2095383780",
"1989793404",
""
],
"abstract": [
"Consider a robot that has to travel from a start location @math to a target @math in an environment with opaque obstacles that lie in its way. The robot always knows its current absolute position and that of the target. It does not, however, know the positions and extents of the obstacles in advance; rather, it finds out about obstacles as it encounters them. We compare the distance walked by the robot in going from @math to @math to the length of the shortest (obstacle-free) path between @math and @math in the scene. We describe and analyze robot strategies that minimize this ratio for different kinds of scenes. In particular, we consider the cases of rectangular obstacles aligned with the axes, rectangular obstacles in more general orientations, and wider classes of convex bodies both in two and three dimensions. For many of these situations, our algorithms are optimal up to constant factors. We study scenes with nonconvex obstacles, which are related to the study of maze traversal. We also show scenes where randomized algorithms are provably better than deterministic algorithms.",
"We study how a mobile robot can learn an unknown environment in a piecemeal manner. The robot's goal is to learn a complete map of its environment, while satisfying the constraint that it must return every so often to its starting position (for refueling, say). The environment is modeled as an arbitrary, undirected graph, which is initially unknown to the robot. We assume that the robot can distinguish vertices and edges that it has already explored. We present a surprisingly efficient algorithm for piecemeal learning an unknown undirected graph G=(V, E) in which the robot explores every vertex and edge in the graph by traversing at most O(E+V1+o(1)) edges. This nearly linear algorithm improves on the best previous algorithm, in which the robot traverses at most O(E+V2) edges. We also give an application of piecemeal learning to the problem of searching a graph for a “treasure.”",
""
]
} |
1603.04234 | 2058122117 | A set of identical, mobile agents is deployed in a weighted network. Each agent has a battery--a power source allowing it to move along network edges. An agent uses its battery proportionally to the distance traveled. We consider two tasks: convergecast, in which at the beginning, each agent has some initial piece of information, and information of all agents has to be collected by some agent; and broadcast, in which information of one specified agent has to be made available to all other agents. In both tasks, the agents exchange the currently possessed information when they meet. The objective of this paper is to investigate what is the minimal value of power, initially available to all agents, so that convergecast or broadcast can be achieved. We study this question in the centralized and the distributed settings. In the centralized setting, there is a central monitor that schedules the moves of all agents. In the distributed setting every agent has to perform an algorithm being unaware of the network. In the centralized setting, we give a linear-time algorithm to compute the optimal battery power and the strategy using it, both for convergecast and for broadcast, when agents are on the line. We also show that finding the optimal battery power for convergecast or for broadcast is NP-hard for the class of trees. On the other hand, we give a polynomial algorithm that finds a 2-approximation for convergecast and a 4-approximation for broadcast, for arbitrary graphs. In the distributed setting, we give a 2-competitive algorithm for convergecast in trees and a 4-competitive algorithm for broadcast in trees. The competitive ratio of 2 is proved to be the best for the problem of convergecast, even if we only consider line networks. Indeed, we show that there is no (2-ε)-competitive algorithm for convergecast or for broadcast in the class of lines, for any ε>0.
| The problem is sometimes viewed as a special case of the data aggregation question (e.g. @cite_31 @cite_2 ) and it has been studied mainly for wireless and sensor networks, where the battery power usage is an important issue (cf. @cite_28 @cite_30 ). Recently, @cite_9 considered the online and offline settings of the scheduling problem when data has to be delivered to mobile clients while they travel within the communication range of wireless stations. @cite_28 presents a randomized distributed algorithm for geometric ad-hoc networks and studies the trade-off between the energy used and the latency of convergecast. The problem for stationary processors has been extensively studied both for the message passing model, see e.g. @cite_1 , and for the wireless model, see e.g. @cite_26 . To the best of our knowledge, the problem of the present paper, in which the mobile agents perform convergecast or broadcast by exchanging the held information when meeting, while optimizing the maximal power used by a mobile agent, has never been investigated before. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_2",
"@cite_31"
],
"mid": [
"2142502714",
"2142458954",
"2015710071",
"2120423983",
"1976485065",
"2126644064",
"2110173519"
],
"abstract": [
"A wireless sensor network (WSN) consists of sensors implanted in an environment for collecting and transmitting data regarding changes in the environment based on the requests from a controlling device (called base station) using wireless communication. WSNs are being used in medical, military, and environment monitoring applications. Broadcast (dissemination of information from a central node) and convergecast (gathering of information towards a central node) are important communication paradigms across all application domains. Most sensor applications involve both convergecasting and broadcasting. The time taken to complete either of them has to be kept minimal. This can be accomplished by constructing an efficient tree for both broadcasting as well as convergecasting and allocating wireless communication channels to ensure collision-free communication. There exist several works on broadcasting in multihop radio networks (a.k.a. ad hoc networks), which can also be used for broadcasting in WSNs. These algorithms construct a broadcast tree and compute a schedule for transmitting and receiving for each node to achieve collision-free broadcasting. In this paper, we show that we need a new algorithm for applications, which involve both convergecasting and broadcasting since the broadcast tree may not be efficient for convergecasting. So we propose a heuristic algorithm (convergecasting tree construction and channel allocation algorithm (CTCCAA)), which constructs a tree with schedules assigned to nodes for collision free convergecasting. The algorithm is capable of code allocation (direct sequence spread spectrum (DSSS) frequency hopping spread spectrum (FHSS)), in case multiple codes are available, to minimize the total duration required for convergecasting. We also show that the same tree can be used for broadcasting and is as efficient as a tree exclusively constructed for broadcasting.",
"The time-complexity of deterministic and randomized protocols for achieving broadcast (distributing a message from a source to all other nodes) in arbitrary multi-hop radio networks is investigated. In many such networks, communication takes place in synchronous time-slots. A processor receives a message at a certain time-slot if exactly one of its neighbors transmits at that time-slot. We assume no collision-detection mechanism; i.e., it is not always possible to distinguish the case where no neighbor transmits from the case where several neighbors transmit simultaneously. We present a randomized protocol that achieves broadcast in time which is optimal up to a logarithmic factor. In particular, with probability 1 --E, the protocol achieves broadcast within O((D + log n s) ‘log n) time-slots, where n is the number of processors in the network and D its diameter. On the other hand, we prove a linear lower bound on the deterministic time-complexity of broadcast in this model. Namely, we show that any deterministic broadcast protocol requires 8(n) time-slots, even if the network has diameter 3, and n is known to all processors. These two results demonstrate an exponential gap in complexity between randomization and determinism.",
"Wireless ad hoc radio networks have gained a lot of attention in recent years. We consider geometric networks, where nodes are located in a Euclidean plane. We assume that each node has a variable transmission range and can learn the distance to the closest active neighbor at any time. We also assume that nodes have a special collision detection (CD) capability so that a transmitting node can detect a collision within its transmission range. We study the basic communication problem of collecting data from all nodes called convergecast. Recently, there appeared many new applications such as real-time multimedia, battlefield communications and rescue operations that impose stringent delay requirements on the convergecast time. We measure the latency of convergecast, that is the number of time steps needed to collect the data in any n-node network. We propose a very simple randomized distributed algorithm that has the expected running time O(logn). We also show that this bound is tight and any algorithm needs @W(logn) time steps while performing convergecast in an arbitrary network. One of the most important problems in wireless ad hoc networks is to minimize the energy consumption, which maximizes the network lifetime. We study the trade-off between the energy and the latency of convergecast. We show that our algorithm consumes at most O(nlogn) times the minimum energy. We also demonstrate that for a line topology, the minimum energy convergecast takes n time steps while any algorithm performing convergecast within O(logn) time steps requires @W(n logn) times the minimum energy.",
"We consider variations of a problem in which data must be delivered to mobile clients en route, as they travel toward their destinations. The data can only be delivered to the mobile clients as they pass within range of wireless base stations. Example scenarios include the delivery of building maps to firefighters responding to multiple alarms. We cast this scenario as a parallel-machine scheduling problem with the little-studied property that jobs may have different release times and deadlines when assigned to different machines. We present new algorithms and also adapt existing algorithms, for both online and offline settings. We evaluate these algorithms on a variety of problem instance types, using both synthetic and real-world data, including several geographical scenarios, and show that our algorithms produce schedules achieving near-optimal throughput.",
"This paper concerns the message complexity of broadcast in arbitrary point-to-point communication networks. Broadcast is a task initiated by a single processor that wishes to convey a message to all processors in the network. The widely accepted model of communication networks, in which each processor initially knows the identity of its neighbors but does not know the entire network topology, is assumed. Although it seems obvious that the number of messages required for broadcast in this model equals the number of links, no proof of this basic fact has been given before. It is shown that the message complexity of broadcast depends on the exact complexity measure. If messages of unbounded length are counted at unit cost, then broadcast requires T(u V u) messages, where V is the set of processors in the network. It is proved that, if one counts messages of bounded length , then broadcast requires T(u E u) messages, where E is the set of edges in the network. Assuming an intermediate model in which each vertex knows the topology of the network in radius r ≥ 1 from itself, matching upper and lower bounds of T(min u E u, u V u 1+T(l) r ) is proved on the number of messages of bounded length required for broadcast. Both the upper and lower bounds hold for both synchronous and asynchronous network models. The same results hold for the construction of spanning trees, and various other global tasks.",
"Wireless sensor networks consist of sensor nodes with sensing and com- munication capabilities. We focus on data-aggregation problems in energy- constrained sensor networks. The main goal of data-aggregation algorithms is to gather and aggregate data in an energy efficient manner so that net- work lifetime is enhanced. In this article we present a survey of data-aggre- gation algorithms in wireless sensor networks. We compare and contrast different algorithms on the basis of performance measures such as lifetime, latency, and data accuracy. We conclude with possible future research directions.",
"Sensor networks are distributed event-based systems that differ from traditional communication networks in several ways: sensor networks have severe energy constraints, redundant low-rate data, and many-to-one flows. Data-centric mechanisms that perform in-network aggregation of data are needed in this setting for energy-efficient information flow. In this paper we model data-centric routing and compare its performance with traditional end-to-end routing schemes. We examine the impact of source-destination placement and communication network density on the energy costs and delay associated with data aggregation. We show that data-centric routing offers significant performance gains across a wide range of operational scenarios. We also examine the complexity of optimal data aggregation, showing that although it is an NP-hard problem in general, there exist useful polynomial-time special cases."
]
} |
1603.03817 | 2301352816 | We study paths of time-length @math of a continuous-time random walk on @math subject to self-interaction that depends on the geometry of the walk range and a collection of random, uniformly positive and finite edge weights. The interaction enters through a Gibbs weight at inverse temperature @math ; the "energy" is the total sum of the edge weights for edges on the outer boundary of the range. For edge weights sampled from a translation-invariant, ergodic law, we prove that the range boundary condensates around an asymptotic shape in the limit @math followed by @math . The limit shape is a minimizer (unique, modulo translates) of the sum of the principal harmonic frequency of the domain and the perimeter with respect to the first-passage percolation norm derived from (the law of) the edge weights. A dense subset of all norms in @math , and thus a large variety of shapes, arise from the class of weight distributions to which our proofs apply. | As noted above, Berestycki and Yadin @cite_25 (apparently prompted by questions from I. Benjamini) studied a related model of an interacting random walk. There are two notable differences between their and our setting: First, their interaction includes the internal components of the boundary and, second, it is given by the number of on the inner boundary. For this case they showed that the path is confined (with different type of control in @math and @math ) on the spatial scale The exponent is strictly less than @math in all spatial dimensions @math ; the walk is thus squeezed'' by the interaction relative to its typical (diffusive) scaling. | {
"cite_N": [
"@cite_25"
],
"mid": [
"1573609569"
],
"abstract": [
"We introduce a Gibbs measure on nearest-neighbour paths of length @math in the Euclidean @math -dimensional lattice, where each path is penalised by a factor proportional to the size of its boundary and an inverse temperature @math . We prove that, for all @math , the random walk condensates to a set of diameter @math in dimension @math , up to a multiplicative constant. In all dimensions @math , we also prove that the volume is bounded above by @math and the diameter is bounded below by @math . Similar results hold for a random walk conditioned to have local time greater than @math everywhere in its range when @math is larger than some explicit constant, which in dimension two is the logarithm of the connective constant."
]
} |
1603.03817 | 2301352816 | We study paths of time-length @math of a continuous-time random walk on @math subject to self-interaction that depends on the geometry of the walk range and a collection of random, uniformly positive and finite edge weights. The interaction enters through a Gibbs weight at inverse temperature @math ; the "energy" is the total sum of the edge weights for edges on the outer boundary of the range. For edge weights sampled from a translation-invariant, ergodic law, we prove that the range boundary condensates around an asymptotic shape in the limit @math followed by @math . The limit shape is a minimizer (unique, modulo translates) of the sum of the principal harmonic frequency of the domain and the perimeter with respect to the first-passage percolation norm derived from (the law of) the edge weights. A dense subset of all norms in @math , and thus a large variety of shapes, arise from the class of weight distributions to which our proofs apply. | The appearance of the Dirichlet eigenvalue has to do with the large-deviations cost of keeping the walk confined to a given spatial region. The associated large-deviations principle (which goes back to Donsker and Varadhan @cite_11 ) underlies a large body of literature on random walks interacting via their local time and/or through an underlying random environment; e.g., the study of the parabolic Anderson model (cf. König @cite_9 for a recent review), random walk and/or Brownian motion among random obstacles (Sznitman @cite_7 ), etc. Two recent papers of Asselah and Shapira @cite_0 @cite_20 are relevant for our context as they develop a detailed large-deviation approach to the size of the boundary of the random walk range in spatial dimensions @math . This expands on the study of moderate deviations for the "Brownian sausage" by van den Berg, Bolthausen and den Hollander @cite_17 . | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_17",
"@cite_0",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"1592584515",
"2564141741",
"2209438317",
"2347064268",
"1992546213"
],
"abstract": [
"",
"This is a survey on the intermittent behavior of the parabolic Anderson model, which is the Cauchy problem for the heat equation with random potential on the lattice ℤd. We first introduce the model and give heuristic explanations of the long-time behavior of the solution, both in the annealed and the quenched setting for time-independent potentials. We thereby consider examples of potentials studied in the literature. In the particularly important case of an i.i.d. potential with double-exponential tails we formulate the asymptotic results in detail. Furthermore, we explain that, under mild regularity assumptions, there are only four different universality classes of asymptotic behaviors. Finally, we study the moment Lyapunov exponents for space-time homogeneous catalytic potentials generated by a Poisson field of random walks.",
"For a > 0, let Wa(t) be the a-neighbourhood of standard Brownian motion in Rd starting at 0 and observed until time t. It is well-known that E|Wa(t)| ?at (t ? 8) for d = 3, with ?a the Newtonian capacity of the ball with radius a. We prove that @math and derive a variational representation for the rate function I?a . We show that the optimal strategy to realise the above moderate deviation is for Wa(t) to look like a Swiss cheese': Wa(t) has random holes whose sizes are of order 1 and whose density varies on scale t1 d. The optimal strategy is such that t-1 dWa(t) is delocalised in the limit as t ? 8. This is markedly different from the optimal strategy for large deviations |Wa(t)| = f(t) with f(t) = o(t), where Wa(t) is known to fill completely a ball of volume f(t) and nothing outside, so that Wa(t) has no holes and f(t)-1 dWa(t) is localised in the limit as t ? 8. We give a detailed analysis of the rate function I?a , in particular, its behaviour near the boundary points of (0, ?a) as well as certain monotonicity properties. It turns out that I?a has an infinite slope at ?a and, remarkably, for d = 5 is nonanalytic at some critical point in (0, ?a), above which it follows a pure power law. This crossover is associated with a collapse transition in the optimal strategy. We also derive the analogous moderate deviation result for d = 2. In this case E|Wa(t)| 2p t log t (t ? 8), and we prove that @math The rate function I2p has a finite slope at 2p.",
"We study the boundary of the range for the simple random walk on Z d in the transient regime d ≥ 3. We show that sizes of the range and its boundary differ mainly by a martingale. As a consequence, we obtain a bound on the variance of order n log n in dimension three. We also establish a central limit theorem in dimension four and larger.",
"We study downward deviations of the boundary of the range of a transient walk on the Euclidean lattice. We describe the optimal strategy adopted by the walk in order to shrink the boundary of its range. The technics we develop apply equally well to the range, and provide pathwise statements for the Swiss cheese picture of Bolthausen, van den Berg and den Hollander [BBH01].",
""
]
} |
1603.04186 | 2952545564 | Convolutional neural networks have been shown to develop internal representations, which correspond closely to semantically meaningful objects and parts, although trained solely on class labels. Class Activation Mapping (CAM) is a recent method that makes it possible to easily highlight the image regions contributing to a network's classification decision. We build upon these two developments to enable a network to re-examine informative image regions, which we term introspection. We propose a weakly-supervised iterative scheme, which shifts its center of attention to increasingly discriminative regions as it progresses, by alternating stages of classification and introspection. We evaluate our method and show its effectiveness over a range of datasets, where we obtain competitive or state-of-the-art results: on Stanford-40 Actions, we set a new state of the art of 81.74%. On FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements over baselines, some of which include significantly more supervision. | Supervised methods consistently outperform unsupervised or semi-supervised methods, as they allow for the incorporation of prior knowledge into the learning process. There is a trade-off between more accurate classification results and structured output on the one hand, and the cost of labor-intensive manual annotations on the other. Some examples are @cite_9 @cite_28 , where bounding boxes and part annotations are given at train time. Aside from the resources required for large-scale annotations, such methods sidestep the question of learning from weakly supervised data (and mostly unsupervised data), as is known to happen in human infants, who can learn from limited examples @cite_8 . Following are a few lines of work related to the proposed method. | {
"cite_N": [
"@cite_28",
"@cite_9",
"@cite_8"
],
"mid": [
"2275770195",
"",
"2194321275"
],
"abstract": [
"Pose variation and subtle differences in appearance are key challenges to fine-grained classification. While deep networks have markedly improved general recognition, many approaches to fine-grained recognition rely on anchoring networks to parts for better accuracy. Identifying parts to find correspondence discounts pose variation so that features can be tuned to appearance. To this end previous methods have examined how to find parts and extract pose-normalized features. These methods have generally separated fine-grained recognition into stages which first localize parts using hand-engineered and coarsely-localized proposal features, and then separately learn deep descriptors centered on inferred part positions. We unify these steps in an end-to-end trainable network supervised by keypoint locations and class labels that localizes parts by a fully convolutional network to focus the learning of feature representations for the fine-grained classification task. Experiments on the popular CUB200 dataset show that our method is state-of-the-art and suggest a continuing role for strong supervision.",
"",
"People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior."
]
} |
1603.04186 | 2952545564 | Convolutional neural networks have been shown to develop internal representations, which correspond closely to semantically meaningful objects and parts, although trained solely on class labels. Class Activation Mapping (CAM) is a recent method that makes it possible to easily highlight the image regions contributing to a network's classification decision. We build upon these two developments to enable a network to re-examine informative image regions, which we term introspection. We propose a weakly-supervised iterative scheme, which shifts its center of attention to increasingly discriminative regions as it progresses, by alternating stages of classification and introspection. We evaluate our method and show its effectiveness over a range of datasets, where we obtain competitive or state-of-the-art results: on Stanford-40 Actions, we set a new state of the art of 81.74%. On FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements over baselines, some of which include significantly more supervision. | Several methods have been proposed to visualize the output of a neural net or explore its internal activations. Zeiler et al. @cite_11 found patterns that activate hidden units via deconvolutional neural networks. They also explore the localization ability of a CNN by observing the change in classification as different image regions are masked out. @cite_13 solves an optimization problem, aiming to generate an image whose features are similar to those of a target image, regularized by a natural image prior. Zhou et al. @cite_0 aim to explicitly find what image patches activate hidden network units, finding that indeed many of them correspond to semantic concepts and object parts.
These visualizations suggest that, despite training solely with image labels, there is much to exploit within the internal representations learned by the network, and that the emergent representations can be used for weakly supervised localization and other tasks of a fine-grained nature. | {
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_11"
],
"mid": [
"1899185266",
"2949987032",
"1849277567"
],
"abstract": [
"With the success of new computational architectures for visual processing, such as convolutional neural networks (CNN) and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful objects detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.",
"Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.",
"Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark [18]. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we explore both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Used in a diagnostic role, these visualizations allow us to find model architectures that outperform on the ImageNet classification benchmark. We also perform an ablation study to discover the performance contribution from different model layers. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets."
]
} |
1603.04186 | 2952545564 | Convolutional neural networks have been shown to develop internal representations, which correspond closely to semantically meaningful objects and parts, although trained solely on class labels. Class Activation Mapping (CAM) is a recent method that makes it possible to easily highlight the image regions contributing to a network's classification decision. We build upon these two developments to enable a network to re-examine informative image regions, which we term introspection. We propose a weakly-supervised iterative scheme, which shifts its center of attention to increasingly discriminative regions as it progresses, by alternating stages of classification and introspection. We evaluate our method and show its effectiveness over a range of datasets, where we obtain competitive or state-of-the-art results: on Stanford-40 Actions, we set a new state of the art of 81.74%. On FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements over baselines, some of which include significantly more supervision. | Some recent works attempt to obtain object localization through weak labels, i.e., the net is trained on image-level class labels, but it also learns localization. @cite_3 localizes image regions pertaining to the target class by masking out sub-images and inspecting the change in activations of the network. Oquab et al. @cite_21 use global max-pooling to obtain points on the target objects. Recently, Zhou et al. @cite_2 used global average pooling (GAP) to generate a Class-Activation Mapping (CAM), visualizing discriminative image regions and enabling the localization of detected concepts. Our introspection mechanism utilizes their CAMs to iteratively identify discriminative regions and uses them to improve classification without additional supervision. | {
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_2"
],
"mid": [
"1994488211",
"2951505120",
"2950328304"
],
"abstract": [
"Successful methods for visual object recognition typically rely on training datasets containing lots of richly annotated images. Detailed image annotation, e.g. by object bounding boxes, however, is both expensive and often subjective. We describe a weakly supervised convolutional neural network (CNN) for object classification that relies only on image-level labels, yet can learn from cluttered scenes containing multiple objects. We quantify its object classification and object location prediction performance on the Pascal VOC 2012 (20 object classes) and the much larger Microsoft COCO (80 object classes) datasets. We find that the network (i) outputs accurate image-level labels, (ii) predicts approximate locations (but not extents) of objects, and (iii) performs comparably to its fully-supervised counterparts using object bounding box annotation for training.",
"This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially masking out different regions of the image. The masking out of a region that includes the object typically causes a significant drop in recognition score. This idea is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. Our object localization scheme outperforms existing proposal methods in both precision and recall for small number of subwindow proposals (e.g., on ILSVRC-2012 it produces a relative gain of 23.4% over the state-of-the-art for top-1 hypothesis). Furthermore, our experiments show that the annotations automatically-generated by our method can be used to train object detectors yielding recognition results remarkably close to those obtained by training on manually-annotated bounding boxes.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them."
]
} |
1603.04186 | 2952545564 | Convolutional neural networks have been shown to develop internal representations, which correspond closely to semantically meaningful objects and parts, although trained solely on class labels. Class Activation Mapping (CAM) is a recent method that makes it possible to easily highlight the image regions contributing to a network's classification decision. We build upon these two developments to enable a network to re-examine informative image regions, which we term introspection. We propose a weakly-supervised iterative scheme, which shifts its center of attention to increasingly discriminative regions as it progresses, by alternating stages of classification and introspection. We evaluate our method and show its effectiveness over a range of datasets, where we obtain competitive or state-of-the-art results: on Stanford-40 Actions, we set a new state of the art of 81.74%. On FGVC-Aircraft and the Stanford Dogs dataset, we show consistent improvements over baselines, some of which include significantly more supervision. | Recently, some attention-based mechanisms have been proposed, which allow focusing on relevant image regions, either for the task of better classification @cite_10 or efficient object localization @cite_23. Such methods benefit from the recent fusion between the fields of deep learning and reinforcement learning @cite_14. Another method of interest is the spatial-transformer network of @cite_26: they designed a network that learns and applies spatial warping to the feature maps, effectively aligning inputs, which results in increased robustness to geometric transformations. This enables fine-grained categorization on the CUB-200-2011 birds dataset @cite_25 by transforming the image so that only discriminative parts are considered (bird's head, body).
Additional work appears in @cite_5, which discovers discriminative patches and groups them to generate part detectors, whose detections are combined with the discovered patches for a final classification. In @cite_27, the outputs of two networks are combined via an outer product, creating a strong feature representation. @cite_29 discovers and uses parts by applying co-segmentation to ground-truth bounding boxes, followed by alignment. | {
"cite_N": [
"@cite_14",
"@cite_26",
"@cite_29",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_10",
"@cite_25"
],
"mid": [
"1757796397",
"2951005624",
"1898560071",
"",
"2179488730",
"",
"2951527505",
""
],
"abstract": [
"We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.",
"Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.",
"Scaling up fine-grained recognition to all domains of fine-grained objects is a challenge the computer vision community will need to face in order to realize its goal of recognizing all object categories. Current state-of-the-art techniques rely heavily upon the use of keypoint or part annotations, but scaling up to hundreds or thousands of domains renders this annotation cost-prohibitive for all but the most important categories. In this work we propose a method for fine-grained recognition that uses no part annotations. Our method is based on generating parts using co-segmentation and alignment, which we combine in a discriminative mixture. Experimental results show its efficacy, demonstrating state-of-the-art results even when compared to methods that use part annotations during training.",
"",
"We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
""
]
} |
1603.03925 | 2953022248 | Automatically generating a natural language description of an image has attracted interest recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics. | There is a growing body of literature on image captioning, which can be generally divided into two categories: top-down and bottom-up. Bottom-up approaches are the "classical" ones, which start with visual concepts, objects, attributes, words and phrases, and combine them into sentences using language models. @cite_29 and @cite_26 detect concepts and use templates to obtain sentences, while @cite_27 pieces together detected concepts. @cite_49 and @cite_33 use more powerful language models. @cite_28 and @cite_42 are the latest attempts along this direction, and they achieve close to state-of-the-art performance on various image captioning benchmarks. | {
"cite_N": [
"@cite_26",
"@cite_33",
"@cite_28",
"@cite_29",
"@cite_42",
"@cite_27",
"@cite_49"
],
"mid": [
"",
"2149172860",
"2949769367",
"1897761818",
"1780856595",
"",
"2143449221"
],
"abstract": [
"",
"We present a holistic data-driven approach to image description generation, exploiting the vast amount of (noisy) parallel image data and associated natural language descriptions available on the web. More specifically, given a query image, we retrieve existing human-composed phrases used to describe visually similar images, then selectively combine those phrases to generate a novel description for the query image. We cast the generation process as constraint optimization problems, collectively incorporating multiple interconnected aspects of language composition for content planning, surface realization and discourse structure. Evaluation by human annotators indicates that our final system generates more semantically correct and linguistically appealing descriptions than two nontrivial baselines.",
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"Humans can prepare concise descriptions of pictures, focusing on what they find important. We demonstrate that automatic methods can do so too. We describe a system that can compute a score linking an image to a sentence. This score can be used to attach a descriptive sentence to a given image, or to obtain images that illustrate a given sentence. The score is obtained by comparing an estimate of meaning obtained from the image to one obtained from the sentence. Each estimate of meaning comes from a discriminative procedure that is learned using data. We evaluate on a novel dataset consisting of human-annotated images. While our underlying estimate of meaning is impoverished, it is sufficient to produce very good quantitative results, evaluated with a novel score that can account for synecdoche.",
"Generating a novel textual description of an image is an interesting problem that connects computer vision and natural language processing. In this paper, we present a simple model that is able to generate descriptive sentences given a sample image. This model has a strong focus on the syntax of the descriptions. We train a purely bilinear model that learns a metric between an image representation (generated from a previously trained Convolutional Neural Network) and phrases that are used to describe them. The system is then able to infer phrases from a given image sample. Based on caption syntax statistics, we propose a simple language model that can produce relevant descriptions for a given test image using the phrases inferred. Our approach, which is considerably simpler than state-of-the-art models, achieves comparable results on the recently released Microsoft COCO dataset.",
"",
"Describing the main event of an image involves identifying the objects depicted and predicting the relationships between them. Previous approaches have represented images as unstructured bags of regions, which makes it difficult to accurately predict meaningful relationships between regions. In this paper, we introduce visual dependency representations to capture the relationships between the objects in an image, and hypothesize that this representation can improve image description. We test this hypothesis using a new data set of region-annotated images, associated with visual dependency representations and gold-standard descriptions. We describe two template-based description generation models that operate over visual dependency representations. In an image description task, we find that these models outperform approaches that rely on object proximity or corpus information to generate descriptions on both automatic measures and on human judgements."
]
} |
1603.03925 | 2953022248 | Automatically generating a natural language description of an image has attracted interest recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics. | Visual attention has long been known in Psychology and Neuroscience but has only recently been studied in Computer Vision and related areas. In terms of models, @cite_31 @cite_46 approach it with Boltzmann machines, while @cite_10 does so with recurrent neural networks. In terms of applications, @cite_11 studies it for image tracking, @cite_32 studies it for image recognition of multiple objects, and @cite_23 uses it for image generation. Finally, as we discuss, we are not the first to consider it for image captioning. In @cite_43, a spatial attention model for image captioning is proposed. | {
"cite_N": [
"@cite_31",
"@cite_32",
"@cite_43",
"@cite_23",
"@cite_46",
"@cite_10",
"@cite_11"
],
"mid": [
"2141399712",
"1484210532",
"2950178297",
"1850742715",
"",
"2951527505",
"2154071538"
],
"abstract": [
"We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.",
"",
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored)-restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with decaying resolution toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object. The posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway, we encounter an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain."
]
} |
1603.03875 | 2300659874 | We present a portable device to capture both shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in a similar way as he/she scans with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. Then BRDF observations at these vertices are merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes. | The work closest to ours is @cite_6, which made use of a single Kinect for appearance capture. While both systems feature portability and ease of use, ours differs from that of @cite_6 in the following ways. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168792505"
],
"abstract": [
"We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system."
]
} |
1603.03875 | 2300659874 | We present a portable device to capture both shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in a similar way as he/she scans with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. Then BRDF observations at these vertices are merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes. | 1. @cite_6 worked on a single object because of its requirement for environmental lighting, while our device works on scenes, such as a room's corner, thanks to its active illumination in the IR spectrum; | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168792505"
],
"abstract": [
"We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system."
]
} |
1603.03875 | 2300659874 | We present a portable device to capture both shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in a similar way as he/she scans with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. Then BRDF observations at these vertices are merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes. | 2. @cite_6 assumed a parametric BRDF model, while we use a bivariate BRDF model, which is deduced from reflectance symmetries and is represented as a 2D table. Compared with a parametric model, the bivariate model is applicable to a wider range of real-world materials. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168792505"
],
"abstract": [
"We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system."
]
} |
1603.03875 | 2300659874 | We present a portable device to capture both shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in a similar way as he/she scans with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. Then BRDF observations at these vertices are merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes. | 3. @cite_6 required illumination calibration whenever the visible illumination changes, which is done by placing a mirror sphere into the scene, while our device does not require such calibration; | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168792505"
],
"abstract": [
"We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system."
]
} |
1603.03875 | 2300659874 | We present a portable device to capture both shape and reflectance of an indoor scene. Consisting of a Kinect, an IR camera and several IR LEDs, our device allows the user to acquire data in a similar way as he/she scans with a single Kinect. Scene geometry is reconstructed by KinectFusion. To estimate reflectance from incomplete and noisy observations, 3D vertices of the same material are identified by our material segmentation propagation algorithm. Then BRDF observations at these vertices are merged into a more complete and accurate BRDF for the material. The effectiveness of our device is demonstrated by quality results on real-world scenes. | 4. @cite_6 required a user-specified number of materials, which is hard for average users to decide, while our system does not require additional input from the user. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2168792505"
],
"abstract": [
"We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on-the-fly and provides quick visual feedback. We build the system entirely on low-end, off-the-shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra-red emitter receiver, originally designed for depth computation, as an active hand-held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor, both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. The proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with the RISAS detector. | In general, feature extraction can be separated into two sub-problems: keypoint detection and descriptor construction. Some feature extraction algorithms, such as SIFT @cite_4 and SURF @cite_2, tightly couple these two steps, while methods such as FAST (Features from Accelerated Segment Test) and BRIEF (Binary Robust Independent Elementary Features) only focus on either keypoint detection or feature description. | {
"cite_N": [
"@cite_4",
"@cite_2"
],
"mid": [
"2151103935",
"1677409904"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. Proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with RISAS detector. | SIFT is one of the most well-known visual features @cite_4 . SIFT combines a Difference-of-Gaussian interest region detector with a gradient orientation histogram as the descriptor. By constructing the descriptor from a scale- and orientation-normalised image patch, SIFT exhibits robustness to scale and rotation variations. SURF, proposed in @cite_2 , relies on integral images for image convolution. SURF uses a Hessian matrix-based measure for the detector and a distribution-based descriptor. @cite_7 proposed BRIEF, which uses a binary string as the descriptor. The BRIEF feature requires relatively little memory and can be matched quickly in real time using the Hamming distance, even with very limited computational resources. However, BRIEF is not designed to be robust to scale variations. 
@cite_5 proposed BRISK (Binary Robust Invariant Scalable Keypoints), which combines a scale-invariant keypoint detector with a binary-string descriptor. ORB, another well-known binary feature, proposed in @cite_17 , has been widely used in the SLAM community @cite_0 . ORB is invariant to rotation and more robust to noise than BRIEF. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_17"
],
"mid": [
"2151103935",
"1491719799",
"2295862812",
"1677409904",
"2141584146",
"2117228865"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.",
"",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance.",
"Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood.",
"Feature matching is at the base of many computer vision problems, such as object recognition or structure from motion. Current methods rely on costly descriptors for detection and matching. In this paper, we propose a very fast binary descriptor based on BRIEF, called ORB, which is rotation invariant and resistant to noise. We demonstrate through experiments how ORB is at two orders of magnitude faster than SIFT, while performing as well in many situations. The efficiency is tested on several real-world applications, including object detection and patch-tracking on a smart phone."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. Proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with RISAS detector. | In order to select salient keypoints from geometric information, researchers have adopted different criteria to evaluate the distinctiveness of the points in the scene, e.g., the normal vector of the surface and the curvature of the mesh. The survey paper @cite_13 categorises 3D keypoint detectors into two classes, fixed-scale and adaptive-scale detectors, and provides a detailed comparison of existing 3D keypoint detectors. Hebert and colleagues contributed several well-known detectors such as LBSS (Laplace-Beltrami Scale-Space) and MeshDoG @cite_23 . Intrinsic Shape Signature (ISS) @cite_12 was proposed to characterise a local/semi-local region of a point cloud, and ISS has been combined with various 3D descriptors in RGB-D descriptor evaluations @cite_11 . | {
"cite_N": [
"@cite_13",
"@cite_12",
"@cite_23",
"@cite_11"
],
"mid": [
"2067956279",
"1989625560",
"2140526520",
"2024039087"
],
"abstract": [
"In the past few years detection of repeatable and distinctive keypoints on 3D surfaces has been the focus of intense research activity, due on the one hand to the increasing diffusion of low-cost 3D sensors, on the other to the growing importance of applications such as 3D shape retrieval and 3D object recognition. This work aims at contributing to the maturity of this field by a thorough evaluation of several recent 3D keypoint detectors. A categorization of existing methods in two classes, that allows for highlighting their common traits, is proposed, so as to abstract all algorithms to two general structures. Moreover, a comprehensive experimental evaluation is carried out in terms of repeatability, distinctiveness and computational efficiency, based on a vast data corpus characterized by nuisances such as noise, clutter, occlusions and viewpoint changes.",
"This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.",
"Several computer vision algorithms rely on detecting a compact but representative set of interest regions and their associated descriptors from input data. When the input is in the form of an unorganized 3D point cloud, current practice is to compute shape descriptors either exhaustively or at randomly chosen locations using one or more preset neighborhood sizes. Such a strategy ignores the relative variation in the spatial extent of geometric structures and also risks introducing redundancy in the representation. This paper pursues multi-scale operators on point clouds that allow detection of interest regions whose locations as well as spatial extent are completely data-driven. The approach distinguishes itself from related work by operating directly in the input 3D space without assuming an available polygon mesh or resorting to an intermediate global 2D parameterization. Results are shown to demonstrate the utility and robustness of the proposed method.",
"A number of 3D local feature descriptors have been proposed in the literature. It is however, unclear which descriptors are more appropriate for a particular application. A good descriptor should be descriptive, compact, and robust to a set of nuisances. This paper compares ten popular local feature descriptors in the contexts of 3D object recognition, 3D shape retrieval, and 3D modeling. We first evaluate the descriptiveness of these descriptors on eight popular datasets which were acquired using different techniques. We then analyze their compactness using the recall of feature matching per each float value in the descriptor. We also test the robustness of the selected descriptors with respect to support radius variations, Gaussian noise, shot noise, varying mesh resolution, distance to the mesh boundary, keypoint localization error, occlusion, clutter, and dataset size. Moreover, we present the performance results of these descriptors when combined with different 3D keypoint detection methods. We finally analyze the computational efficiency for generating each descriptor."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. Proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with RISAS detector. | Descriptors can also be constructed using 3D geometric information. Johnson and Hebert @cite_14 proposed the spin image, a data-level descriptor that can be used to match surfaces represented as meshes. With the development of low-cost RGB-D sensors, geometric information of the environment can be easily captured, and thus 3D shape descriptors have attracted renewed attention. More recent developments include PFH @cite_19 , FPFH (Fast PFH) and SHOT (Signature of Histograms of OrienTations). PFH @cite_19 is a multi-dimensional histogram which characterises the local geometry around a given keypoint. PFH is invariant to position, orientation and point cloud density. 
An enhanced version of PFH, termed FPFH @cite_6 , reduces the complexity of PFH from @math to @math , where @math is the number of points in the neighbourhood of the keypoint. The SHOT descriptor proposed in @cite_9 is another example of a widely used local surface descriptor. SHOT encodes histograms of the surface normals over different partitions of the support region. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_14",
"@cite_6"
],
"mid": [
"1735588541",
"2160643963",
"2099606917",
"2160821342"
],
"abstract": [
"",
"This paper deals with local 3D descriptors for surface matching. First, we categorize existing methods into two classes: Signatures and Histograms. Then, by discussion and experiments alike, we point out the key issues of uniqueness and repeatability of the local reference frame. Based on these observations, we formulate a novel comprehensive proposal for surface representation, which encompasses a new unique and repeatable local reference frame as well as a new 3D descriptor. The latter lays at the intersection between Signatures and Histograms, so as to possibly achieve a better balance between descriptiveness and robustness. Experiments on publicly available datasets as well as on range scans obtained with Spacetime Stereo provide a thorough validation of our proposal.",
"We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.",
"In our recent work [1], [2], we proposed Point Feature Histograms (PFH) as robust multi-dimensional features which describe the local geometry around a point p for 3D point cloud datasets. In this paper, we modify their mathematical expressions and perform a rigorous analysis on their robustness and complexity for the problem of 3D registration for overlapping point cloud views. More concretely, we present several optimizations that reduce their computation times drastically by either caching previously computed values or by revising their theoretical formulations. The latter results in a new type of local features, called Fast Point Feature Histograms (FPFH), which retain most of the discriminative power of the PFH. Moreover, we propose an algorithm for the online computation of FPFH features for realtime applications. To validate our results we demonstrate their efficiency for 3D registration and propose a new sample consensus based method for bringing two datasets into the convergence basin of a local non-linear optimizer: SAC-IA (SAmple Consensus Initial Alignment)."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. Proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with RISAS detector. | @cite_25 demonstrated that better object recognition performance can be achieved by combining the RGB and depth channels. @cite_20 developed CSHOT by incorporating RGB information into the original SHOT descriptor. @cite_15 proposed a binary RGB-D descriptor, BRAND, which encodes local information as a binary string and thus achieves low memory consumption. They also demonstrated the rotation and scale invariance of BRAND. More recently, @cite_1 proposed LOIND, which encodes texture and depth information into one descriptor supported by orders of intensities and angles between normal vectors. | {
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_25",
"@cite_20"
],
"mid": [
"2002054993",
"1585415652",
"2106766627",
"2139114878"
],
"abstract": [
"This work introduces a novel descriptor called Binary Robust Appearance and Normals Descriptor (BRAND), that efficiently combines appearance and geometric shape information from RGB-D images, and is largely invariant to rotation and scale transform. The proposed approach encodes point information as a binary string providing a descriptor that is suitable for applications that demand speed performance and low memory consumption. Results of several experiments demonstrate that as far as precision and robustness are con- cerned, BRAND achieves improved results when compared to state of the art descriptors based on texture, geometry and combination of both information. We also demonstrate that our descriptor is robust and provides reliable results in a registration task even when a sparsely textured and poorly illuminated scene is used.",
"We introduce a novel RGB-D descriptor called local ordinal intensity and normal descriptor (LOIND) with the integration of texture information in RGB image and geometric information in depth image. We implement the descriptor with a 3-D histogram supported by orders of intensities and angles between normal vectors, in addition with the spatial sub-divisions. The former ordering information which is invariant under the transformation of illumination, scale and rotation provides the robustness of our descriptor, while the latter spatial distribution provides higher information capacity so that the discriminative performance is promoted. Comparable experiments with the state-of-art descriptors, e.g. SIFT, SURF, CSHOT and BRAND, show the effectiveness of our LOIND to the complex illumination changes and scale transformation. We also provide a new method to estimate the dominant orientation with only the geometric information, which can ensure the rotation invariance under extremely poor illumination.",
"In this work we address joint object category and instance recognition in the context of RGB-D (depth) cameras. Motivated by local distance learning, where a novel view of an object is compared to individual views of previously seen objects, we define a view-to-object distance where a novel view is compared simultaneously to all views of a previous object. This novel distance is based on a weighted combination of feature differences between views. We show, through jointly learning per-view weights, that this measure leads to superior classification performance on object category and instance recognition. More importantly, the proposed distance allows us to find a sparse solution via Group-Lasso regularization, where a small subset of representative views of an object is identified and used, with the rest discarded. This significantly reduces computational cost without compromising recognition accuracy. We evaluate the proposed technique, Instance Distance Learning (IDL), on the RGB-D Object Dataset, which consists of 300 object instances in 51 everyday categories and about 250,000 views of objects with both RGB color and depth. We empirically compare IDL to several alternative state-of-the-art approaches and also validate the use of visual and shape cues and their combination.",
"Motivated by the increasing availability of 3D sensors capable of delivering both shape and texture information, this paper presents a novel descriptor for feature matching in 3D data enriched with texture. The proposed approach stems from the theory of a recently proposed descriptor for 3D data which relies on shape only, and represents its generalization to the case of multiple cues associated with a 3D mesh. The proposed descriptor, dubbed CSHOT, is demonstrated to notably improve the accuracy of feature matching in challenging object recognition scenarios characterized by the presence of clutter and occlusions."
]
} |
1603.04134 | 2300107061 | This paper presents a novel appearance and shape feature, RISAS, which is robust to viewpoint, illumination, scale and rotation variations. RISAS consists of a keypoint detector and a feature descriptor both of which utilise texture and geometric information present in the appearance and shape channels. A novel response function based on the surface normals is used in combination with the Harris corner detector for selecting keypoints in the scene. A strategy that uses the depth information for scale estimation and background elimination is proposed to select the neighbourhood around the keypoints in order to build precise invariant descriptors. Proposed descriptor relies on the ordering of both grayscale intensity and shape information in the neighbourhood. Comprehensive experiments which confirm the effectiveness of the proposed RGB-D feature when compared with CSHOT and LOIND are presented. Furthermore, we highlight the utility of incorporating texture and shape information in the design of both the detector and the descriptor by demonstrating the enhanced performance of CSHOT and LOIND when combined with RISAS detector. | Most current RGB-D fused descriptors adopt traditional 2D keypoint detectors that rely only on appearance information. For instance, BRAND @cite_15 is combined with the CenSurE (Centre Surround Extremas) detector @cite_18 , and LOIND @cite_1 uses keypoints from a multi-scale Harris detector. In CSHOT @cite_20 , in order to eliminate the influence of the detector, keypoints are selected randomly from the model. Clearly, selecting keypoints by exploiting geometrically information-rich regions in the scene has the potential to enhance the matching performance of an RGB-D descriptor. In this work, we propose a keypoint detector and descriptor which rely on information from both the appearance and depth channels. 
It is demonstrated that using both texture and depth information leads to a detector that extracts keypoints which are more distinctive in the context of a descriptor that also uses similar information, thus improving the discriminative power of the descriptor. | {
"cite_N": [
"@cite_1",
"@cite_15",
"@cite_18",
"@cite_20"
],
"mid": [
"1585415652",
"2002054993",
"1533496907",
"2139114878"
],
"abstract": [
"We introduce a novel RGB-D descriptor called local ordinal intensity and normal descriptor (LOIND) with the integration of texture information in RGB image and geometric information in depth image. We implement the descriptor with a 3-D histogram supported by orders of intensities and angles between normal vectors, in addition with the spatial sub-divisions. The former ordering information which is invariant under the transformation of illumination, scale and rotation provides the robustness of our descriptor, while the latter spatial distribution provides higher information capacity so that the discriminative performance is promoted. Comparable experiments with the state-of-art descriptors, e.g. SIFT, SURF, CSHOT and BRAND, show the effectiveness of our LOIND to the complex illumination changes and scale transformation. We also provide a new method to estimate the dominant orientation with only the geometric information, which can ensure the rotation invariance under extremely poor illumination.",
"This work introduces a novel descriptor called Binary Robust Appearance and Normals Descriptor (BRAND), that efficiently combines appearance and geometric shape information from RGB-D images, and is largely invariant to rotation and scale transform. The proposed approach encodes point information as a binary string providing a descriptor that is suitable for applications that demand speed performance and low memory consumption. Results of several experiments demonstrate that as far as precision and robustness are con- cerned, BRAND achieves improved results when compared to state of the art descriptors based on texture, geometry and combination of both information. We also demonstrate that our descriptor is robust and provides reliable results in a registration task even when a sparsely textured and poorly illuminated scene is used.",
"We explore the suitability of different feature detectors for the task of image registration, and in particular for visual odometry, using two criteria: stability (persistence across viewpoint change) and accuracy (consistent localization across viewpoint change). In addition to the now-standard SIFT, SURF, FAST, and Harris detectors, we introduce a suite of scale-invariant center-surround detectors (CenSurE) that outperform the other detectors, yet have better computational characteristics than other scale-space detectors, and are capable of real-time implementation.",
"Motivated by the increasing availability of 3D sensors capable of delivering both shape and texture information, this paper presents a novel descriptor for feature matching in 3D data enriched with texture. The proposed approach stems from the theory of a recently proposed descriptor for 3D data which relies on shape only, and represents its generalization to the case of multiple cues associated with a 3D mesh. The proposed descriptor, dubbed CSHOT, is demonstrated to notably improve the accuracy of feature matching in challenging object recognition scenarios characterized by the presence of clutter and occlusions."
]
} |
1603.04037 | 2298866288 | In this work we propose to utilize information about human actions to improve pose estimation in monocular videos. To this end, we present a pictorial structure model that exploits high-level information about activities to incorporate higher-order part dependencies by modeling action specific appearance models and pose priors. However, instead of using an additional expensive action recognition framework, the action priors are efficiently estimated by our pose estimation framework. This is achieved by starting with a uniform action prior and updating the action prior during pose estimation. We also show that learning the right amount of appearance sharing among action classes improves the pose estimation. Our proposed model achieves state-of-the-art performance on two challenging datasets for pose estimation and action recognition with over 80,000 test images. | Several approaches have been proposed to improve the accuracy of PS models for human pose estimation. For instance, joint dependencies can be modeled not only by the PS model, but also by a mid-level image representation such as poselets @cite_0 , exemplars @cite_14 or data-dependent probabilities learned by a neural network @cite_6 . Pose estimation in videos can be improved by taking temporal information or motion cues into account @cite_21 @cite_30 @cite_35 @cite_2 @cite_61 @cite_27 . In @cite_21 , several pose hypotheses are generated for each video frame and a smooth configuration of poses over time is selected from all hypotheses. Instead of the complete articulated pose, @cite_16 and @cite_5 track individual body parts and regularize the trajectories of the body parts through the locations of neighboring parts. Similar in spirit, the approach in @cite_44 jointly tracks symmetric body parts in order to better incorporate spatio-temporal constraints, and also to avoid double-counting. 
Optical flow information has also been used to enhance detected poses at each video frame by analyzing body motion in adjacent frames @cite_36 @cite_35 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_61",
"@cite_14",
"@cite_36",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_44",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_16"
],
"mid": [
"",
"",
"",
"1997691213",
"",
"2099305880",
"2155394491",
"",
"2207258529",
"",
"",
"2157982431",
"1981095481"
],
"abstract": [
"",
"",
"",
"Pictorial structure (PS) models are extensively used for part-based recognition of scenes, people, animals and multi-part objects. To achieve tractability, the structure and parameterization of the model is often restricted, for example, by assuming tree dependency structure and unimodal, data-independent pairwise interactions. These expressivity restrictions fail to capture important patterns in the data. On the other hand, local methods such as nearest-neighbor classification and kernel density estimation provide non-parametric flexibility but require large amounts of data to generalize well. We propose a simple semi-parametric approach that combines the tractability of pictorial structure inference with the flexibility of non-parametric methods by expressing a subset of model parameters as kernel regression estimates from a learned sparse set of exemplars. This yields query-specific, image-dependent pose priors. We develop an effective shape-based kernel for upper-body pose similarity and propose a leave-one-out loss function for learning a sparse subset of exemplars for kernel regression. We apply our techniques to two challenging datasets of human figure parsing and advance the state-of-the-art (from 80 to 86 on the Buffy dataset [8]), while using only 15 of the training data as exemplars.",
"",
"We describe a method for generating N-best configurations from part-based models, ensuring that they do not overlap according to some user-provided definition of overlap. We extend previous N-best algorithms from the speech community to incorporate non-maximal suppression cues, such that pixel-shifted copies of a single configuration are not returned. We use approximate algorithms that perform nearly identical to their exact counterparts, but are orders of magnitude faster. Our approach outperforms standard methods for generating multiple object configurations in an image. We use our method to generate multiple pose hypotheses for the problem of human pose estimation from video sequences. We present quantitative results that demonstrate that our framework significantly improves the accuracy of a state-of-the-art pose estimation algorithm.",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"",
"In this paper, we present a method to estimate a sequence of human poses in unconstrained videos. In contrast to the commonly employed graph optimization framework, which is NP-hard and needs approximate solutions, we formulate this problem into a unified two stage tree-based optimization problem for which an efficient and exact solution exists. Although the proposed method finds an exact solution, it does not sacrifice the ability to model the spatial and temporal constraints between body parts in the video frames, indeed it even models the symmetric parts better than the existing methods. The proposed method is based on two main ideas: 'Abstraction' and 'Association' to enforce the intra-and inter-frame body part constraints respectively without inducing extra computational complexity to the polynomial time solution. Using the idea of 'Abstraction', a new concept of 'abstract body part' is introduced to model not only the tree based body part structure similar to existing methods, but also extra constraints between symmetric parts. Using the idea of 'Association', the optimal tracklets are generated for each abstract body part, in order to enforce the spatiotemporal constraints between body parts in adjacent frames. Finally, a sequence of the best poses is inferred from the abstract body part tracklets through the tree-based optimization. We evaluated the proposed method on three publicly available video based human pose estimation datasets, and obtained dramatically improved performance compared to the state-of-the-art methods.",
"",
"",
"In this paper, we present a method for estimating articulated human poses in videos. We cast this as an optimization problem defined on body parts with spatio-temporal links between them. The resulting formulation is unfortunately intractable and previous approaches only provide approximate solutions. Although such methods perform well on certain body parts, e.g., head, their performance on lower arms, i.e., elbows and wrists, remains poor. We present a new approximate scheme with two steps dedicated to pose estimation. First, our approach takes into account temporal links with subsequent frames for the less-certain parts, namely elbows and wrists. Second, our method decomposes poses into limbs, generates limb sequences across time, and recomposes poses by mixing these body part sequences. We introduce a new dataset \"Poses in the Wild\", which is more challenging than the existing ones, with sequences containing background clutter, occlusions, and severe camera motion. We experimentally compare our method with recent approaches on this new dataset as well as on two other benchmark datasets, and show significant improvement.",
"The human body is structurally symmetric. Tracking by detection approaches for human pose suffer from double counting, where the same image evidence is used to explain two separate but symmetric parts, such as the left and right feet. Double counting, if left unaddressed can critically affect subsequent processes, such as action recognition, affordance estimation, and pose reconstruction. In this work, we present an occlusion aware algorithm for tracking human pose in an image sequence, that addresses the problem of double counting. Our key insight is that tracking human pose can be cast as a multi-target tracking problem where the ”targets” are related by an underlying articulated structure. The human body is modeled as a combination of singleton parts (such as the head and neck) and symmetric pairs of parts (such as the shoulders, knees, and feet). Symmetric body parts are jointly tracked with mutual exclusion constraints to prevent double counting by reasoning about occlusion. We evaluate our algorithm on an outdoor dataset with natural background clutter, a standard indoor dataset (HumanEva-I), and compare against a state of the art pose estimation algorithm."
]
} |
1603.04037 | 2298866288 | In this work we propose to utilize information about human actions to improve pose estimation in monocular videos. To this end, we present a pictorial structure model that exploits high-level information about activities to incorporate higher-order part dependencies by modeling action specific appearance models and pose priors. However, instead of using an additional expensive action recognition framework, the action priors are efficiently estimated by our pose estimation framework. This is achieved by starting with a uniform action prior and updating the action prior during pose estimation. We also show that learning the right amount of appearance sharing among action classes improves the pose estimation. Our proposed model achieves state-of-the-art performance on two challenging datasets for pose estimation and action recognition with over 80,000 test images. | The closest to our work is the recent approach of @cite_23 that jointly estimates the action classes and refines human poses. The approach first estimates human poses at each video frame and decomposes them into sub-parts. These sub-parts are then tracked across video frames based on action specific spatio-temporal constraints. Finally, the action labels and joint locations are inferred from the part tracks that maximize a defined objective function. While the approach shows promising results, it does not re-estimate the parts but only re-combines them over frames; only the temporal constraints are influenced by an activity. Moreover, it relies on two additional activity recognition approaches based on optical flow and appearance features to obtain good action recognition accuracy, which results in a very large computational overhead as compared to an approach that estimates activities using only the pose information. In this work, we show that additional action recognition approaches are not required, but instead predict the activities directly from a sequence of poses. 
In contrast to @cite_23 , we condition the pose model itself on activities and re-estimate the entire pose per frame. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1912967058"
],
"abstract": [
"Action recognition and pose estimation from video are closely related tasks for understanding human motion, most methods, however, learn separate models and combine them sequentially. In this paper, we propose a framework to integrate training and testing of the two tasks. A spatial-temporal And-Or graph model is introduced to represent action at three scales. Specifically the action is decomposed into poses which are further divided to mid-level ST-parts and then parts. The hierarchical structure of our model captures the geometric and appearance variations of pose at each frame and lateral connections between ST-parts at adjacent frames capture the action-specific motion information. The model parameters for three scales are learned discriminatively, and action labels and poses are efficiently inferred by dynamic programming. Experiments demonstrate that our approach achieves state-of-art accuracy in action recognition while also improving pose estimation."
]
} |
1603.03795 | 2302255789 | Game balancing is an important part of the (computer) game design process, in which designers adapt a game prototype so that the resulting gameplay is as entertaining as possible. In industry, the evaluation of a game is often based on costly playtests with human players. It suggests itself to automate this process using surrogate models for the prediction of gameplay and outcome. In this paper, the feasibility of automatic balancing using simulation- and deck-based objectives is investigated for the card game top trumps. Additionally, the necessity of a multi-objective approach is asserted by a comparison with the only known (single-objective) method. We apply a multi-objective evolutionary algorithm to obtain decks that optimise objectives, e.g. win rate and average number of tricks, developed to express the fairness and the excitement of a game of top trumps. The results are compared with decks from published top trumps decks using simulation-based objectives. The possibility to generate decks better or at least as good as decks from published top trumps decks in terms of these objectives is demonstrated. Our results indicate that automatic balancing with the presented approach is feasible even for more complex games such as real-time strategy games. | Cardona2014 use an evolutionary algorithm to select cards for top trumps games from open data @cite_12 . The focus of their research, however, is the potential to teach players about data and learn about it using games. The authors develop and use a single-objective dominance-related measure to evaluate the balance of a given deck. This measure is used as a reference in this paper (cf. @math in Sec. ). | {
"cite_N": [
"@cite_12"
],
"mid": [
"1574482857"
],
"abstract": [
"We present Open Trumps, a version of the popular card game Top Trumps with decks that are procedurally generated based on open data. The game is played among multiple players through drawing cards and selecting the feature that is most likely to trump the same feature on the other players’ cards. Players can generate their own decks through choosing a suitable dataset and setting certain attributes; the generator then generates a balanced and playable deck using evolutionary computation. In the example dataset, each card represents a country and the features represent such entities as GDP per capita, mortality rate or tomato production, but in principle any dataset organised as instances with numerical features could be used. We also report the results of an evaluation intended to investigate both player experience and the hypothesis that players learn about the data underlying the deck they play with, since understanding the data is key to playing well. The results show that players enjoy playing the game, are enthusiastic about its potential and answer questions related to decks they have played significantly better than questions related to decks they have not played."
]
} |
1603.03795 | 2302255789 | Game balancing is an important part of the (computer) game design process, in which designers adapt a game prototype so that the resulting gameplay is as entertaining as possible. In industry, the evaluation of a game is often based on costly playtests with human players. It suggests itself to automate this process using surrogate models for the prediction of gameplay and outcome. In this paper, the feasibility of automatic balancing using simulation- and deck-based objectives is investigated for the card game top trumps. Additionally, the necessity of a multi-objective approach is asserted by a comparison with the only known (single-objective) method. We apply a multi-objective evolutionary algorithm to obtain decks that optimise objectives, e.g. win rate and average number of tricks, developed to express the fairness and the excitement of a game of top trumps. The results are compared with decks from published top trumps decks using simulation-based objectives. The possibility to generate decks better or at least as good as decks from published top trumps decks in terms of these objectives is demonstrated. Our results indicate that automatic balancing with the presented approach is feasible even for more complex games such as real-time strategy games. | Jaffe2013 introduces a technique called restricted play that is supposed to enable designers to express balancing goals in terms of the win rate of a suitably restricted agent @cite_3 . However, this approach necessitates expert knowledge about the game as well as an AI and several potentially computationally expensive simulations. In contrast, we explore other possibilities to express design goals and utilise non-simulation based metrics. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2346665340"
],
"abstract": [
"Game balancing is the fine-tuning phase in which a functioning game is adjusted to be deep, fair, and interesting. Balancing is difficult and time-consuming, as designers must repeatedly tweak parameters and run lengthy playtests to evaluate the effects of these changes. Only recently has computer science played a role in balancing, through quantitative balance analysis. Such methods take two forms: analytics for repositories of real gameplay, and the study of simulated players. In this work I rectify a deficiency of prior work: largely ignoring the players themselves. I argue that variety among players is the main source of depth in many games, and that analysis should be contextualized by the behavioral properties of players. Concretely, I present a formalization of diverse forms of game balance. This formulation, called 'restricted play', reveals the connection between balancing concerns, by effectively reducing them to the fairness of games with restricted players. Using restricted play as a foundation, I contribute four novel methods of quantitative balance analysis. I first show how game balance be estimated without players, using simulated agents under algorithmic restrictions. I then present a set of guidelines for using domain-specific models to guide data exploration, with a case study of my design work on a major competitive video game. I extend my work on this game with novel data visualization techniques, which overcome limitations of existing work by decomposing data in terms of player skill. I finally present an advanced formulation of fairness in games—the first to take into account a game's metagame, or player community. These contributions are supported by a detailed exploration of common understandings of game balance, a survey of prior work in quantitative balance analysis, a discussion of the social benefit of this work, and a vision of future games that quantitative balance analysis might one day make possible."
]
} |
1603.03795 | 2302255789 | Game balancing is an important part of the (computer) game design process, in which designers adapt a game prototype so that the resulting gameplay is as entertaining as possible. In industry, the evaluation of a game is often based on costly playtests with human players. It suggests itself to automate this process using surrogate models for the prediction of gameplay and outcome. In this paper, the feasibility of automatic balancing using simulation- and deck-based objectives is investigated for the card game top trumps. Additionally, the necessity of a multi-objective approach is asserted by a comparison with the only known (single-objective) method. We apply a multi-objective evolutionary algorithm to obtain decks that optimise objectives, e.g. win rate and average number of tricks, developed to express the fairness and the excitement of a game of top trumps. The results are compared with decks from published top trumps decks using simulation-based objectives. The possibility to generate decks better or at least as good as decks from published top trumps decks in terms of these objectives is demonstrated. Our results indicate that automatic balancing with the presented approach is feasible even for more complex games such as real-time strategy games. | Chen2014 intend to solve the balance problem of massively multiplayer online role-playing games using coevolutionary programming @cite_8 . However, they focus on level progression and ignore any balancing concerns apart from equalising the win-rates of different in-game characters. Yet, most work involving the evaluation of a game configuration is related to procedural content generation, specifically map or level generation. Several papers focus on issuing guarantees, e.g. with regards to playability @cite_15 , solvability @cite_5 , or diversity @cite_7 @cite_0 . 
Other research areas include dynamic difficulty adaptation for single-player games @cite_10 , the generation of rules @cite_4 @cite_11 , and more interactive versions of game design, e.g. mixed-initiative @cite_2 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_10",
"@cite_11"
],
"mid": [
"2646036860",
"1970527505",
"2027281017",
"",
"",
"2395654457",
"2092735940",
"",
"2044598283"
],
"abstract": [
"Many new board games are designed each year, ranging from the unplayable to the truly exceptional. For each successful design there are untold numbers of failures; game design is something of an art. Players generally agree on some basic properties that indicate the quality and viability of a game, however these properties have remained subjective and open to interpretation. The aims of this thesis are to determine whether such quality criteria may be precisely defined and automatically measured through self-play in order to estimate the likelihood that a given game will be of interest to human players, and whether this information may be used to direct an automated search for new games of high quality. Combinatorial games provide an excellent test bed for this purpose as they are typically deep yet described by simple welldefined rule sets. To test these ideas, a game description language was devised to express such games and a general game system implemented to play, measure and explore them. Key features of the system include modules for measuring statistical aspects of self-play and synthesising new games through the evolution of existing rule sets. Experiments were conducted to determine whether automated game measurements correlate with rankings of games by human players, and whether such correlations could be used to inform the automated search for new high quality games. The results support both hypotheses and demonstrate the emergence of interesting new rule combinations.",
"In procedural content generation, one is often interested in generating a large number of artifacts that are not only of high quality but also diverse, in terms of gameplay, visual impression or some other criterion. We investigate several search-based approaches to creating good and diverse game content, in particular approaches based on evolution strategies with or without diversity preservation mechanisms, novelty search and random search. The content domain is game levels, more precisely map sketches for strategy games, which are meant to be used as suggestions in the Sentient Sketchbook design tool. Several diversity metrics are possible for this type of content: we investigate tile-based, objective-based and visual impression distance. We find that evolution with diversity preservation mechanisms can produce both good and diverse content, but only when using appropriate distance measures. Reversely, we can draw conclusions about the suitability of these distance measures for the domain from the comparison of diversity preserving versus blind restart evolutionary algorithms.",
"In massively multiplayer online role-playing games (MMORPGs), each race holds some attributes and skills. Each skill contains several abilities such as physical damage and hit rate. All those attributes and abilities are functions of the character's level, which are called Ability-Increasing Functions (AIFs). A well-balanced MMORPG is characterized by having a set of well-balanced AIFs. In this paper, we propose a coevolutionary design method, including integration with the modified probabilistic incremental program evolution (PIPE) and the cooperative coevolutionary algorithm (CCEA), to solve the balance problem of MMORPGs. Moreover, we construct a simplest turn-based game model and perform a series of experiments based on it. The results indicate that the proposed method is able to obtain a set of well-balanced AIFs more efficiently, compared with the simple genetic algorithm (SGA), the simulated annealing algorithm (SAA) and the hybrid discrete particle swarm optimization (HDPSO) algorithm. The results also show that the performance of PIPE has been significantly improved through the modification works.",
"",
"",
"Motivated by our ongoing efforts in the development of Refraction 2, a puzzle game targeting mathematics education, we realized that the quality of a puzzle is critically sensitive to the presence of alternative solutions with undesirable properties. Where, in our game, we seek a way to automatically synthesize puzzles that can only be solved if the player demonstrates specific concepts, concern for the possibility of undesirable play touches other interactive design domains. To frame this problem (and our solution to it) in a general context, we formalize the problem of generating solvable puzzles that admit no undesirable solutions as an NPcomplete search problem. By making two design-oriented extensions to answer set programming (a technology that has been recently applied to constrained game content generation problems) we offer a general way to declaratively pose and automatically solve the high-complexity problems coming from this formulation. Applying this technique to Refraction, we demonstrate a qualitative leap in the kind of puzzles we can reliably generate. This work opens up new possibilities for quality-focused content generators that guarantee properties over their entire combinatorial space of play.",
"This paper shows how multiobjective evolutionary algorithms can be used to procedurally generate complete and playable maps for real-time strategy (RTS) games. We devise heuristic objective functions that measure properties of maps that impact important aspects of gameplay experience. To show the generality of our approach, we design two different evolvable map representations, one for an imaginary generic strategy game based on heightmaps, and one for the classic RTS game StarCraft. The effect of combining tuples or triples of the objective functions are investigated in systematic experiments, in particular which of the objectives are partially conflicting. A selection of generated maps are visually evaluated by a population of skilled StarCraft players, confirming that most of our objectives correspond to perceived gameplay qualities. Our method could be used to completely automate in-game controlled map generation, enabling player-adaptive games, or as a design support tool for human designers.",
"",
"Variations Forever is a novel game in which the player explores a vast design space of mini-games. In this paper, we present the procedural content generation research which makes the automatic generation of suitable game rulesets possible. Our generator, operating in the domain of code-like game content exploits answer-set programming as a means to declaratively represent a generative space as distinct from the domain-independent solvers which we use to enumerate it. Our generative spaces are powerfully sculptable using concise, declarative rules, allowing us to embed significant design knowledge into our ruleset generator as an important step towards a more serious automation of whole game design process."
]
} |
1603.04012 | 2297083297 | The Death and Life of Great American Cities was written in 1961 and is now one of the most influential books in city planning. In it, Jane Jacobs proposed four conditions that promote life in a city. However, these conditions have not been empirically tested until recently. This is mainly because it is hard to collect data about "city life". The city of Seoul recently collected pedestrian activity through surveys at an unprecedented scale, with an effort spanning more than a decade, allowing researchers to conduct the first study successfully testing Jacobs's conditions. In this paper, we identify a valuable alternative to the lengthy and costly collection of activity survey data: mobile phone data. We extract human activity from such data, collect land use and socio-demographic information from the Italian Census and Open Street Map, and test the four conditions in six Italian cities. Although these cities are very different from the places for which Jacobs's conditions were spelled out (i.e., great American cities) and from the places in which they were recently tested (i.e., the Asian city of Seoul), we find those conditions to be indeed associated with urban life in Italy as well. Our methodology promises to have a great impact on urban studies, not least because, if replicated, it will make it possible to test Jacobs's theories at scale. | Our work is best placed in an emerging interdisciplinary field called "urban computing" @cite_38 . This combines computer science approaches with more traditional fields like urban planning, urban economy, and urban sociology. | {
"cite_N": [
"@cite_38"
],
"mid": [
"2112738128"
],
"abstract": [
"Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four folds, which are about urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are somehow missing in the community."
]
} |
1603.04012 | 2297083297 | The Death and Life of Great American Cities was written in 1961 and is now one of the most influential books in city planning. In it, Jane Jacobs proposed four conditions that promote life in a city. However, these conditions have not been empirically tested until recently. This is mainly because it is hard to collect data about "city life". The city of Seoul recently collected pedestrian activity through surveys at an unprecedented scale, with an effort spanning more than a decade, allowing researchers to conduct the first study successfully testing Jacobs's conditions. In this paper, we identify a valuable alternative to the lengthy and costly collection of activity survey data: mobile phone data. We extract human activity from such data, collect land use and socio-demographic information from the Italian Census and Open Street Map, and test the four conditions in six Italian cities. Although these cities are very different from the places for which Jacobs's conditions were spelled out (i.e., great American cities) and from the places in which they were recently tested (i.e., the Asian city of Seoul), we find those conditions to be indeed associated with urban life in Italy as well. Our methodology promises to have a great impact on urban studies, not least because, if replicated, it will make it possible to test Jacobs's theories at scale. | The idea of testing urban theories using novel sources of data (e.g., social media, online images and videos, mobile phone data) has received increasing attention @cite_43 @cite_9 @cite_30 . The urban sociologist Kevin Lynch showed that people living in an urban environment create their own personal "mental map" of the city based on features such as the areas they visit and the routes they use @cite_31 . Hence, he hypothesized that the more recognizable a city, the more navigable it is. 
To test Lynch's theory, Quercia et al. built a web game that crowd-sourced Londoners' mental images of the city @cite_9 . They showed that areas suffering from social problems such as poor living conditions and crime are rarely present in residents' mental images. | {
"cite_N": [
"@cite_30",
"@cite_43",
"@cite_9",
"@cite_31"
],
"mid": [
"",
"2068883041",
"9248908",
"1529253181"
],
"abstract": [
"",
"In the 1960s, Lynch's 'The Image of the City' explored what impression US city neighborhoods left on its inhabitants. The scale of urban perception studies until recently was considerably constrained by the limited number of study participants. We here present a crowdsourcing project that aims to investigate, at scale, which visual aspects of London neighborhoods make them appear beautiful, quiet, and/or happy. We collect votes from over 3.3K individuals and translate them into quantitative measures of urban perception. In so doing, we quantify each neighborhood's aesthetic capital. By then using state-of-the-art image processing techniques, we determine visual cues that may cause a street to be perceived as being beautiful, quiet, or happy. We identify effects of color, texture and visual words. For example, the amount of greenery is the most positively associated visual cue with each of three qualities; by contrast, broad streets, fortress-like buildings, and council houses tend to be associated with the opposite qualities (ugly, noisy, and unhappy).",
"Planners and social psychologists have suggested that the recognizability of the urban environment is linked to people's socio-economic well-being. We build a web game that puts the recognizability of London's streets to the test. It follows as closely as possible one experiment done by Stanley Milgram in 1972. The game picks up random locations from Google Street View and tests users to see if they can judge the location in terms of closest subway station, borough, or region. Each participant dedicates only few minutes to the task (as opposed to 90 minutes in Milgram's). We collect data from 2,255 participants (one order of magnitude a larger sample) and build a recognizability map of London based on their responses. We find that some boroughs have little cognitive representation; that recognizability of an area is explained partly by its exposure to Flickr and Foursquare users and mostly by its exposure to subway passengers; and that areas with low recognizability do not fare any worse on the economic indicators of income, education, and employment, but they do significantly suffer from social problems of housing deprivation, poor living conditions, and crime. These results could not have been produced without analyzing life off- and online: that is, without considering the interactions between urban places in the physical world and their virtual presence on platforms such as Flickr and Foursquare. This line of work is at the crossroad of two emerging themes in computing research - a crossroad where \"web science\" meets the \"smart city\" agenda.",
"What does the city's form actually mean to the people who live there? What can the city planner do to make the city's image more vivid and memorable to the city dweller? To answer these questions, Mr. Lynch, supported by studies of Los Angeles, Boston, and Jersey City, formulates a new criterion -- imageability -- and shows its potential value as a guide for the building and rebuilding of cities. The wide scope of this study leads to an original and vital method for the evaluation of city form. The architect, the planner, and certainly the city dweller will all want to read this book."
]
} |
1603.04012 | 2297083297 | The Death and Life of Great American Cities was written in 1961 and is now one of the most influential books in city planning. In it, Jane Jacobs proposed four conditions that promote life in a city. However, these conditions have not been empirically tested until recently. This is mainly because it is hard to collect data about "city life". The city of Seoul recently collected pedestrian activity through surveys at an unprecedented scale, with an effort spanning more than a decade, allowing researchers to conduct the first study successfully testing Jacobs's conditions. In this paper, we identify a valuable alternative to the lengthy and costly collection of activity survey data: mobile phone data. We extract human activity from such data, collect land use and socio-demographic information from the Italian Census and Open Street Map, and test the four conditions in six Italian cities. Although these cities are very different from the places for which Jacobs's conditions were spelled out (i.e., great American cities) and from the places in which they were recently tested (i.e., the Asian city of Seoul), we find those conditions to be indeed associated with urban life in Italy as well. Our methodology promises to have a great impact on urban studies, not least because, if replicated, it will make it possible to test Jacobs's theories at scale. | Researchers also investigated which urban elements people use to visually judge a street to be safe, wealthy, and attractive using web crowdsourcing games @cite_41 @cite_43 @cite_10 , and studied how to identify walkable streets using the social media data of Flickr and Foursquare (e.g., unsafe streets tended to be photographed during the day, while walkable streets were tagged with walkability-related keywords @cite_22 ). | {
"cite_N": [
"@cite_41",
"@cite_43",
"@cite_10",
"@cite_22"
],
"mid": [
"2028979196",
"2068883041",
"1968147892",
"2101254446"
],
"abstract": [
"Cities' visual appearance plays a central role in shaping human perception and response to the surrounding urban environment. For example, the visual qualities of urban spaces affect the psychological states of their inhabitants and can induce negative social outcomes. Hence, it becomes critically important to understand people's perceptions and evaluations of urban spaces. Previous works have demonstrated that algorithms can be used to predict high level attributes of urban scenes (e.g. safety, attractiveness, uniqueness), accurately emulating human perception. In this paper we propose a novel approach for predicting the perceived safety of a scene from Google Street View Images. Opposite to previous works, we formulate the problem of learning to predict high level judgments as a ranking task and we employ a Convolutional Neural Network (CNN), significantly improving the accuracy of predictions over previous methods. Interestingly, the proposed CNN architecture relies on a novel pooling layer, which permits to automatically discover the most important areas of the images for predicting the concept of perceived safety. An extensive experimental evaluation, conducted on the publicly available Place Pulse dataset, demonstrates the advantages of the proposed approach over state-of-the-art methods.",
"In the 1960s, Lynch's 'The Image of the City' explored what impression US city neighborhoods left on its inhabitants. The scale of urban perception studies until recently was considerably constrained by the limited number of study participants. We here present a crowdsourcing project that aims to investigate, at scale, which visual aspects of London neighborhoods make them appear beautiful, quiet, and/or happy. We collect votes from over 3.3K individuals and translate them into quantitative measures of urban perception. In so doing, we quantify each neighborhood's aesthetic capital. By then using state-of-the-art image processing techniques, we determine visual cues that may cause a street to be perceived as being beautiful, quiet, or happy. We identify effects of color, texture and visual words. For example, the amount of greenery is the most positively associated visual cue with each of three qualities; by contrast, broad streets, fortress-like buildings, and council houses tend to be associated with the opposite qualities (ugly, noisy, and unhappy).",
"A traveler visiting Rio, Manila or Caracas does not need a report to learn that these cities are unequal; she can see it directly from the taxicab window. This is because in most cities inequality is conspicuous, but also because cities express different forms of inequality that are evident to casual observers. Cities are highly heterogeneous and often unequal with respect to the income of their residents, but also with respect to the cleanliness of their neighborhoods, the beauty of their architecture, and the liveliness of their streets, among many other evaluative dimensions. Until now, however, our ability to understand the effect of a city's built environment on social and economic outcomes has been limited by the lack of quantitative data on urban perception. Here, we build on the intuition that inequality is partly conspicuous to create a quantitative measure of a city's contrasts. Using thousands of geo-tagged images, we measure the perception of safety, class and uniqueness in the cities of Boston and New York in the United States, and Linz and Salzburg in Austria, finding that the range of perceptions elicited by the images of New York and Boston is larger than the range of perceptions elicited by images from Linz and Salzburg. We interpret this as evidence that the cityscapes of Boston and New York are more contrasting, or unequal, than those of Linz and Salzburg. Finally, we validate our measures by exploring the connection between them and homicides, finding a significant correlation between the perceptions of safety and class and the number of homicides in a NYC zip code, after controlling for the effects of income, population, area and age. Our results show that online images can be used to create reproducible quantitative measures of urban perception and characterize the inequality of different cities.",
"Walkability has many health, environmental, and economic benefits. That is why web and mobile services have been offering ways of computing walkability scores of individual street segments. Those scores are generally computed from survey data and manual counting (of even trees). However, that is costly, owing to the high time, effort, and financial costs. To partly automate the computation of those scores, we explore the possibility of using the social media data of Flickr and Foursquare to automatically identify safe and walkable streets. We find that unsafe streets tend to be photographed during the day, while walkable streets are tagged with walkability-related keywords. These results open up practical opportunities (for, e.g., room booking services, urban route recommenders, and real-estate sites) and have theoretical implications for researchers who might resort to the use of social media data to tackle previously unanswered questions in the area of walkability."
]
} |
1603.04012 | 2297083297 | The Death and Life of Great American Cities was written in 1961 and is now one of the most influential books in city planning. In it, Jane Jacobs proposed four conditions that promote life in a city. However, these conditions have not been empirically tested until recently. This is mainly because it is hard to collect data about "city life". The city of Seoul recently collected pedestrian activity through surveys at an unprecedented scale, with an effort spanning more than a decade, allowing researchers to conduct the first study successfully testing Jacobs's conditions. In this paper, we identify a valuable alternative to the lengthy and costly collection of activity survey data: mobile phone data. We extract human activity from such data, collect land use and socio-demographic information from the Italian Census and Open Street Map, and test the four conditions in six Italian cities. Although these cities are very different from the places for which Jacobs's conditions were spelled out (i.e., great American cities) and from the places in which they were recently tested (i.e., the Asian city of Seoul), we find those conditions to be indeed associated with urban life in Italy as well. Our methodology promises to have a great impact on urban studies, not least because, if replicated, it will make it possible to test Jacobs's theories at scale. | The recent availability of large-scale data sets, such as those automatically collected by mobile phone networks, opens new possibilities for studying city dynamics at a finer and unprecedented granularity @cite_21 . Mobile phone data represents a highly valuable proxy for human mobility patterns @cite_44 @cite_42 @cite_19 .
Such data was recently used to map functional uses @cite_37 @cite_26 , to identify places that play a major role in the life of citizens @cite_15 , to compare cities based on their spatial similarities and differences @cite_20 , and to predict socio-economic indicators @cite_18 @cite_36 , including crime @cite_1 @cite_27 @cite_33 . | {
"cite_N": [
"@cite_37",
"@cite_26",
"@cite_18",
"@cite_33",
"@cite_36",
"@cite_21",
"@cite_42",
"@cite_1",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_20"
],
"mid": [
"1883704353",
"",
"1996221647",
"",
"",
"2137558058",
"2024220066",
"1951757437",
"1982300822",
"1987228002",
"",
"1809720746",
"2129343844"
],
"abstract": [
"This chapter examines the possibility to analyze and compare human activities in an urban environment based on the detection of mobile phone usage patterns. Thanks to an unprecedented collection of counter data recording the number of calls, SMS, and data transfers resolved both in time and space, we confirm the connection between temporal activity profile and land usage in three global cities: New York, London, and Hong Kong. By comparing whole cities’ typical patterns, we provide insights on how cultural, technological, and economical factors shape human dynamics. At a more local scale, we use clustering analysis to identify locations with similar patterns within a city. Our research reveals a universal structure of cities, with core financial centers all sharing similar activity patterns and commercial or residential areas with more city-specific patterns. These findings hint that as the economy becomes more global, common patterns emerge in business areas of different cities across the globe, while the impact of local conditions still remains recognizable on the level of routine people activity.",
"",
"Social networks form the backbone of social and economic life. Until recently, however, data have not been available to study the social impact of a national network structure. To that end, we combined the most complete record of a national communication network with national census data on the socioeconomic well-being of communities. These data make possible a population-level investigation of the relation between the structure of social networks and access to socioeconomic opportunity. We find that the diversity of individuals’ relationships is strongly correlated with the economic development of communities.",
"",
"",
"In this paper, we review some advances made recently in the study of mobile phone datasets. This area of research has emerged a decade ago, with the increasing availability of large-scale anonymized datasets, and has grown into a stand-alone topic. We survey the contributions made so far on the social networks that can be constructed with such data, the study of personal mobility, geographical partitioning, urban planning, and help towards development as well as security and privacy issues.",
"Home-work commuting has always attracted significant research attention because of its impact on human mobility. One of the key assumptions in this domain of study is the universal uniformity of commute times. However, a true comparison of commute patterns has often been hindered by the intrinsic differences in data collection methods, which make observation from different countries potentially biased and unreliable. In the present work, we approach this problem through the use of mobile phone call detail records (CDRs), which offers a consistent method for investigating mobility patterns in wholly different parts of the world. We apply our analysis to a broad range of datasets, at both the country (Portugal, Ivory Coast, and Saudi Arabia), and city (Boston) scale. Additionally, we compare these results with those obtained from vehicle GPS traces in Milan. While different regions have some unique commute time characteristics, we show that the home-work time distributions and average values within a single region are indeed largely independent of commute distance or country (Portugal, Ivory Coast, and Boston)–despite substantial spatial and infrastructural differences. Furthermore, our comparative analysis demonstrates that such distance-independence holds true only if we consider multimodal commute behaviors–as consistent with previous studies. In car-only (Milan GPS traces) and car-heavy (Saudi Arabia) commute datasets, we see that commute time is indeed influenced by commute distance. Finally, we put forth a testable hypothesis and suggest ways for future work to make more accurate and generalizable statements about human commute behaviors.",
"The wealth of information provided by real-time streams of data has paved the way for life-changing technological advancements, improving the quality of life of people in many ways, from facilitating knowledge exchange to self-understanding and self-monitoring. Moreover, the analysis of anonymized and aggregated large-scale human behavioral data offers new possibilities to understand global patterns of human behavior and helps decision makers tackle problems of societal importance. In this article, we highlight the potential societal benefits derived from big data applications with a focus on citizen safety and crime prevention. First, we introduce the emergent new research area of big data for social good. Next, we detail a case study tackling the problem of crime hotspot classification, that is, the classification of which areas in a city are more likely to witness crimes based on past data. In the proposed approach we use demographic information along with human mobility characteristics as der...",
"This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.",
"A range of applications, from predicting the spread of human and electronic viruses to city planning and resource management in mobile communications, depend on our ability to foresee the whereabouts and mobility of individuals, raising a fundamental question: To what degree is human behavior predictable? Here we explore the limits of predictability in human dynamics by studying the mobility patterns of anonymized mobile phone users. By measuring the entropy of each individual’s trajectory, we find a 93% potential predictability in user mobility across the whole user base. Despite the significant differences in the travel patterns, we find a remarkable lack of variability in predictability, which is largely independent of the distance users cover on a regular basis.",
"",
"People spend most of their time at a few key locations, such as home and work. Being able to identify how the movements of people cluster around these \"important places\" is crucial for a range of technology and policy decisions in areas such as telecommunications and transportation infrastructure deployment. In this paper, we propose new techniques based on clustering and regression for analyzing anonymized cellular network data to identify generally important locations, and to discern semantically meaningful locations such as home and work. Starting with temporally sparse and spatially coarse location information, we propose a new algorithm to identify important locations. We test this algorithm on arbitrary cellphone users, including those with low call rates, and find that we are within 3 miles of ground truth for 88% of volunteer users. Further, after locating home and work, we achieve commute distance estimates that are within 1 mile of equivalent estimates derived from government census data. Finally, we perform carbon footprint analyses on hundreds of thousands of anonymous users as an example of how our data and algorithms can form an accurate and efficient underpinning for policy and infrastructure studies.",
"Pervasive infrastructures, such as cell phone networks, make it possible to capture large amounts of human behavioral data but also provide information about the structure of cities and their dynamical properties. In this article, we focus on these last aspects by studying phone data recorded during 55 days in 31 Spanish cities. We first define an urban dilatation index which measures how the average distance between individuals evolves during the day, allowing us to highlight different types of city structure. We then focus on hotspots, the most crowded places in the city. We propose a parameter free method to detect them and to test the robustness of our results. The number of these hotspots scales sublinearly with the population size, a result in agreement with previous theoretical arguments and measures on employment datasets. We study the lifetime of these hotspots and show in particular that the hierarchy of permanent ones, which constitute the ‘heart’ of the city, is very stable whatever the size of the city. The spatial structure of these hotspots is also of interest and allows us to distinguish different categories of cities, from monocentric and “segregated” where the spatial distribution is very dependent on land use, to polycentric where the spatial mixing between land uses is much more important. These results point towards the possibility of a new, quantitative classification of cities using high resolution spatio-temporal data."
]
} |
1603.04064 | 2299801417 | A large number of problems in optimization, machine learning, and signal processing can be effectively addressed by suitable semidefinite programming (SDP) relaxations. Unfortunately, generic SDP solvers hardly scale beyond instances with a few hundred variables (in the underlying combinatorial problem). On the other hand, it has been observed empirically that an effective strategy amounts to introducing a (non-convex) rank constraint, and solving the resulting smooth optimization problem by ascent methods. This non-convex problem has --generically-- a large number of local maxima, and the reason for this success is therefore unclear. This paper provides rigorous support for this approach. For the problem of maximizing a linear functional over the elliptope, we prove that all local maxima are within a small gap from the SDP optimum. In several problems of interest, arbitrarily small relative error can be achieved by taking the rank constraint @math to be of order one, independently of the problem size. | Burer and Monteiro @cite_10 introduced the idea of constraining the rank and thereby eliminating the explicit PSD constraint, thus obtaining a smooth non-convex problem. They also proved that, taking @math , and under suitable conditions on @math , the resulting non-convex problem has no local maxima, except for the global one. Their result actually extends to more general SDPs than Eq. ). While interesting, this result does not clarify the empirical finding that @math is sufficient for some problems with @math as large as @math @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_10"
],
"mid": [
"2174754147",
"2143075842"
],
"abstract": [
"Statistical inference problems arising within signal processing, data mining, and machine learning naturally give rise to hard combinatorial optimization problems. These problems become intractable when the dimensionality of the data is large, as is often the case for modern datasets. A popular idea is to construct convex relaxations of these combinatorial problems, which can be solved efficiently for large-scale datasets. Semidefinite programming (SDP) relaxations are among the most powerful methods in this family and are surprisingly well suited for a broad range of problems where data take the form of matrices or graphs. It has been observed several times that when the statistical noise is small enough, SDP relaxations correctly detect the underlying combinatorial structures. In this paper we develop asymptotic predictions for several detection thresholds, as well as for the estimation error above these thresholds. We study some classical SDP relaxations for statistical problems motivated by graph synchronization and community detection in networks. We map these optimization problems to statistical mechanics models with vector spins and use nonrigorous techniques from statistical mechanics to characterize the corresponding phase transitions. Our results clarify the effectiveness of SDP relaxations in solving high-dimensional statistical problems.",
"In this paper, we present a nonlinear programming algorithm for solving semidefinite programs (SDPs) in standard form. The algorithm's distinguishing feature is a change of variables that replaces the symmetric, positive semidefinite variable X of the SDP with a rectangular variable R according to the factorization X = RR^T. The rank of the factorization, i.e., the number of columns of R, is chosen minimally so as to enhance computational speed while maintaining equivalence with the SDP. Fundamental results concerning the convergence of the algorithm are derived, and encouraging computational results on some large-scale test problems are also presented."
]
} |
1603.04064 | 2299801417 | A large number of problems in optimization, machine learning, and signal processing can be effectively addressed by suitable semidefinite programming (SDP) relaxations. Unfortunately, generic SDP solvers hardly scale beyond instances with a few hundred variables (in the underlying combinatorial problem). On the other hand, it has been observed empirically that an effective strategy amounts to introducing a (non-convex) rank constraint, and solving the resulting smooth optimization problem by ascent methods. This non-convex problem has --generically-- a large number of local maxima, and the reason for this success is therefore unclear. This paper provides rigorous support for this approach. For the problem of maximizing a linear functional over the elliptope, we prove that all local maxima are within a small gap from the SDP optimum. In several problems of interest, arbitrarily small relative error can be achieved by taking the rank constraint @math to be of order one, independently of the problem size. | Bandeira, Boumal and Voroninski recently considered the extreme case @math in the specific example of the @math synchronization problem @cite_11 . They proved that, if the signal is strong enough, then this approach can effectively recover the underlying signal. Namely, all local minima are correlated with the signal. | {
"cite_N": [
"@cite_11"
],
"mid": [
"2279984491"
],
"abstract": [
"To address difficult optimization problems, convex relaxations based on semidefinite programming are now commonplace in many fields. Although solvable in polynomial time, large semidefinite programs tend to be computationally challenging. Over a decade ago, exploiting the fact that in many applications of interest the desired solutions are low rank, Burer and Monteiro proposed a heuristic to solve such semidefinite programs by restricting the search space to low-rank matrices. The accompanying theory does not explain the extent of the empirical success. We focus on Synchronization and Community Detection problems and provide theoretical guarantees shedding light on the remarkable efficiency of this heuristic."
]
} |
1603.04064 | 2299801417 | A large number of problems in optimization, machine learning, and signal processing can be effectively addressed by suitable semidefinite programming (SDP) relaxations. Unfortunately, generic SDP solvers hardly scale beyond instances with a few hundred variables (in the underlying combinatorial problem). On the other hand, it has been observed empirically that an effective strategy amounts to introducing a (non-convex) rank constraint, and solving the resulting smooth optimization problem by ascent methods. This non-convex problem has --generically-- a large number of local maxima, and the reason for this success is therefore unclear. This paper provides rigorous support for this approach. For the problem of maximizing a linear functional over the elliptope, we prove that all local maxima are within a small gap from the SDP optimum. In several problems of interest, arbitrarily small relative error can be achieved by taking the rank constraint @math to be of order one, independently of the problem size. | Finally, there has been growing interest in non-convex methods for solving high-dimensional statistical estimation problems. Examples include matrix completion @cite_4 , phase retrieval @cite_0 , regression with missing entries @cite_7 , and many others. These papers provide rigorous guarantees under the assumption that the noise in the data is "small enough". Under such conditions, a very good initialization can be constructed, e.g. by a spectral method, and it is sufficient to prove that the optimization problem is well behaved in a neighborhood of the optimum. | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_7"
],
"mid": [
"221278985",
"2616032753",
"2099210013"
],
"abstract": [
"We consider the fundamental problem of solving quadratic systems of equations in @math variables, where @math , @math and @math is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data @math and @math as soon as the ratio @math between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have @math and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size---hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.",
"Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan, Montanari, and Oh (2010), based on a combination of spectral techniques and manifold optimization, that we call here OPTSPACE. We prove performance guarantees that are order-optimal in a number of circumstances.",
"Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings."
]
} |
1603.03915 | 2298368322 | Recognizing text in natural images is a challenging task with many unsolved problems. Different from those in documents, words in natural images often possess irregular shapes, which are caused by perspective distortion, curved character placement, etc. We propose RARE (Robust text recognizer with Automatic REctification), a recognition model that is robust to irregular text. RARE is a specially-designed deep neural network, which consists of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN). In testing, an image is firstly rectified via a predicted Thin-Plate-Spline (TPS) transformation, into a more "readable" image for the following SRN, which recognizes text through a sequence recognition approach. We show that the model is able to recognize several types of irregular text, including perspective text and curved text. RARE is end-to-end trainable, requiring only images and associated text labels, making it convenient to train and deploy the model in practical systems. State-of-the-art or highly-competitive performance achieved on several benchmarks well demonstrates the effectiveness of the proposed model. | Although being common in the tasks of scene text detection and recognition, the issue of irregular text is relatively less addressed in explicit ways. Yao @cite_48 firstly propose the multi-oriented text detection problem, and deal with it by carefully designing rotation-invariant region descriptors. Zhang @cite_1 propose a character rectification method that leverages the low-rank structures of text. Phan propose to explicitly rectify perspective distortions via SIFT @cite_14 descriptor matching. The above-mentioned work brings insightful ideas into this issue. However, most methods deal with only one type of irregular text with specifically designed schemes. Our method rectifies several types of irregular text in a unified way. 
Moreover, it does not require extra annotations for the rectification process, since the STN is supervised by the SRN during training. | {
"cite_N": [
"@cite_48",
"@cite_14",
"@cite_1"
],
"mid": [
"1972065312",
"2151103935",
"2906621894"
],
"abstract": [
"With the increasing popularity of practical vision systems and smart phones, text detection in natural scenes becomes a critical yet challenging task. Most existing methods have focused on detecting horizontal or near-horizontal texts. In this paper, we propose a system which detects texts of arbitrary orientations in natural images. Our algorithm is equipped with a two-level classification scheme and two sets of features specially designed for capturing both the intrinsic characteristics of texts. To better evaluate our algorithm and compare it with other competing algorithms, we generate a new dataset, which includes various texts in diverse real-world scenarios; we also propose a protocol for performance evaluation. Experiments on benchmark datasets and the proposed dataset demonstrate that our algorithm compares favorably with the state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of arbitrary orientations in complex natural scenes.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"In this paper, we propose a new tool to efficiently extract a class of \"low-rank textures\" in a 3D scene from user-specified windows in 2D images despite significant corruptions and warping. The low-rank textures capture geometrically meaningful structures in an image, which encompass conventional local features such as edges and corners as well as many kinds of regular, symmetric patterns ubiquitous in urban environments and man-made objects. Our approach to finding these low-rank textures leverages the recent breakthroughs in convex optimization that enable robust recovery of a high-dimensional low-rank matrix despite gross sparse errors. In the case of planar regions with significant affine or projective deformation, our method can accurately recover both the intrinsic low-rank texture and the unknown transformation, and hence both the geometry and appearance of the associated planar region in 3D. Extensive experimental results demonstrate that this new technique works effectively for many regular and near-regular patterns or objects that are approximately low-rank, such as symmetrical patterns, building facades, printed text, and human faces."
]
} |
1603.03461 | 2951149168 | We consider a multi agent optimization problem where a set of agents collectively solves a global optimization problem with the objective function given by the sum of locally known convex functions. We focus on the case when information exchange among agents takes place over a directed network and propose a distributed subgradient algorithm in which each agent performs local processing based on information obtained from his incoming neighbors. Our algorithm uses weight balancing to overcome the asymmetries caused by the directed communication network, i.e., agents scale their outgoing information with dynamically updated weights that converge to balancing weights of the graph. We show that both the objective function values and the consensus violation, at the ergodic average of the estimates generated by the algorithm, converge with rate @math , where @math is the number of iterations. A special case of our algorithm provides a new distributed method to compute average consensus over directed graphs. | Our work also contributes to the vast literature on the consensus problem, where agents have a more specific goal of aligning their estimates (see @cite_34 @cite_39 @cite_11 @cite_10 for consensus over undirected graphs and @cite_7 @cite_51 @cite_55 for consensus over directed graphs). In particular, a special case of our algorithm provides a new distributed method for computing the average of initial values over a directed graph. Our contribution here is most closely related to @cite_2 and @cite_23 , which proposed distributed algorithms for average consensus over directed graphs using balancing weights. Reference @cite_2 builds on earlier work @cite_15 , which presented a distributed algorithm for computing balancing node weights based on approximating the left eigenvector associated with the zero eigenvalue of the Laplacian matrix of the underlying directed network. 
In @cite_2 , the authors used this algorithm for updating weights in the same time scale as the update for estimates and showed convergence to average of initial values. Reference @cite_23 provides a similar algorithm for average consensus based on an earlier work @cite_14 , which proposes an update rule for computing balancing edge weights. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_55",
"@cite_39",
"@cite_23",
"@cite_2",
"@cite_51",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"1969847947",
"2110100895",
"",
"",
"1976010987",
"2064403618",
"",
"2040933229",
"2165744313",
"",
""
],
"abstract": [
"A weighted digraph is balanced if, for each node, the sum of the weights of the edges outgoing from that node is equal to the sum of the weights of the edges incoming to that node. Weight-balanced digraphs play a key role in a number of applications, including cooperative control, distributed optimization, and distributed averaging problems. We address the weight-balance problem for a distributed system whose components (nodes) can exchange information via interconnection links (edges) that form an arbitrary, possibly directed, communication topology (digraph). We develop two iterative algorithms, a centralized one and a distributed one, both of which can be used to reach weight-balance, as long as the underlying communication topology forms a strongly connected digraph (or is a collection of strongly connected digraphs). The centralized algorithm is shown to reach weight-balance after a finite number of iterations (bounded by the number of nodes in the graph). The distributed algorithm operates by having each node adapt the weights on its outgoing edges and is shown to asymptotically lead to weight-balance. We also analyze the rate of convergence of the proposed distributed algorithm and obtain a (graph-dependent) worst-case bound for it. Finally, we provide examples to illustrate the operation, performance, and potential advantages of the proposed algorithms.",
"Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination. In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computation of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip. Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs.",
"",
"",
"We propose a class of distributed iterative algorithms that enable the asymptotic scaling of a primitive column stochastic matrix, with a given sparsity structure, to a doubly stochastic form. We also demonstrate the application of these algorithms to the average consensus problem in networked multi-component systems. More specifically, we consider a setting where each node is in charge of assigning weights on its outgoing edges based on the weights on its incoming edges. We establish that, as long as the (generally directed) graph that describes the communication links between components is strongly connected, each of the proposed matrix scaling algorithms allows the system components to asymptotically assign, in a distributed fashion, weights that comprise a primitive doubly stochastic matrix. We also show that the nodes can asymptotically reach average consensus by executing a linear iteration that uses the time-varying weights (as they result at the end of each iteration of the chosen matrix scaling algorithm).",
"In this work we propose a distributed algorithm to solve the discrete-time average consensus problem on strongly connected weighted digraphs (SCWDs). The key idea is to couple the computation of the average with the estimation of the left eigenvector associated with the zero eigenvalue of the Laplacian matrix according to the protocol described in (2012). The major contribution is the removal of the requirement of the knowledge of the out-neighborhood of an agent, thus paving the way for a simple implementation based on a pure broadcast-based communication scheme.",
"",
"In this work we propose a decentralized algorithm for balancing a strongly connected weighted digraph. This algorithm relies on the decentralized estimation of the left eigenvector associated to the zero structural eigenvalue of the Laplacian matrix. The estimation is performed through the distributed computation of the powers of the Laplacian matrix itself. This information can be locally used by each agent to modify the weights of its incoming edges so that their sum is equal to the sum of the weights outgoing this agent, i.e., the weighted digraph is balanced. Simulation results are proposed to corroborate the theoretical results.",
"In a recent Physical Review Letters article, propose a simple but compelling discrete-time model of n autonomous agents (i.e., points or particles) all moving in the plane with the same speed but with different headings. Each agent's heading is updated using a local rule based on the average of its own heading plus the headings of its \"neighbors.\" In their paper, provide simulation results which demonstrate that the nearest neighbor rule they are studying can cause all agents to eventually move in the same direction despite the absence of centralized coordination and despite the fact that each agent's set of nearest neighbors change with time as the system evolves. This paper provides a theoretical explanation for this observed behavior. In addition, convergence results are derived for several other similarly inspired models. The Vicsek model proves to be a graphic example of a switched linear system which is stable, but for which there does not exist a common quadratic Lyapunov function.",
"",
""
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | This section reviews previous methods developed for improving the classification of imbalanced data, such as data re-sampling @cite_4 @cite_32 @cite_7 @cite_37 , adjusting decision thresholds @cite_22 or the inclusion of cost parameters or weights into the classification algorithm @cite_27 @cite_21 @cite_3 @cite_34 @cite_11 @cite_25 @cite_15 @cite_30 . The approach presented in this article is based on the latter. | {
"cite_N": [
"@cite_30",
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_21",
"@cite_32",
"@cite_3",
"@cite_27",
"@cite_15",
"@cite_34",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2103050290",
"1993220166",
"2123977051",
"",
"",
"2074888575",
"2018186689",
"",
"2109098835",
"2154134600",
"2125927049",
""
],
"abstract": [
"",
"Many established classifiers fail to identify the minority class when it is much smaller than the majority class. To tackle this problem, researchers often first rebalance the class sizes in the training dataset, through oversampling the minority class or undersampling the majority class, and then use the rebalanced data to train the classifiers. This leads to interesting empirical patterns. In particular, using the rebalanced training data can often improve the area under the receiver operating characteristic curve (AUC) for the original, unbalanced test data. The AUC is a widely-used quantitative measure of classification performance, but the property that it increases with rebalancing has, as yet, no theoretical explanation. In this note, using Gaussian-based linear discriminant analysis (LDA) as the classifier, we demonstrate that, at least for LDA, there is an intrinsic, positive relationship between the rebalancing of class sizes and the improvement of AUC. We show that the largest improvement of AUC is achieved, asymptotically, when the two classes are fully rebalanced to be of equal sizes.",
"There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is related to class imbalance in which examples in training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real world data describing an infrequent but important event, the learning system may have difficulties to learn the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, allying a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is very competitive to more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex then the ones induced from original data. 
Random over-sampling usually produced the smallest increase in the mean number of induced rules and Smote + ENN the smallest increase in the mean number of conditions per rule, when compared among the investigated over-sampling methods.",
"The problem of learning from imbalanced data sets, while not the same problem as learning when misclassification costs are unequal and unknown, can be handled in a similar manner. That is, in both contexts, we can use techniques from ROC analysis to help with classifier design. We present results from two studies in which we dealt with skewed data sets and unequal, but unknown costs of error. We also compare for one domain these results to those obtained by over-sampling and under-sampling the data set. The operations of sampling, moving the decision threshold, and adjusting the cost matrix produced sets of classifiers that fell on the same ROC curve.",
"",
"",
"In this paper, a novel inverse random under sampling (IRUS) method is proposed for the class imbalance problem. The main idea is to severely under sample the majority class thus creating a large number of distinct training sets. For each training set we then find a decision boundary which separates the minority class from the majority class. By combining the multiple designs through fusion, we construct a composite boundary between the majority class and the minority class. The proposed methodology is applied on 22 UCI data sets and experimental results indicate a significant increase in performance when compared with many existing class-imbalance learning methods. We also present promising results for multi-label classification, a challenging research problem in many modern applications such as music, text and image categorization.",
"Rescaling is possibly the most popular approach to cost-sensitive learning. This approach works by rebalancing the classes according to their costs, and it can be realized in different ways, for example, re-weighting or resampling the training examples in proportion to their costs, moving the decision boundaries of classifiers faraway from high-cost classes in proportion to costs, etc. This approach is very effective in dealing with two-class problems, yet some studies showed that it is often not so helpful on multi-class problems. In this article, we try to explore why the rescaling approach is often helpless on multi-class problems. Our analysis discloses that the rescaling approach works well when the costs are consistent, while directly applying it to multi-class problems with inconsistent costs may not be a good choice. Based on this recognition, we advocate that before applying the rescaling approach, the consistency of the costs must be examined at first. If the costs are consistent, the rescaling approach can be conducted directly; otherwise it is better to apply rescaling after decomposing the multi-class problem into a series of two-class problems. An empirical study involving 20 multi-class data sets and seven types of cost-sensitive learners validates our proposal. Moreover, we show that the proposal is also helpful for class-imbalance learning.",
"",
"In the standard support vector machines for classification, training sets with uneven class sizes results in classification biases towards the class with the large training size. That is to say, the larger the training sample size for one class is, the smaller its corresponding classification error rate is, while the smaller the sample size, the larger the classification error rate. The main causes lie in that the penalty of misclassification for each training sample is considered equally. Weighted support vector machines for classification are proposed in this paper where penalty of misclassification for each training sample is different. By setting the equal penalty for the training samples belonging to same class, and setting the ratio of penalties for different classes to the inverse ratio of the training class sizes, the obtained weighted support vector machines compensate for the undesirable effects caused by the uneven training class size, and the classification accuracy for the class with small training size is improved. Experimental simulations on breast cancer diagnosis show the effectiveness of the proposed methods.",
"Linear Proximal Support Vector Machines (LPSVMs), like decision trees, classic SVM, etc. are originally not equipped to handle drifting data streams that exhibit high and varying degrees of class imbalance. For online classification of data streams with imbalanced class distribution, we propose a dynamic class imbalance learning (DCIL) approach to incremental LPSVM (IncLPSVM) modeling. In doing so, we simplify a computationally non-renewable weighted LPSVM to several core matrices multiplying two simple weight coefficients. When data addition and/or retirement occurs, the proposed DCIL-IncLPSVM accommodates newly presented class imbalance by a simple matrix and coefficient updating, meanwhile ensures no discriminative information lost throughout the learning process. Experiments on benchmark datasets indicate that the proposed DCIL-IncLPSVM outperforms classic IncSVM and IncLPSVM in terms of F-measure and G-mean metrics. Moreover, our application to online face membership authentication shows that the proposed DCIL-IncLPSVM remains effective in the presence of highly dynamic class imbalance, which usually poses serious problems to previous approaches.",
"Abstract Cost-sensitive learning has received increased attention in recent years. However, in existing studies, most of the works are devoted to make decision trees cost-sensitive and very few works discuss cost-sensitive Bayesian network classifiers. In this paper, an instance weighting method is incorporated into various Bayesian network classifiers. The probability estimation of Bayesian network classifiers is modified by the instance weighting method, which makes Bayesian network classifiers cost-sensitive. The experimental results on 36 UCI data sets show that when cost ratio is large, the cost-sensitive Bayesian network classifiers perform well in terms of the total misclassification costs and the number of high cost errors. When cost ratio is small, the advantage of cost-sensitive Bayesian network classifiers is not so obvious in terms of the total misclassification costs, but still obvious in terms of the number of high cost errors, compared to the original cost-insensitive Bayesian network classifiers.",
""
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | The main issue with re-sampling techniques @cite_4 @cite_32 @cite_7 is that the removal or introduction of data can modify the sequence structure and its meaning. This is an issue in some real world applications that require maintaining the original data structure. For example, modifying parts of a sentence for text classification can effectively change the meaning of the message. Similarly, in human activity recognition the time sequence is important to determine the flow of movement or activities and introducing data can change the sequence of activities and the way it is analyzed, affecting transition probabilities in Markov chains. | {
"cite_N": [
"@cite_4",
"@cite_32",
"@cite_7"
],
"mid": [
"1993220166",
"2074888575",
""
],
"abstract": [
"There are several aspects that might influence the performance achieved by existing learning systems. It has been reported that one of these aspects is related to class imbalance in which examples in training data belonging to one class heavily outnumber the examples in the other class. In this situation, which is found in real world data describing an infrequent but important event, the learning system may have difficulties to learn the concept related to the minority class. In this work we perform a broad experimental evaluation involving ten methods, three of them proposed by the authors, to deal with the class imbalance problem in thirteen UCI data sets. Our experiments provide evidence that class imbalance does not systematically hinder the performance of learning systems. In fact, the problem seems to be related to learning with too few minority class examples in the presence of other complicating factors, such as class overlapping. Two of our proposed methods deal with these conditions directly, allying a known over-sampling method with data cleaning methods in order to produce better-defined class clusters. Our comparative experiments show that, in general, over-sampling methods provide more accurate results than under-sampling methods considering the area under the ROC curve (AUC). This result seems to contradict results previously published in the literature. Two of our proposed methods, Smote + Tomek and Smote + ENN, presented very good results for data sets with a small number of positive examples. Moreover, Random over-sampling, a very simple over-sampling method, is very competitive to more complex over-sampling methods. Since the over-sampling methods provided very good performance results, we also measured the syntactic complexity of the decision trees induced from over-sampled data. Our results show that these trees are usually more complex then the ones induced from original data. 
Random over-sampling usually produced the smallest increase in the mean number of induced rules and Smote + ENN the smallest increase in the mean number of conditions per rule, when compared among the investigated over-sampling methods.",
"In this paper, a novel inverse random under sampling (IRUS) method is proposed for the class imbalance problem. The main idea is to severely under sample the majority class thus creating a large number of distinct training sets. For each training set we then find a decision boundary which separates the minority class from the majority class. By combining the multiple designs through fusion, we construct a composite boundary between the majority class and the minority class. The proposed methodology is applied on 22 UCI data sets and experimental results indicate a significant increase in performance when compared with many existing class-imbalance learning methods. We also present promising results for multi-label classification, a challenging research problem in many modern applications such as music, text and image categorization.",
""
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | Decision threshold methods such as that of @cite_22 achieved similar results to re-sampling techniques and used receiver operating characteristic (ROC) curves to decide which decision threshold produces the best performance. However, ROC curves depend on measuring specificity which does not reflect the errors in imbalanced data; this is due to specificity of the minority class being conditioned to its true negative measurement which includes the true positives of the majority class and thus leading to over optimistic results. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2123977051"
],
"abstract": [
"The problem of learning from imbalanced data sets, while not the same problem as learning when misclassification costs are unequal and unknown, can be handled in a similar manner. That is, in both contexts, we can use techniques from ROC analysis to help with classifier design. We present results from two studies in which we dealt with skewed data sets and unequal, but unknown costs of error. We also compare for one domain these results to those obtained by over-sampling and under-sampling the data set. The operations of sampling, moving the decision threshold, and adjusting the cost matrix produced sets of classifiers that fell on the same ROC curve."
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | The method of @cite_15 introduced a fixed set of weights for each class for a binary SVM algorithm. The binary weights ratio was inversely proportional to their respective class population ratio in the training data. This achieved a marginal improvement for the minority class accuracy at the cost of possible overall accuracy reduction. In @cite_25 , weights calculated from the misclassification cost of each class were introduced into Bayesian network classifiers. | {
"cite_N": [
"@cite_15",
"@cite_25"
],
"mid": [
"2109098835",
"2125927049"
],
"abstract": [
"In the standard support vector machines for classification, training sets with uneven class sizes results in classification biases towards the class with the large training size. That is to say, the larger the training sample size for one class is, the smaller its corresponding classification error rate is, while the smaller the sample size, the larger the classification error rate. The main causes lie in that the penalty of misclassification for each training sample is considered equally. Weighted support vector machines for classification are proposed in this paper where penalty of misclassification for each training sample is different. By setting the equal penalty for the training samples belonging to same class, and setting the ratio of penalties for different classes to the inverse ratio of the training class sizes, the obtained weighted support vector machines compensate for the undesirable effects caused by the uneven training class size, and the classification accuracy for the class with small training size is improved. Experimental simulations on breast cancer diagnosis show the effectiveness of the proposed methods.",
"Abstract Cost-sensitive learning has received increased attention in recent years. However, in existing studies, most of the works are devoted to make decision trees cost-sensitive and very few works discuss cost-sensitive Bayesian network classifiers. In this paper, an instance weighting method is incorporated into various Bayesian network classifiers. The probability estimation of Bayesian network classifiers is modified by the instance weighting method, which makes Bayesian network classifiers cost-sensitive. The experimental results on 36 UCI data sets show that when cost ratio is large, the cost-sensitive Bayesian network classifiers perform well in terms of the total misclassification costs and the number of high cost errors. When cost ratio is small, the advantage of cost-sensitive Bayesian network classifiers is not so obvious in terms of the total misclassification costs, but still obvious in terms of the number of high cost errors, compared to the original cost-insensitive Bayesian network classifiers."
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | The study of @cite_24 focused on improving the classification performance by modifying costs according to specific performance tasks (task-wise), such as improving recall, precision, or both (as in @cite_24 ), as opposed to classification error minimization as in previous studies. The method itself did not consider class-specific parameters, and calculating the parameters required an extensive validation process since they were not learned during training. 
The introduction of weights in CRF (WCRF) is not new; however, previous approaches only considered using a fixed set of weights during training for optimization @cite_30 , and finding an optimal set of weights @cite_24 @cite_25 requires an extensive validation process. | {
"cite_N": [
"@cite_24",
"@cite_25",
"@cite_30"
],
"mid": [
"2169819436",
"2125927049",
""
],
"abstract": [
"We describe a method of incorporating task-specific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduced in the speech recognition community, we describe the method generally for structured models, highlight connections to CLL and max-margin learning for structured prediction (, 2003), and show that the method optimizes a bound on risk. The approach is simple, efficient, and easy to implement, requiring very little change to an existing CLL implementation. We present experimental results comparing with several commonly-used methods for training structured predictors for named-entity recognition.",
"Abstract Cost-sensitive learning has received increased attention in recent years. However, in existing studies, most of the works are devoted to make decision trees cost-sensitive and very few works discuss cost-sensitive Bayesian network classifiers. In this paper, an instance weighting method is incorporated into various Bayesian network classifiers. The probability estimation of Bayesian network classifiers is modified by the instance weighting method, which makes Bayesian network classifiers cost-sensitive. The experimental results on 36 UCI data sets show that when cost ratio is large, the cost-sensitive Bayesian network classifiers perform well in terms of the total misclassification costs and the number of high cost errors. When cost ratio is small, the advantage of cost-sensitive Bayesian network classifiers is not so obvious in terms of the total misclassification costs, but still obvious in terms of the number of high cost errors, compared to the original cost-insensitive Bayesian network classifiers.",
""
]
} |
1603.03627 | 2301276134 | The present study introduces a method for improving the classification performance of imbalanced multiclass data streams from wireless body worn sensors. Data imbalance is an inherent problem in activity recognition caused by the irregular time distribution of activities, which are sequential and dependent on previous movements. We use conditional random fields (CRF), a graphical model for structured classification, to take advantage of dependencies between activities in a sequence. However, CRFs do not consider the negative effects of class imbalance during training. We propose a class-wise dynamically weighted CRF (dWCRF) where weights are automatically determined during training by maximizing the expected overall F-score. Our results based on three case studies from a healthcare application using a batteryless body worn sensor, demonstrate that our method, in general, improves overall and minority class F-score when compared to other CRF based classifiers and achieves similar or better overall and class-wise performance when compared to SVM based classifiers under conditions of limited training data. We also confirm the performance of our approach using an additional battery powered body worn sensor dataset, achieving similar results in cases of high class imbalance. | These previously mentioned methods @cite_25 @cite_30 @cite_24 require empirical calculation of parameters. This process can be cumbersome and computationally expensive. For example, in an extensive grid search for suitable weights, the number of validation operations is of the form @math , where @math is the cardinality of the parameters' value range and @math the number of parameters. In addition, objective function optimization based on classification error (1-accuracy) minimization is not suitable for imbalanced data. 
This is because the resulting measure, accuracy, is largely favoured by the dominant class and does not provide performance information regarding the predicted minority class @cite_14 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_25",
"@cite_24"
],
"mid": [
"",
"1766594731",
"2125927049",
"2169819436"
],
"abstract": [
"",
"A dataset is imbalanced if the classification categories are not approximately equally represented. Recent years brought increased interest in applying machine learning techniques to difficult “real-world” problems, many of which are characterized by imbalanced data. Additionally the distribution of the testing data may differ from that of the training data, and the true misclassification costs may be unknown at learning time. Predictive accuracy, a popular choice for evaluating performance of a classifier, might not be appropriate when the data is imbalanced and or the costs of different errors vary markedly. In this Chapter, we discuss some of the sampling techniques used for balancing the datasets, and the performance measures more appropriate for mining imbalanced datasets.",
"Abstract Cost-sensitive learning has received increased attention in recent years. However, in existing studies, most of the works are devoted to make decision trees cost-sensitive and very few works discuss cost-sensitive Bayesian network classifiers. In this paper, an instance weighting method is incorporated into various Bayesian network classifiers. The probability estimation of Bayesian network classifiers is modified by the instance weighting method, which makes Bayesian network classifiers cost-sensitive. The experimental results on 36 UCI data sets show that when cost ratio is large, the cost-sensitive Bayesian network classifiers perform well in terms of the total misclassification costs and the number of high cost errors. When cost ratio is small, the advantage of cost-sensitive Bayesian network classifiers is not so obvious in terms of the total misclassification costs, but still obvious in terms of the number of high cost errors, compared to the original cost-insensitive Bayesian network classifiers.",
"We describe a method of incorporating task-specific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduced in the speech recognition community, we describe the method generally for structured models, highlight connections to CLL and max-margin learning for structured prediction (, 2003), and show that the method optimizes a bound on risk. The approach is simple, efficient, and easy to implement, requiring very little change to an existing CLL implementation. We present experimental results comparing with several commonly-used methods for training structured predictors for named-entity recognition."
]
} |
1603.03234 | 2296447001 | Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods. | Hashing methods can be divided into data independent hashing and data dependent hashing. The early efforts mainly focus on data independent hashing. For example, the notable Locality-Sensitive Hashing (LSH) @cite_23 method constructs hash functions by random projections or random permutations that are independent of the data points. The main limitation of data independent methods is that they usually require long hash codes to obtain good performance. However, long hash codes lead to inefficient search due to the required large storage space and the low recall rates. | {
"cite_N": [
"@cite_23"
],
"mid": [
"1502916507"
],
"abstract": [
"The nearest or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the \"curse of dimensionality.\" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing. The basic idea is to hash the points from the database so as to ensure that the probability of collision is much higher for objects that are close to each other than for those that are far apart. We provide experimental evidence that our method gives significant improvement in running time over other methods for searching in high-dimensional spaces based on hierarchical tree decomposition. Experimental results also indicate that our scheme scales well even for a relatively large number of dimensions (more than 50)."
]
} |
1603.03234 | 2296447001 | Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods. | Unsupervised methods try to learn a set of similarity-preserving hash functions only from the unlabeled data. Representative methods in this category include Kernelized LSH (KLSH) @cite_19 , Semantic hashing @cite_21 , Spectral hashing @cite_29 , Anchor Graph Hashing @cite_11 , and Iterative Quantization (ITQ) @cite_25 . Kernelized LSH (KLSH) @cite_19 generalizes LSH to accommodate arbitrary kernel functions, making it possible to learn hash functions which preserve data points' similarity in a kernel space. 
Semantic hashing @cite_21 generates hash functions by a deep auto-encoder via stacking multiple restricted Boltzmann machines (RBMs). Graph-based hashing methods, such as Spectral hashing @cite_29 and Anchor Graph Hashing @cite_11 , learn non-linear mappings as hash functions which try to preserve the similarities within the data neighborhood graph. In order to reduce the quantization errors, Iterative Quantization (ITQ) @cite_25 seeks to learn an orthogonal rotation matrix which is applied to the data matrix after principal component analysis projections. | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_19",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"205159212",
"2171790913",
"2084363474",
"2251864938"
],
"abstract": [
"",
"A dental model trimmer having an easily replaceable abrasive surfaced member. The abrasive surfaced member is contained within a housing and is releasably coupled onto a back plate assembly which is driven by a drive motor. The housing includes a releasably coupled cover plate providing access to the abrasive surfaced member. An opening formed in the cover plate exposes a portion of the abrasive surface so that a dental model workpiece can be inserted into the opening against the abrasive surface to permit work on the dental model workpiece. A tilting work table beneath the opening supports the workpiece during the operation. A stream of water is directed through the front cover onto the abrasive surface and is redirected against this surface by means of baffles positioned inside the cover plate. The opening includes a beveled boundary and an inwardly directed lip permitting angular manipulation of the workpiece, better visibility of the workpiece and maximum safety.",
"Fast retrieval methods are critical for large-scale and data-driven vision applications. Recent work has explored ways to embed high-dimensional features or complex distance functions into a low-dimensional Hamming space where items can be efficiently searched. However, existing methods do not apply for high-dimensional kernelized data when the underlying feature embedding for the kernel is unknown. We show how to generalize locality-sensitive hashing to accommodate arbitrary kernel functions, making it possible to preserve the algorithm's sub-linear time similarity search guarantees for a wide class of useful similarity functions. Since a number of successful image-based kernels have unknown or incomputable embeddings, this is especially valuable for image retrieval tasks. We validate our technique on several large-scale datasets, and show that it enables accurate and fast performance for example-based object classification, feature matching, and content-based retrieval.",
"This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.",
"Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method."
]
} |
1603.03234 | 2296447001 | Similarity-preserving hashing is a commonly used method for nearest neighbor search in large-scale image retrieval. For image retrieval, deep-network-based hashing methods are appealing, since they can simultaneously learn effective image representations and compact hash codes. This paper focuses on deep-network-based hashing for multi-label images, each of which may contain objects of multiple categories. In most existing hashing methods, each image is represented by one piece of hash code, which is referred to as semantic hashing. This setting may be suboptimal for multi-label image retrieval. To solve this problem, we propose a deep architecture that learns instance-aware image representations for multi-label image data, which are organized in multiple groups, with each group containing the features for one category. The instance-aware representations not only bring advantages to semantic hashing but also can be used in category-aware hashing, in which an image is represented by multiple pieces of hash codes and each piece of code corresponds to a category. Extensive evaluations conducted on several benchmark data sets demonstrate that for both the semantic hashing and the category-aware hashing, the proposed method shows substantial improvement over the state-of-the-art supervised and unsupervised hashing methods. | In supervised hashing methods for image retrieval, an emerging stream is the deep-networks-based methods @cite_16 @cite_33 @cite_36 @cite_27 which learn image representations as well as binary hash codes. Xia @cite_33 proposed Convolutional-Neural-Networks-based Hashing (CNNH), which is a two-stage method. In its first stage, approximate hash codes are learned from the supervised information. Then, in the second stage, hash functions are learned based on those approximate hash codes via deep convolutional networks. 
Lai @cite_36 proposed a one-stage hashing method that generates bitwise hash codes via a carefully designed deep architecture. Zhao @cite_27 proposed a ranking-based hashing method for learning hash functions that preserve multi-level semantic similarity between images, via deep convolutional networks. Lin @cite_10 proposed to learn the hash codes and image representations in a point-wise manner, which is suitable for large-scale datasets. Wang @cite_34 proposed the Deep Multimodal Hashing with Orthogonal Regularization (DMHOR) method for multimodal data. All of these methods generate one piece of hash code for each image, which may be inappropriate for multi-label image retrieval. Different from the existing methods, the proposed method can generate multiple pieces of hash codes for an image, each piece corresponding to an instance category. | {
"cite_N": [
"@cite_33",
"@cite_36",
"@cite_34",
"@cite_27",
"@cite_16",
"@cite_10"
],
"mid": [
"2293824885",
"1939575207",
"2180844455",
"1923967535",
"2154956324",
"1913628733"
],
"abstract": [
"Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.",
"Similarity-preserving hashing is a widely-used method for nearest neighbour search in large-scale image retrieval tasks. For most existing hashing methods, an image is first encoded as a vector of hand-engineering visual features, followed by another separate projection or quantization step that generates binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hashing codes. In this paper, we propose a deep architecture for supervised hashing, in which images are mapped into binary codes via carefully designed deep neural networks. The pipeline of the proposed deep architecture consists of three building blocks: 1) a sub-network with a stack of convolution layers to produce the effective intermediate image features; 2) a divide-and-encode module to divide the intermediate image features into multiple branches, each encoded into one hash bit; and 3) a triplet ranking loss designed to characterize that one image is more similar to the second image than to the third one. Extensive evaluations on several benchmark image datasets show that the proposed simultaneous feature learning and hash coding pipeline brings substantial improvements over other state-of-the-art supervised or unsupervised hashing methods.",
"Hashing is an important method for performing efficient similarity search. With the explosive growth of multimodal data, how to learn hashing-based compact representations for multimodal data becomes highly non-trivial. Compared with shallow-structured models, deep models present superiority in capturing multimodal correlations due to their high nonlinearity. However, in order to make the learned representation more accurate and compact, how to reduce the redundant information lying in the multimodal representations and incorporate different complexities of different modalities in the deep models is still an open problem. In this paper, we propose a novel deep multimodal hashing method, namely Deep Multimodal Hashing with Orthogonal Regularization (DMHOR), which fully exploits intra-modality and inter-modality correlations. In particular, to reduce redundant information, we impose orthogonal regularizer on the weighting matrices of the model, and theoretically prove that the learned representation is guaranteed to be approximately orthogonal. Moreover, we find that a better representation can be attained with different numbers of layers for different modalities, due to their different complexities. Comprehensive experiments on WIKI and NUS-WIDE, demonstrate a substantial gain of DMHOR compared with state-of-the-art methods.",
"With the rapid growth of web images, hashing has received increasing interests in large scale image retrieval. Research efforts have been devoted to learning compact binary codes that preserve semantic similarity based on labels. However, most of these hashing methods are designed to handle simple binary similarity. The complex multi-level semantic structure of images associated with multiple labels have not yet been well explored. Here we propose a deep semantic ranking based method for learning hash functions that preserve multilevel semantic similarity between multi-label images. In our approach, deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limitation of semantic representation power of hand-crafted features. Meanwhile, a ranking list that encodes the multilevel similarity information is employed to guide the learning of such deep hash functions. An effective scheme based on surrogate loss is used to solve the intractable optimization problem of nonsmooth and multivariate ranking measures involved in the learning procedure. Experimental results show the superiority of our proposed approach over several state-of-the-art hashing methods in term of ranking evaluation metrics when tested on multi-label image datasets.",
"The Internet contains billions of images, freely available online. Methods for efficiently searching this incredibly rich resource are vital for a large number of applications. These include object recognition, computer graphics, personal photo collections, online image search tools. In this paper, our goal is to develop efficient image search and scene matching techniques that are not only fast, but also require very little memory, enabling their use on standard hardware or even on handheld devices. Our approach uses recently developed machine learning techniques to convert the Gist descriptor (a real valued vector that describes orientation energies at different scales and orientations within an image) to a compact binary code, with a few hundred bits per image. Using our scheme, it is possible to perform real-time searches with millions from the Internet using a single large PC and obtain recognition results comparable to the full descriptor. Using our codes on high quality labeled images from the LabelMe database gives surprisingly powerful recognition results using simple nearest neighbor techniques.",
"Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images."
]
} |
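The hashing abstracts above all revolve around mapping real-valued image descriptors to compact binary codes so that similarity search reduces to fast Hamming-distance comparisons. As a generic illustration of that underlying idea (a random-hyperplane, SimHash-style sketch, not the learned deep-hashing methods those papers actually propose), one might encode vectors like this:

```python
import random

random.seed(1)

def random_hyperplane_hasher(dim, n_bits):
    """Return a function mapping a real vector of length `dim` to an
    n_bits binary code via random Gaussian hyperplanes (SimHash-style)."""
    planes = [[random.gauss(0.0, 1.0) for _ in range(dim)]
              for _ in range(n_bits)]
    def encode(v):
        # One bit per hyperplane: which side of the plane the vector lies on.
        return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0.0)
                     for plane in planes)
    return encode

def hamming(a, b):
    """Hamming distance between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

# Demo on synthetic descriptors: a vector, a slightly perturbed copy,
# and an unrelated vector. Nearby inputs should yield nearby codes.
encode = random_hyperplane_hasher(dim=32, n_bits=64)
v = [random.gauss(0.0, 1.0) for _ in range(32)]
v_near = [x + random.gauss(0.0, 0.01) for x in v]
v_far = [random.gauss(0.0, 1.0) for _ in range(32)]
d_near = hamming(encode(v), encode(v_near))
d_far = hamming(encode(v), encode(v_far))
```

The learned methods in the abstracts replace the random hyperplanes with data-dependent projections (deep networks, orthogonality constraints, ranking losses), but the retrieval side — compact codes compared by Hamming distance — is the same.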
1603.03099 | 2296105616 | In this paper, we propose a framework to infer the topic preferences of Donald Trump's followers on Twitter. We first use latent Dirichlet allocation (LDA) to derive the weighted mixture of topics for each Trump tweet. Then we use negative binomial regression to model the "likes," with the weights of each topic serving as explanatory variables. Our study shows that attacking Democrats such as President Obama and former Secretary of State Hillary Clinton earns Trump the most "likes." Our framework of inference is generalizable to the study of other politicians. | There are also quite a few studies modeling individual behaviors in social media. @cite_3 models the decision to retweet, using Twitter user features such as agreeableness, number of tweets posted, and daily tweeting patterns. @cite_2 models individuals' waiting time before replying to a tweet based on their previous replying patterns. Our study models the number of "likes" that a Trump tweet receives. Our innovation is to use tweet-specific features instead of the individual-specific features used in the above-cited literature. | {
"cite_N": [
"@cite_3",
"@cite_2"
],
"mid": [
"2025310159",
"2223407974"
],
"abstract": [
"There has been much effort on studying how social media sites, such as Twitter, help propagate information in different situations, including spreading alerts and SOS messages in an emergency. However, existing work has not addressed how to actively identify and engage the right strangers at the right time on social media to help effectively propagate intended information within a desired time frame. To address this problem, we have developed three models: (1) a feature-based model that leverages people's exhibited social behavior, including the content of their tweets and social interactions, to characterize their willingness and readiness to propagate information on Twitter via the act of retweeting; (2) a wait-time model based on a user's previous retweeting wait times to predict his or her next retweeting time when asked; and (3) a subset selection model that automatically selects a subset of people from a set of available people using probabilities predicted by the feature-based model and maximizes retweeting rate. Based on these three models, we build a recommender system that predicts the likelihood of a stranger to retweet information when asked, within a specific time window, and recommends the top-N qualified strangers to engage with. Our experiments, including live studies in the real world, demonstrate the effectiveness of our work.",
"We present a study analyzing the response times of users to questions on Twitter. We investigate estimating these response times using an exponential distribution-based wait time model learned from users’ previous responses. Our analysis considers several different model building approaches, including personalized models for each user, general models built for all users, and time-sensitive models specific to a day of the week or hour of the day. Our evaluation using a real world question-answer dataset shows the effectiveness of our approach."
]
} |
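The 1603.03099 row above describes regressing per-tweet "like" counts on LDA topic weights via negative binomial regression. As an illustration of that modeling step only (a minimal pure-Python NB2 fit with fixed dispersion on hypothetical data, not the authors' code or their actual topic model output), the regression could be sketched as:

```python
import math

def nb2_fit(X, y, alpha=1.0, lr=0.1, iters=20000):
    """Fit an NB2 regression E[y] = exp(X @ beta) with fixed dispersion
    alpha by gradient ascent on the log-likelihood. The score w.r.t. beta
    is sum_i (y_i - mu_i) / (1 + alpha * mu_i) * x_i."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            mu = math.exp(sum(b * x for b, x in zip(beta, xi)))
            w = (yi - mu) / (1.0 + alpha * mu)
            for j in range(p):
                grad[j] += w * xi[j]
        for j in range(p):
            beta[j] += lr * grad[j] / n
    return beta

# Hypothetical data: each "tweet" has an intercept and one topic weight
# (a stand-in for an LDA topic proportion); like counts come from the
# true mean exp(2.0 + 1.5 * weight), so the fit should recover ~(2.0, 1.5).
topic_weights = [i / 10 for i in range(10)]
X = [[1.0, w] for w in topic_weights]
y = [round(math.exp(2.0 + 1.5 * w)) for w in topic_weights]
beta = nb2_fit(X, y)
```

In the paper's setting there would be one column per LDA topic rather than a single weight, and a positive coefficient on a topic indicates that tweets weighted toward that topic attract more likes.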