Columns:
aid: string, 9-15 chars
mid: string, 7-10 chars
abstract: string, 78-2.56k chars
related_work: string, 92-1.77k chars
ref_abstract: dict
1602.01125
2261888986
In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach, that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features.
Fitting a 3DMM to a 2D image using only geometric features (i.e. landmarks and edges) is essentially a non-rigid alignment problem. Surprisingly, the idea of employing an iterated closest point @cite_19 approach with hard edge correspondences (in a similar manner to ASM fitting) has been discounted in the literature @cite_32 . In this paper, we pursue this idea and develop an iterative 3DMM fitting algorithm that is fully automatic, simple and efficient (our Matlab implementation is available at http://github.com/waps101/3DMM_edges and http://github.com/waps101/3DMM ). Instead of working in a transformed distance-to-edge space and treating correspondences as "soft", we compute an explicit correspondence between model and image edges. This allows us to treat each model edge vertex as a landmark with known 2D position, for which optimal pose or shape estimates can be easily computed.
{ "cite_N": [ "@cite_19", "@cite_32" ], "mid": [ "2049981393", "2155211928" ], "abstract": [ "The authors describe a general-purpose, representation-independent method for the accurate and computationally efficient registration of 3-D shapes including free-form curves and surfaces. The method handles the full six degrees of freedom and is based on the iterative closest point (ICP) algorithm, which requires only a procedure to find the closest point on a geometric entity to a given point. The ICP algorithm always converges monotonically to the nearest local minimum of a mean-square distance metric, and the rate of convergence is rapid during the first few iterations. Therefore, given an adequate set of initial rotations and translations for a particular class of objects with a certain level of 'shape complexity', one can globally minimize the mean-square distance metric over all six degrees of freedom by testing each initial registration. One important application of this method is to register sensed data from unfixtured rigid objects with an ideal geometric model, prior to shape inspection. Experimental results show the capabilities of the registration algorithm on point sets, curves, and surfaces.", "We present a novel algorithm aiming to estimate the 3D shape, the texture of a human face, along with the 3D pose and the light direction from a single photograph by recovering the parameters of a 3D morphable model. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixels intensity as input to drive the estimation process. This was previously achieved using either a simple model, such as the Lambertian reflectance model, leading to a linear fitting algorithm. Alternatively, this problem was addressed using a more precise model and minimizing a non-convex cost function with many local minima. One way to reduce the local minima problem is to use a stochastic optimization algorithm. 
However, the convergence properties (such as the radius of convergence) of such algorithms, are limited. Here, as well as the pixel intensity, we use various image features such as the edges or the location of the specular highlights. The 3D shape, texture and imaging parameters are then estimated by maximizing the posterior of the parameters given these image features. The overall cost function obtained is smoother and, hence, a stochastic optimization algorithm is not needed to avoid the local minima problem. This leads to the multi-features fitting algorithm that has a wider radius of convergence and a higher level of precision. This is shown on some example photographs, and on a recognition experiment performed on the CMU-PIE image database." ] }
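The alternation described above (hard model-to-edge correspondences, then a linear least-squares transform update, as in ICP) can be sketched in a few lines. This is an illustrative toy, not the authors' code: it aligns a 2D point set to edge pixels under a similarity transform, whereas the paper alternately solves for camera pose and 3DMM shape parameters, each also a linear least-squares problem.

```python
import numpy as np

def fit_similarity(src, dst):
    # Closed-form least-squares similarity transform (scale s, rotation R,
    # translation t) mapping src -> dst (Umeyama / Procrustes).
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    d = np.ones(src.shape[1])
    if np.linalg.det(Vt.T @ U.T) < 0:  # guard against reflections
        d[-1] = -1.0
    R = Vt.T @ np.diag(d) @ U.T
    s = (sig * d).sum() / (S ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t

def icp_edges(model_pts, edge_pts, n_iters=20):
    # ICP-style alternation: hard nearest-edge correspondence assignment,
    # then a linear least-squares transform update, repeated.
    s, R, t = 1.0, np.eye(2), np.zeros(2)
    for _ in range(n_iters):
        proj = s * (model_pts @ R.T) + t
        d2 = ((proj[:, None, :] - edge_pts[None, :, :]) ** 2).sum(axis=-1)
        matched = edge_pts[d2.argmin(axis=1)]  # hard correspondence
        s, R, t = fit_similarity(model_pts, matched)
    return s, R, t

# Toy check: recover a known similarity transform of a point grid.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5),
                            np.linspace(0, 1, 5)), -1).reshape(-1, 2)
ang = 0.02
R_true = np.array([[np.cos(ang), -np.sin(ang)],
                   [np.sin(ang),  np.cos(ang)]])
edges = 1.02 * grid @ R_true.T + np.array([0.02, -0.01])
s_est, R_est, t_est = icp_edges(grid, edges)
```

On clean data with a mild transform the first hard assignment is already correct and the alternation converges immediately; in the paper the same alternating structure serves as an efficient initialisation for the subsequent non-convex refinement.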
1602.01125
2261888986
In this paper we explore the problem of fitting a 3D morphable model to single face images using only sparse geometric features (edges and landmark points). Previous approaches to this problem are based on nonlinear optimisation of an edge-derived cost that can be viewed as forming soft correspondences between model and image edges. We propose a novel approach, that explicitly computes hard correspondences. The resulting objective function is non-convex but we show that a good initialisation can be obtained efficiently using alternating linear least squares in a manner similar to the iterated closest point algorithm. We present experimental results on both synthetic and real images and show that our approach outperforms methods that use soft correspondence and other recent methods that rely solely on geometric features.
State of the art. The most recent face shape estimation methods are able to obtain considerably higher-quality results than the purely model-based approaches above. They do so by using pixel-wise shading or motion information to apply fine-scale refinement to an initial shape estimate. For example, Suwajanakorn et al. @cite_2 use photo collections to build an average model of an individual, which is then fitted to a video, with fine-scale detail added by optical flow and shape-from-shading. Cao et al. @cite_27 take a machine learning approach and train a regressor that predicts high-resolution shape detail from local appearance.
{ "cite_N": [ "@cite_27", "@cite_2" ], "mid": [ "2062712751", "125358319" ], "abstract": [ "We present the first real-time high-fidelity facial capture method. The core idea is to enhance a global real-time face tracker, which provides a low-resolution face mesh, with local regressors that add in medium-scale details, such as expression wrinkles. Our main observation is that although wrinkles appear in different scales and at different locations on the face, they are locally very self-similar and their visual appearance is a direct consequence of their local shape. We therefore train local regressors from high-resolution capture data in order to predict the local geometry from local appearance at runtime. We propose an automatic way to detect and align the local patches required to train the regressors and run them efficiently in real-time. Our formulation is particularly designed to enhance the low-resolution global tracker with exactly the missing expression frequencies, avoiding superimposing spatial frequencies in the result. Our system is generic and can be applied to any real-time tracker that uses a global prior, e.g. blend-shapes. Once trained, our online capture approach can be applied to any new user without additional training, resulting in high-fidelity facial performance reconstruction with person-specific wrinkle details from a monocular video camera in real-time.", "We present an approach that takes a single video of a person’s face and reconstructs a high detail 3D shape for each video frame. We target videos taken under uncontrolled and uncalibrated imaging conditions, such as youtube videos of celebrities. In the heart of this work is a new dense 3D flow estimation method coupled with shape from shading. Unlike related works we do not assume availability of a blend shape model, nor require the person to participate in a training capturing process. 
Instead we leverage the large amounts of photos that are available per individual in personal or internet photo collections. We show results for a variety of video sequences that include various lighting conditions, head poses, and facial expressions." ] }
1602.00753
2263785794
Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.
A few researchers @cite_16 @cite_1 use manually curated commonsense knowledge bases such as OpenCyc @cite_32 for answering questions about numerical information. These knowledge resources (e.g., ConceptNet @cite_20 ) usually consist of taxonomic assertions or generic relations, but do not include size information, and manual annotation of such knowledge does not scale. Our efforts will make it possible to populate such knowledge bases (esp. ConceptNet) with size information at scale.
{ "cite_N": [ "@cite_16", "@cite_1", "@cite_32", "@cite_20" ], "mid": [ "2101338408", "135648563", "2107658650", "" ], "abstract": [ "Summary Our analysis of the contribution of Cyc this year showed that the major limiting factor is still in the area of coverage. Major manual effort is required both to generate appropriate semantic forms and to map to Cyc’s predicates, and also to add instance information into Cyc. With the current state of the system, Cyc helps more to improve our answer confidences (not a part of the evaluation this year) than to get answers right. The major novelty in our system this year was the implementation of QA-by-Dossier to answer Definition questions. Here, a collection of predetermined factoid questions are asked about the subject in order to gather facts that seem to be typically mentioned in definitional articles in newspapers and reference works. An advantage of this method over others which locate definitional syntactic constructs is that our system “knows” the nature of the relationship of the retrieved item to the subject. In the evaluation, we felt that our system had performed relatively well according to our expectations of what was required, but we were very disappointed to find that the NIST assessors had different opinions regarding acceptable answers.", "Question answering systems can benefit from the incorporation of a broad range of technologies, including natural language processing, machine learning, information retrieval, knowledge representation, and automated reasoning. We have designed an architecture that identifies the essential roles of components in a question answering system. This architecture greatly facilitates experimentation by enabling comparisons between different choices for filling the component roles, and also provides a framework for exploring hybridization of techniques ‐ that is, combining different approaches to question answering. 
We present results from an initial experiment that illustrate substantial performance improvement by combining statistical and linguistic approaches to question answering. We also present preliminary and encouraging results involving the incorporation of a large knowledge base.", "Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 10 5 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 10 6 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.", "" ] }
1602.00753
2263785794
Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.
Identifying numerical attributes of objects has been addressed in NLP recently. The common theme in the recent work @cite_28 @cite_18 @cite_3 @cite_2 @cite_26 is to use search query templates with other textual cues (e.g., "more than", "at least", "as many as"), collect numerical values, and model sizes as a normal distribution. However, the quality and scale of such extraction is somewhat limited. Similar to previous work showing that textual and visual information are complementary across different domains @cite_4 @cite_6 @cite_23 , we show that a successful size estimation method should also take advantage of both modalities. In particular, our experiments show that textual observations about the relative sizes of objects are very limited, and relative size comparisons are better collected through visual data. In addition, we show that a log-normal distribution is a better model for representing sizes than a normal distribution.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_28", "@cite_3", "@cite_6", "@cite_23", "@cite_2" ], "mid": [ "2124104135", "2251322640", "2250564385", "1979144169", "2130508301", "1964763677", "2188036686", "1862719289" ], "abstract": [ "We present a novel framework for automated extraction and approximation of numerical object attributes such as height and weight from the Web. Given an object-attribute pair, we discover and analyze attribute information for a set of comparable objects in order to infer the desired value. This allows us to approximate the desired numerical values even when no exact values can be found in the text. Our framework makes use of relation defining patterns and WordNet similarity information. First, we obtain from the Web and WordNet a list of terms similar to the given object. Then we retrieve attribute values for each term in this list, and information that allows us to compare different objects in the list and to infer the attribute value range. Finally, we combine the retrieved data for all terms from the list to select or approximate the requested value. We evaluate our method using automated question answering, WordNet enrichment, and comparison with answers given in Wikipedia and by leading search engines. In all of these, our framework provides a significant improvement.", "This paper presents novel methods for modeling numerical common sense: the ability to infer whether a given number (e.g., three billion) is large, small, or normal for a given context (e.g., number of people facing a water shortage). We first discuss the necessity of numerical common sense in solving textual entailment problems. We explore two approaches for acquiring numerical common sense. Both approaches start with extracting numerical expressions and their context from the Web. One approach estimates the distribution of numbers co-occurring within a context and examines whether a given value is large, small, or normal, based on the distribution. 
Another approach utilizes textual patterns with which speakers explicitly expresses their judgment about the value of a numerical expression. Experimental results demonstrate the effectiveness of both approaches.", "This paper introduces GEOS, the first automated system to solve unaltered SAT geometry questions by combining text understanding and diagram interpretation. We model the problem of understanding geometry questions as submodular optimization, and identify a formal problem description likely to be compatible with both the question text and diagram. GEOS then feeds the description to a geometric solver that attempts to determine the correct answer. In our experiments, GEOS achieves a 49 score on official SAT questions, and a score of 61 on practice questions. 1 Finally, we show that by integrating textual and visual information, GEOS boosts the accuracy of dependency and semantic parsing of the question text.", "Although researchers have shown increasing interest in extracting classifying semantic relations, most previous studies have basically relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures a physical size of an entity. Experimental results revealed that our proposed method is feasible and prevents the problems inherent in other methods.", "Textual entailment recognition is the task of deciding, when given two text fragments, whether the meaning of one text is entailed from the other text. This year, at our second participation in the RTE competition, we improve the system built for the RTE3 competition. The main idea of our system is to map every word from hypothesis to one or more words from the text. For that, we transform the hypothesis making use of extensive semantic knowledge from sources like DIRT, WordNet, VerbOcean, Wikipedia and Acronyms database. 
After the mapping process, we associate a local fitness value to every word from hypothesis, which is used to calculate a global fitness value for current fragments of text. The global fitness value is decreased in cases in which a word from hypothesis cannot be map to one word from the text or when we have different forms of negations for mapped verbs. In the end, using thresholds identified in the training step for global fitness values, we decide for every pair from test data if we have entailment or not.", "We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances.", "We introduce Segment-Phrase Table (SPT), a large collection of bijective associations between textual phrases and their corresponding segmentations. Leveraging recent progress in object recognition and natural language semantics, we show how we can successfully build a high-quality segment-phrase table using minimal human supervision. More importantly, we demonstrate the unique value unleashed by this rich bimodal resource, for both vision as well as natural language understanding. 
First, we show that fine-grained textual labels facilitate contextual reasoning that helps in satisfying semantic constraints across image segments. This feature enables us to achieve state-of-the-art segmentation results on benchmark datasets. Next, we show that the association of high-quality segmentations to textual phrases aids in richer semantic understanding and reasoning of these textual phrases. Leveraging this feature, we motivate the problem of visual entailment and visual paraphrasing, and demonstrate its utility on a large dataset.", "Applications are increasingly expected to make smart decisions based on what humans consider basic commonsense. An often overlooked but essential form of commonsense involves comparisons, e.g. the fact that bears are typically more dangerous than dogs, that tables are heavier than chairs, or that ice is colder than water. In this paper, we first rely on open information extraction methods to obtain large amounts of comparisons from the Web. We then develop a joint optimization model for cleaning and disambiguating this knowledge with respect to WordNet. This model relies on integer linear programming and semantic coherence scores. Experiments show that our model outperforms strong baselines and allows us to obtain a large knowledge base of disambiguated commonsense assertions." ] }
1602.00753
2263785794
Human vision greatly benefits from the information about sizes of objects. The role of size in several visual reasoning tasks has been thoroughly explored in human perception and cognition. However, the impact of the information about sizes of objects is yet to be determined in AI. We postulate that this is mainly attributed to the lack of a comprehensive repository of size information. In this paper, we introduce a method to automatically infer object sizes, leveraging visual and textual information from web. By maximizing the joint likelihood of textual and visual observations, our method learns reliable relative size estimates, with no explicit human supervision. We introduce the relative size dataset and show that our method outperforms competitive textual and visual baselines in reasoning about size comparisons.
In computer vision, size information manually extracted from furniture catalogs has been shown to be effective in indoor scene understanding and reconstruction @cite_13 . However, size information does not yet play a major role in mainstream computer vision tasks. This might be due to the fact that there is no unified and comprehensive resource for object sizes. The visual size of an object depends on multiple factors, including its distance from the camera and the viewpoint. Single-image depth estimation has been an active topic in computer vision @cite_10 @cite_0 @cite_24 @cite_12 @cite_5 . In this paper, we use @cite_29 for single-image depth estimation.
{ "cite_N": [ "@cite_13", "@cite_29", "@cite_0", "@cite_24", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2126836862", "2951234442", "2534523274", "2026203852", "1992178727", "2109443835", "2158211626" ], "abstract": [ "We propose a method for understanding the 3D geometry of indoor environments (e.g. bedrooms, kitchens) while simultaneously identifying objects in the scene (e.g. beds, couches, doors). We focus on how modeling the geometry and location of specific objects is helpful for indoor scene understanding. For example, beds are shorter than they are wide, and are more likely to be in the center of the room than cabinets, which are tall and narrow. We use a generative statistical model that integrates a camera model, an enclosing room “box”, frames (windows, doors, pictures), and objects (beds, tables, couches, cabinets), each with their own prior on size, relative dimensions, and locations. We fit the parameters of this complex, multi-dimensional statistical model using an MCMC sampling approach that combines discrete changes (e.g, adding a bed), and continuous parameter changes (e.g., making the bed larger). We find that introducing object category leads to state-of-the-art performance on room layout estimation, while also enabling recognition based only on geometry.", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. 
We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "In this paper, we consider the problem of recovering the spatial layout of indoor scenes from monocular images. The presence of clutter is a major problem for existing single-view 3D reconstruction algorithms, most of which rely on finding the ground-wall boundary. In most rooms, this boundary is partially or entirely occluded. We gain robustness to clutter by modeling the global room space with a parameteric 3D “box” and by iteratively localizing clutter and refitting the box. To fit the box, we introduce a structured learning algorithm that chooses the set of parameters to minimize error, based on global perspective cues. On a dataset of 308 images, we demonstrate the ability of our algorithm to recover spatial layout in cluttered rooms and show several examples of estimated free space.", "We consider the problem of estimating the depth of each pixel in a scene from a single monocular image. Unlike traditional approaches [18, 19], which attempt to map from appearance features to depth directly, we first perform a semantic segmentation of the scene and use the semantic labels to guide the 3D reconstruction. This approach provides several advantages: By knowing the semantic class of a pixel or region, depth and geometry constraints can be easily enforced (e.g., “sky” is far away and “ground” is horizontal). In addition, depth can be more readily predicted by measuring the difference in appearance with respect to a given semantic class. For example, a tree will have more uniform appearance in the distance than it does close up. 
Finally, the incorporation of semantic features allows us to achieve state-of-the-art results with a significantly simpler model than previous works.", "The limitations of current state-of-the-art methods for single-view depth estimation and semantic segmentations are closely tied to the property of perspective geometry, that the perceived size of the objects scales inversely with the distance. In this paper, we show that we can use this property to reduce the learning of a pixel-wise depth classifier to a much simpler classifier predicting only the likelihood of a pixel being at an arbitrarily fixed canonical depth. The likelihoods for any other depths can be obtained by applying the same classifier after appropriate image manipulations. Such transformation of the problem to the canonical depth removes the training data bias towards certain depths and the effect of perspective. The approach can be straight-forwardly generalized to multiple semantic classes, improving both depth estimation and semantic segmentation performance by directly targeting the weaknesses of independent approaches. Conditioning the semantic label on the depth provides a way to align the data to their physical scale, allowing to learn a more discriminative classifier. Conditioning depth on the semantic class helps the classifier to distinguish between ambiguities of the otherwise ill-posed problem. We tested our algorithm on the KITTI road scene dataset and NYU2 indoor dataset and obtained obtained results that significantly outperform current state-of-the-art in both single-view depth and semantic segmentation domain.", "When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. 
Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floorwall\" geometry on the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images.", "We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps." ] }
1602.00802
2272612507
Co-existence between unlicensed networks that share a spectrum spatio-temporally with terrestrial (e.g., Air Traffic Control) and shipborne radars in the 3 GHz band is attracting significant interest. As in every primary-secondary coexistence scenario, interference from unlicensed devices to a primary receiver must be within acceptable bounds. In this work, we formulate the spectrum sharing problem between a pulsed search radar (primary) and an 802.11 wireless local area network (WLAN) as the secondary. We compute the protection region for such a search radar for 1) a single secondary user (initially) as well as 2) a random spatial distribution of multiple secondary users. Furthermore, we analyze the interference to the Wi-Fi devices from the radar's transmissions to estimate the impact on achievable WLAN throughput as a function of distance to the primary radar.
There is growing interest in radar spectrum sharing from both regulators and researchers @cite_13 @cite_20 @cite_23 @cite_16 @cite_31 @cite_12 @cite_7 @cite_22 @cite_25 @cite_3 @cite_19 @cite_9 @cite_1 @cite_28 @cite_21 @cite_11 @cite_24 @cite_17 . DARPA's SSPARC program @cite_20 is a good example; it seeks to support two types of sharing: a) military-military sharing, between military radars and military communication systems, to increase the capabilities of both; and b) military-commercial sharing, between military radars and commercial communication systems, to preserve radar capabilities while meeting the need for increased capacity in commercial networks.
{ "cite_N": [ "@cite_25", "@cite_22", "@cite_7", "@cite_28", "@cite_9", "@cite_21", "@cite_1", "@cite_17", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_31", "@cite_16", "@cite_13", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "2085179442", "1985004753", "2111155782", "", "211255588", "338146060", "", "1601290867", "2164987783", "2002582936", "", "2161176671", "2070482374", "120895409", "2001875440", "2027519294", "", "247432931" ], "abstract": [ "In this paper, we have studied the potential for secondary usage of radar bands by 3GPP LTE eNB devices in different scenarios, such as HeNB transmitters located at street level, HeNB transmitters located at high-rise buildings, macro LTE transmitter, etc. Different pathloss models are used to best suite the scenarios. By using different types of radar characteristics (e.g. radio navigations radars, meteorological radars, etc) and a protection requirement of −10dB Interference-to-Noise Ratio (INR), we have shown that in some scenarios, the required distances for adjacent channel radar usage in 2.7–2.9GHz band are quite reasonable. This means, in those scenarios, it could be possible to utilize the radar bands for secondary LTE systems. A protection margin of 18 to 20dB can be added for capturing the aggregate interference effects from multiple secondary interferers for downlink direction. More detailed system level investigations are required in this direction for further understanding the secondary usage in this band.", "Spectrum sharing is a promising solution for the problem of spectrum congestion. We consider a spectrum sharing scenario between a multiple-input multiple-output (MIMO) radar and Long Term Evolution (LTE) Advanced cellular system. In this paper, we consider resource allocation optimization problem with carrier aggregation. The LTE Advanced system has N BS base stations (BS) which it operates in the radar band on a sharing basis. 
Our objective is to allocate resources from the LTE Advanced carrier and the MIMO radar carrier to each user equipment (UE) in an LTE Advanced cell based on the running application of UE. Each user application is assigned a utility function based on the type of application. We propose a carrier aggregation resource allocation algorithm to allocate the LTE Advanced and the radar carriers' resources optimally among users based on the type of user application. The algorithm gives priority to users running inelastic traffic when allocating resources. Finally we present simulation results on the performance of the proposed carrier aggregation resource allocation algorithm.", "In this paper, we investigate the impact of aggregate interference in a secondary spectrum access system. Particularly, meteorological radar operating in 5.6 GHz band is considered to be a primary user. Secondary users are WLAN devices spreading in a large area which induce aggregate interference to the radar. We develop a mathematical model to derive the probability distribution function (PDF) of the aggregate interference. The derivation considers dynamic frequency selection (DFS) mechanism for the protection of the radar such that the transmission of each WLAN is regulated by an interference threshold. Numerical experiments are performed with various propagation environments and densities of WLAN devices. It is observed that the effect of aggregate interference is severe in a rural environment. The interference threshold for individual WLAN should be much lower than the maximum tolerable interference at the radar. Thus, only a limited number of WLANs can transmit at the same time. On the other hand, adverse effect of the aggregate interference is not shown in an urban environment, where up to 10 WLANs per square kilometer can use the radar spectrum without considering the aggregate interference. 
The framework discussed in this paper can readily be adapted to assess the aggregate interference for other types of radars.", "", "In response to proposals to introduce new radio systems into 3550–3650 MHz radio spectrum in the United States, the authors have performed measurements and analysis on effects of interference from a variety of radar waveforms to the performance of a Long Term Evolution (LTE) base station receiver. This work has been prompted by the possibility that LTE base station receivers may eventually share spectrum with radar operations in this spectrum range. The base station receiver that was tested used time division duplex (TDD) modulation. Radar pulse parameters used in this testing spanned the range of both existing and anticipated future radar systems in the 3100–3650 MHz spectrum range. LTE base station receiver data throughput rates, block error rates (BLER), and internal noise levels have been measured as functions of radar pulse parameters and the incident power level of radar pulses in the base station receiver. The authors do not determine the acceptability of radar interference effects on LTE base station performance. Rather, these data are presented for the use of spectrum managers and engineers who can use this information as a building block in the construction of frequency-and-distance separation curves for radar transmitters and LTE base station receivers, supporting possible future spectrum sharing at 3.5 GHz. Note: This report was reissued in May 2014 to correct the duty cycles of four radar interference waveforms that were misstated in the original version of this report. The error was due to a mistake in the equations on page 8, now corrected, in which a pulse repetition rate (PRR) variable was used instead of a pulse repetition interval (PRI) variable. 
The waveforms’ pulse widths, pulse repetition rates, and chirp bandwidths were correctly reported.", "This report describes the methodology and results of an investigation into the source, mechanism, and solutions for radiofrequency (RF) interference to WSR-88D Next-Generation Weather Radars (NEXRADs). It shows that the interference source is nearby base stations transmitters in the Broadband Radio Service (BRS) and the Educational Broadband Service (EBS) and that their out-of-band (OOB) emissions can cause interference on NEXRAD receiver frequencies. The methodology for determining interference power levels and mitigation solutions is described. Several technical solutions that can mitigate the problem are shown to be effective. Trade-offs between effectiveness, difficulty, and costs of various solutions are described, but it is shown that there is always at least one effective technical solution. The report shows that careful planning and coordination between communication system service providers and Federal agencies operating nearby radars is important in the implementation of these interference-mitigation techniques. A number of the report’s interference mitigation options have already been implemented in several United States cities served by a BRS EBS licensee, at licensee WiMAX stations where NEXRAD radar operations are located nearby. As of the date of this report’s release, interference from the licensee’s WiMAX links to NEXRAD receivers in those markets has been successfully mitigated using the techniques described herein.", "", "This report describes the results of interference tests and measurements that have been performed on radar receivers that have various missions in several spectrum bands. Radar target losses have been measured under controlled conditions in the presence of radio frequency (RF) interference. 
Radar types that have been examined include short range and long range air traffic control; weather surveillance; and maritime navigation and surface search. Radar receivers experience loss of desired targets when interference from high duty cycle (more than about 1-3 ) communication-type signals is as low as -10 dB to -6 dB relative to radar receiver inherent noise levels. Conversely, radars perform robustly in the presence of low duty cycle (less than 1-3 ) signals such as those emitted by other radars. Target losses at low levels are insidious because they do not cause overt indications such as strobes on displays. Therefore operators are usually unaware that they are losing targets due to low-level interference. Interference can cause the loss of targets at any range. Low interference thresholds for communication-type signals, insidious behavior of target losses, and potential loss of targets at any range all combine to make low-level interference to radar receivers a very serious problem.", "The theoretical feasibility is explored of spectrum-sharing between radar and wireless communications systems via an interference mitigation processing approach. The new approach allows radar and wireless systems to operate at the same carrier frequency if the radar possesses a multiple-input multiple-output (MIMO) structure. A novel signal processing approach is developed for coherent MIMO radar that effectively minimizes the arbitrary interferences generated by wireless systems from any direction, while operating at the same frequency using cognitive radio technology. Various theoretical aspects of the new approach are investigated, and its effectiveness is further validated through simulation.", "The radio-frequency (RF) electromagnetic spectrum, extending from below 1 MHz to above 100 GHz, represents a precious resource. It is used for a wide range of purposes, including communications, radio and television broadcasting, radionavigation, and sensing. 
Radar represents a fundamentally important use of the electromagnetic (EM) spectrum, in applications which include air traffic control, geophysical monitoring of Earth resources from space, automotive safety, severe weather tracking, and surveillance for defense and security. Nearly all services have a need for greater bandwidth, which means that there will be ever-greater competition for this finite resource. The paper explains the nature of the spectrum congestion problem from a radar perspective, and describes a number of possible approaches to its solution both from technical and regulatory points of view. These include improved transmitter spectral purity, passive radar, and intelligent, cognitive approaches that dynamically optimize spectrum use.", "", "This paper considers opportunistic primary-secondary spectrum sharing when the primary is a rotating radar. A secondary device is allowed to transmit when its resulting interference will not exceed the radar's tolerable level, in contrast to current approaches that prohibit secondary transmissions if radar signals are detected at any time. We consider the case where an OFDMA based secondary system operates in non-contiguous cells, as might occur with a broadband hotspot service, or a cellular system that uses spectrum shared with radar to supplement its dedicated spectrum. It is shown that even fairly close to a radar, extensive secondary transmissions are possible, although with some interruptions and fluctuations as the radar rotates. For example, at 27 of the distance at which secondary transmissions will not affect the radar, on average, the achievable secondary data rates in down- and upstreams are around 100 and 63 of the one that will be achieved in dedicated spectrum, respectively. Moreover, extensive secondary transmissions are still possible even at different values of key system parameters, including cell radius, transmit power, tolerable interference level, and radar rotating period. 
By evaluating quality of service, it is found that spectrum shared with radar could be used efficiently for applications such as non-interactive video on demand, peer-to-peer file sharing, file transfers, automatic meter reading, and web browsing, but not for applications such as real-time transfers of small files and VoIP.", "This paper considers gray-space spectrum sharing when rotating radars are primary spectrum users, and multiple cells from one or more cellular networks are secondary users. A cellular network may share spectrum to supplement its dedicated spectrum, or provide a broadband hotspot service. A secondary device is allowed to transmit as long as cumulative interference is not harmful to nearby radars, probably because no radar is pointing its directional antenna at the device at this moment. This paper presents mechanisms that would support such sharing, and quantifies performance when spectrum is considered 100 utilized under traditional spectrum management. It is shown that the sharing allows cells to sustain significant mean data rates. For example, if 5 of a cellular network's cells need more capacity than dedicated spectrum can provide, a cell can get almost 1.2 bps Hz on average from shared spectrum. By evaluating quality of service, it is found that shared spectrum could be used efficiently for applications such as non-interactive video streaming, peer-to-peer file sharing, large file transfers, and web browsing, but not for applications such as real-time transfers of small files, and VoIP.", "Abstract This dissertation considers gray-space primary-secondary spectrum sharing, in which secondary devices are allowed to transmit when primary transmissions are strong enough that additional interference is tolerable. Various novel sharing mechanisms are proposed for two different types of primary system: cellular systems, and rotating radars. 
Both cases when primary and secondary systems cooperate (cooperative sharing), and when they do not (coexistent sharing) are considered. Even in the scenario where radars are densely packed, a secondary transmitter can get almost 1.2 bps Hz on average, when 5 of the transmitters are competing for the shared spectrum. One also shows the potential of sharing models in which a secondary system has information about a primary system, but does not cooperate in real time. It is found that even with fluctuations and interruptions in secondary transmissions while radars rotate, the shared spectrum could be used efficiently for applications that generate much of the traffic on mobile Internet, but not for real-time. For sharing with cellular systems, the efficiency of cooperative and coexistent sharing is compared. When both achievable secondary transmissions and primary power consumption are of concern, coexistent sharing is found to be as effective as cooperative sharing.", "In this paper, we quantify the temporal opportunities for secondary access to radar spectrum. Secondary users are assumed to be WLANs which opportunistically share the radar frequency band under the constraint that the aggregate interference does not harm radar operation. Each WLAN device employs dynamic frequency selection (DFS) as a mechanism to protect the radar from the interference. We also consider an advanced interference protection mechanism, which is termed temporal DFS. It exploits the temporal variation of interference power due to the periodic rotation of radar antenna. It is observed that the probability of accessing the radar spectrum is significantly higher when the temporal DFS is used compared to the conventional DFS. As a consequence, more WLANs can utilize the radar spectrum when the temporal DFS mechanism is considered. 
This shows that having better knowledge of the primary user activity can bring about the increased opportunity of secondary spectrum access to radar band, and thus improve the spectrum utilization.", "Radar bands have been suggested as one of the most promising candidates for spectrum sharing, as they occupy a considerable amount of spectrum, despite their usage efficiency being generally low. Although geo-location database and dynamic frequency selection were chosen by regulators as the preferred methods for enabling the coexistence between radar systems and low-power devices, their static nature and lack of flexibility may not enable an efficient utilization of this spectrum. In order to overcome this issue, we propose in this article a hybrid spectrum access technique, called database-aided sensing. We describe and test the ability of this technique to discern both spatial and temporal spectrum opportunities that arise from circularly and sector scanning radars. The obtained results were encouraging, taking into account the sensing sensitivity levels required to protect radar systems, even when the database awareness about the radio environment is limited.", "", "This technical memorandum documents the analysis methodology that NTIA developed and used in assessing interference from radio local area networks to 5 GHz radar systems." ] }
1602.00828
2285479881
Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.
The majority of existing literature @cite_11 @cite_13 @cite_46 @cite_26 @cite_45 @cite_33 @cite_22 @cite_15 @cite_53 @cite_32 @cite_60 @cite_39 @cite_10 @cite_0 @cite_29 @cite_24 @cite_58 @cite_30 deals with action recognition from a common viewpoint. While these approaches are quite successful at recognizing actions captured from similar viewpoints, their performance drops sharply as the viewpoint changes, due to the inherent view dependence of the features they use. To tackle this problem, geometry-based methods have been proposed for cross-view action recognition. @cite_27 introduced an action representation that captures the dramatic changes of an action using the view-invariant spatio-temporal curvature of its 2D trajectory. This method uses the trajectory of a single point (e.g. the hand centroid). Yilmaz and Shah @cite_4 extended this approach by tracking 2D points on human contours. Given the human contours for each frame of a video, they generate an action volume by computing point correspondences between consecutive contours. Maximum and minimum curvatures on the spatio-temporal action volume are then used as view-invariant action descriptors. However, these methods require robust interest point detection and tracking, which are still challenging problems.
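The trajectory-curvature representation of @cite_27 has a compact formulation: treating a tracked point as the space-time curve r(t) = (x(t), y(t), t), its curvature is κ = |r' × r''| / |r'|³, and curvature extrema mark the "dynamic instants" of an action. The following is a minimal finite-difference sketch of that quantity, an illustration of the formulation rather than the authors' implementation:

```python
import numpy as np

def spacetime_curvature(x, y, dt=1.0):
    """Curvature of the space-time curve r(t) = (x(t), y(t), t).

    Finite-difference sketch of kappa = |r' x r''| / |r'|^3.
    Curvature extrema correspond to 'dynamic instants'; the stretches
    between them are the 'intervals' of the action representation.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    t = np.arange(len(x)) * dt
    r = np.stack([x, y, t], axis=1)      # (N, 3) space-time curve
    r1 = np.gradient(r, dt, axis=0)      # first derivative r'
    r2 = np.gradient(r1, dt, axis=0)     # second derivative r''
    num = np.linalg.norm(np.cross(r1, r2), axis=1)
    return num / np.linalg.norm(r1, axis=1) ** 3
```

A straight-line, constant-speed trajectory yields zero curvature everywhere, while turns and speed changes produce the curvature peaks that @cite_27 use as action units.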
{ "cite_N": [ "@cite_30", "@cite_22", "@cite_29", "@cite_15", "@cite_58", "@cite_10", "@cite_4", "@cite_60", "@cite_39", "@cite_46", "@cite_26", "@cite_32", "@cite_27", "@cite_33", "@cite_53", "@cite_0", "@cite_24", "@cite_45", "@cite_13", "@cite_11" ], "mid": [ "", "1534763723", "2010676632", "1874503286", "2217325140", "", "", "2117082993", "2068611653", "2156135524", "", "", "2125854396", "2020163092", "2110142955", "2105101328", "", "2533739470", "2547599103", "2010399676" ], "abstract": [ "", "Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scale-invariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.", "We propose an algorithm which combines the discriminative information from depth images as well as from 3D joint positions to achieve high action recognition accuracy. To avoid the suppression of subtle discriminative information and also to handle local occlusions, we compute a vector of many independent local features. Each feature encodes spatiotemporal variations of depth and depth gradients at a specific space-time location in the action volume. 
Moreover, we encode the dominant skeleton movements by computing a local 3D joint position difference histogram. For each joint, we compute a 3D space-time motion volume which we use as an importance indicator and incorporate in the feature vector for improved action discrimination. To retain only the discriminant features, we train a random decision forest (RDF). The proposed algorithm is evaluated on three standard datasets and compared with nine state-of-the-art algorithms. Experimental results show that, on the average, the proposed algorithm outperform all other algorithms in accuracy and have a processing speed of over 112 frames second.", "Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images which change significantly with viewpoint. In contrast, we directly process the pointclouds and propose a new technique for action recognition which is more robust to noise, action speed and viewpoint variations. Our technique consists of a novel descriptor and keypoint detection algorithm. The proposed descriptor is extracted at a point by encoding the Histogram of Oriented Principal Components (HOPC) within an adaptive spatio-temporal support volume around that point. Based on this descriptor, we present a novel method to detect Spatio-Temporal Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that the proposed descriptor and STKP detector outperform state-of-the-art algorithms on three benchmark human activity datasets. We also introduce a new multiview public dataset and show the robustness of our proposed method to viewpoint variations.", "The articulated and complex nature of human actions makes the task of action recognition difficult. One approach to handle this complexity is dividing it to the kinetics of body parts and analyzing the actions based on these partial descriptors. 
We propose a joint sparse regression based learning method which utilizes the structured sparsity to model each action as a combination of multimodal features from a sparse set of body parts. To represent dynamics and appearance of parts, we employ a heterogeneous set of depth and skeleton based features. The proper structure of multimodal multipart features are formulated into the learning framework via the proposed hierarchical mixed norm, to regularize the structured features of each part and to apply sparsity between them, in favor of a group feature selection. Our experimental results expose the effectiveness of the proposed learning method in which it outperforms other methods in all three tested datasets while saturating one of them by achieving perfect accuracy.", "", "", "Recognition of human actions in a video acquired by a moving camera typically requires standard preprocessing steps such as motion compensation, moving object detection and object tracking. The errors from the motion compensation step propagate to the object detection stage, resulting in miss-detections, which further complicates the tracking stage, resulting in cluttered and incorrect tracks. Therefore, action recognition from a moving camera is considered very challenging. In this paper, we propose a novel approach which does not follow the standard steps, and accordingly avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories which are a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene. This is done in frames of unaligned video, and no object detection is required. In order to handle the moving camera, we propose a novel approach based on low rank optimization, where we decompose the trajectories into their camera-induced and object-induced components. 
Having obtained the relevant object motion trajectories, we compute a compact set of chaotic invariant features which captures the characteristics of the trajectories. Consequently, a SVM is employed to learn and recognize the human actions using the computed motion features. We performed intensive experiments on multiple benchmark datasets and two new aerial datasets called ARG and APHill, and obtained promising results.", "This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results.", "3D human pose recovery is considered as a fundamental step in view-invariant human action recognition. However, inferring 3D poses from a single view usually is slow due to the large number of parameters that need to be estimated and recovered poses are often ambiguous due to the perspective projection. We present an approach that does not explicitly infer 3D pose at each frame. 
Instead, from existing action models we search for a series of actions that best match the input sequence. In our approach, each action is modeled as a series of synthetic 2D human poses rendered from a wide range of viewpoints. The constraints on transition of the synthetic poses is represented by a graph model called Action Net. Given the input, silhouette matching between the input frames and the key poses is performed first using an enhanced Pyramid Match Kernel algorithm. The best matched sequence of actions is then tracked using the Viterbi algorithm. We demonstrate this approach on a challenging video sets consisting of 15 complex action classes.", "", "", "Analysis of human perception of motion shows that information for representing the motion is obtained from the dramatic changes in the speed and direction of the trajectory. In this paper, we present a computational representation of human action to capture these dramatic changes using spatio-temporal curvature of 2-D trajectory. This representation is compact, view-invariant, and is capable of explaining an action in terms of meaningful action units called dynamic instants and intervals. A dynamic instant is an instantaneous entity that occurs for only one frame, and represents an important change in the motion characteristics. An interval represents the time period between two dynamic instants during which the motion characteristics do not change. Starting without a model, we use this representation for recognition and incremental learning of human actions. The proposed method can discover instances of the same action performed by different people from different view points. Experiments on 47 actions performed by 7 individuals in an environment with no constraints shows the robustness of the proposed method.", "Local image features or interest points provide compact and abstract representations of patterns in an image. 
In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "In this paper, we present a novel approach for automatically learning a compact and yet discriminative appearance-based human action model. A video sequence is represented by a bag of spatiotemporal features called video-words by quantizing the extracted 3D interest points (cuboids) from the videos. Our proposed approach is able to automatically discover the optimal number of video-word clusters by utilizing maximization of mutual information(MMI). Unlike the k-means algorithm, which is typically used to cluster spatiotemporal cuboids into video words based on their appearance similarity, MMI clustering further groups the video-words, which are highly correlated to some group of actions. To capture the structural information of the learnt optimal video-word clusters, we explore the correlation of the compact video-word clusters. 
We use the modified correlogram, which is not only translation and rotation invariant, but also somewhat scale invariant. We extensively test our proposed approach on two publicly available challenging datasets: the KTH dataset and IXMAS multiview dataset. To the best of our knowledge, we are the first to try the bag of video-words related approach on the multiview dataset. We have obtained very impressive results on both datasets.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "", "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. 
Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.", "A prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, first, an action prototype tree is learned in a joint shape and motion space via hierarchical k-means clustering; then a lookup table of prototype-to-prototype distances is generated. During testing, based on a joint likelihood model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint likelihood, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance matrices used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in very challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 91.07 on a large gesture dataset (with dynamic backgrounds), 100 on the Weizmann action dataset and 95.77 on the KTH action dataset.", "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach by (2004) for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. 
Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure and orientation. We show that these features are useful for action recognition, detection and clustering. The method is fast, does not require video alignment and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, non-rigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action and low quality video" ] }
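The camera-motion correction in the improved-dense-trajectories abstract above rests on fitting a frame-to-frame homography to point matches and discarding trajectories consistent with it. A minimal NumPy sketch of that step, using a plain direct linear transform in place of the paper's RANSAC + human-detector pipeline (the function names and the tolerance are illustrative, not from the authors' code):

```python
import numpy as np

def fit_homography_dlt(src, dst):
    """Direct linear transform: fit H so that dst ~ H @ [src, 1] projectively."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of A, i.e. the last right singular vector
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def camera_consistent(src, dst, H, tol=1.0):
    """Flag matches whose reprojection error under H is small (camera motion)."""
    p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = p[:, :2] / p[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1) < tol
```

With RANSAC one would fit H on random 4-match subsets and keep the hypothesis with the most inliers; the least-squares fit over all matches here stands in for that. Trajectories flagged as camera-consistent would then be removed before descriptor computation.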
1602.00828
2285479881
Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.
Recently, transfer learning approaches have been employed to address cross-view action recognition by exploring some form of statistical connections between view-dependent features extracted from different viewpoints. A notable example of this category is the work of @cite_2 , who employed Maximum Margin Clustering to generate split-based features in the source view, then trained a classifier to predict split-based features in the target view. Liu et al. @cite_21 learned a cross-view bag of bilingual words using the simultaneous multiview observations of the same action. They represented the action videos by bilingual words in both views. Zheng @cite_9 proposed to build a transferable dictionary pair by forcing the videos of the same action to have the same sparse coefficients across different views. However, these methods require feature-to-feature correspondence at the frame-level or video-level during training, thereby limiting their applications.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_2" ], "mid": [ "2169560406", "2010243644", "" ], "abstract": [ "We present an approach to jointly learn a set of view-specific dictionaries and a common dictionary for cross-view action recognition. The set of view-specific dictionaries is learned for specific views while the common dictionary is shared across different views. Our approach represents videos in each view using both the corresponding view-specific dictionary and the common dictionary. More importantly, it encourages the set of videos taken from different views of the same action to have similar sparse representations. In this way, we can align view-specific features in the sparse feature spaces spanned by the view-specific dictionary set and transfer the view-shared features in the sparse feature space spanned by the common dictionary. Meanwhile, the incoherence between the common dictionary and the view-specific dictionary set enables us to exploit the discrimination information encoded in view-specific features and view-shared features separately. In addition, the learned common dictionary not only has the capability to represent actions from unseen views, but also makes our approach effective in a semi-supervised setting where no correspondence videos exist and only a few labels exist in the target view. Extensive experiments using the multi-view IXMAS dataset demonstrate that our approach outperforms many recent approaches for cross-view action recognition.", "In this paper, we present a novel approach to recognizing human actions from different views by view knowledge transfer. An action is originally modelled as a bag of visual-words (BoVW), which is sensitive to view changes. We argue that, as opposed to visual words, there exist some higher level features which can be shared across views and enable the connection of action models for different views. 
To discover these features, we use a bipartite graph to model two view-dependent vocabularies, then apply bipartite graph partitioning to co-cluster two vocabularies into visual-word clusters called bilingual-words (i.e., high-level features), which can bridge the semantic gap across view-dependent vocabularies. Consequently, we can transfer a BoVW action model into a bag-of-bilingual-words (BoBW) model, which is more discriminative in the presence of view changes. We tested our approach on the IXMAS data set and obtained very promising results. Moreover, to further fuse view knowledge from multiple views, we apply a Locally Weighted Ensemble scheme to dynamically weight transferred models based on the local distribution structure around each test example. This process can further improve the average recognition rate by about 7%.", "" ] }
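The transferable dictionary pair of @cite_9 can be illustrated with a toy alternating scheme: paired source/target videos are forced to share one sparse code matrix while each view keeps its own dictionary. The sketch below (one ISTA step on the stacked problem followed by per-view least squares, under assumed data shapes) is a simplification for illustration, not the authors' optimization:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def dictionary_pair(Xs, Xt, n_atoms=8, lam=0.1, iters=100, seed=0):
    """Learn per-view dictionaries Ds, Dt that share one sparse code matrix A,
    so paired videos from the two views get identical representations."""
    rng = np.random.default_rng(seed)
    d, n = Xs.shape
    Ds = rng.standard_normal((d, n_atoms))
    Dt = rng.standard_normal((d, n_atoms))
    A = np.zeros((n_atoms, n))
    for _ in range(iters):
        # sparse-code step: one ISTA update on the stacked two-view problem
        D = np.vstack([Ds, Dt])
        X = np.vstack([Xs, Xt])
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        A = soft_threshold(A - D.T @ (D @ A - X) / L, lam / L)
        # dictionary step: ridge-stabilized least squares, one per view
        G = A @ A.T + 1e-8 * np.eye(n_atoms)
        Ds = np.linalg.solve(G, A @ Xs.T).T
        Dt = np.linalg.solve(G, A @ Xt.T).T
    return Ds, Dt, A
```

Because both views reconstruct through the same A, a classifier trained on source-view codes can be applied directly to target-view codes, which is the correspondence-based transfer these methods rely on.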
1602.00828
2285479881
Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a non-linear virtual path that connects the views. The R-NKTM is learned from dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms existing state-of-the-art.
More recently, @cite_14 proposed cross-view action recognition by discovering discriminative 3D Poselets and learning the geometric relations among different views. However, they learn a separate transformation between different views using a linear SVM solver. Thus, many linear transformations are learned for mapping between different views. For action recognition from unseen views, all learned transformations are used for exhaustive matching and the results are combined with an AND-OR Graph (AOG). This method also requires 3D skeleton data for training, which is not always available. @cite_56 proposed to find the best match for each training video in large mocap sequences using a Non-linear Circulant Temporal Encoding method. The best-matched mocap sequence and its projections at different angles are then used to generate more synthetic training data, making the process computationally expensive. Moreover, the success of this approach depends on the availability of a large mocap dataset which covers a wide range of human actions @cite_56 @cite_8 .
{ "cite_N": [ "@cite_8", "@cite_14", "@cite_56" ], "mid": [ "", "2949462896", "2057232399" ], "abstract": [ "", "Existing methods on video-based action recognition are generally view-dependent, i.e., performing recognition from the same views seen in the training data. We present a novel multiview spatio-temporal AND-OR graph (MST-AOG) representation for cross-view action recognition, i.e., the recognition is performed on the video from an unknown and unseen view. As a compositional model, MST-AOG compactly represents the hierarchical combinatorial structures of cross-view actions by explicitly modeling the geometry, appearance and motion variations. This paper proposes effective methods to learn the structure and parameters of MST-AOG. The inference based on MST-AOG enables action recognition from novel views. The training of MST-AOG takes advantage of the 3D human skeleton data obtained from Kinect cameras to avoid annotating enormous multi-view video frames, which is error-prone and time-consuming, but the recognition does not need 3D information and is based on 2D video input. A new Multiview Action3D dataset has been created and will be released. Extensive experiments have demonstrated that this new action representation significantly improves the accuracy and robustness for cross-view action recognition on 2D videos.", "We describe a new approach to transfer knowledge across views for action recognition by using examples from a large collection of unlabelled mocap data. We achieve this by directly matching purely motion based features from videos to mocap. Our approach recovers 3D pose sequences without performing any body part tracking. We use these matches to generate multiple motion projections and thus add view invariance to our action recognition model. We also introduce a closed form solution for approximate non-linear Circulant Temporal Encoding (nCTE), which allows us to efficiently perform the matches in the frequency domain. 
We test our approach on the challenging unsupervised modality of the IXMAS dataset, and use publicly available motion capture data for matching. Without any additional annotation effort, we are able to significantly outperform the current state of the art." ] }
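The closed-form frequency-domain matching in @cite_56's nCTE boils down to scoring every circular time-shift between a video's feature sequence and a mocap sequence with a single FFT-based cross-correlation. A bare-bones sketch of that core trick (the paper's regularization and non-linearity are omitted):

```python
import numpy as np

def best_circular_shift(query, ref):
    """Score every circular time-shift of `ref` against `query` at once
    via FFT cross-correlation; sequences are (T, D) feature matrices."""
    Fq = np.fft.rfft(query, axis=0)
    Fr = np.fft.rfft(ref, axis=0)
    # per-dimension circular cross-correlation, summed over feature dims
    scores = np.fft.irfft((np.conj(Fq) * Fr).sum(axis=1), n=len(query))
    return int(np.argmax(scores)), scores
```

Matching a query against a large mocap collection then costs one FFT per sequence plus elementwise products, rather than explicitly sliding one sequence over the other.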
1602.00577
2254313155
In this paper, we propose a fast deep learning method for object saliency detection using convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify the input images based on the pixel-wise gradients to reduce a pre-defined cost function, which is defined to measure the class-specific objectness and clamp the class-irrelevant outputs to maintain image background. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. We further apply SLIC superpixels and LAB color-based low-level saliency features to smooth and refine the gradients. Our methods are quite computationally efficient, much faster than other deep learning based saliency methods. Experimental results on two benchmark tasks, namely Pascal VOC 2012 and MSRA10k, have shown that our proposed methods can generate high-quality saliency maps, at least comparable with many slow and complicated deep learning methods. Compared with pure low-level methods, our approach excels in handling many difficult images, which contain complex backgrounds, highly-variable salient objects, multiple objects, and/or very small salient objects.
Recently, some deep learning techniques have been proposed for image saliency detection and semantic image segmentation @cite_7 @cite_15 @cite_19 @cite_0 . These methods typically use DCNNs to examine a large number of region proposals from other algorithms, and use the features generated by DCNNs along with other post-stage classifiers to localize the target objects. More recent methods tend to directly generate pixel-wise saliency maps or segmentations @cite_19 . For example, in @cite_0 , two DCNNs are applied to model the global context and local context for each superpixel in the input images, and the two levels of context are finally combined to generate the pixel-wise multi-context saliency maps.
{ "cite_N": [ "@cite_0", "@cite_19", "@cite_15", "@cite_7" ], "mid": [ "1942214758", "1507506748", "2102605133", "2963542991" ], "abstract": [ "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. 
We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "Abstract: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network.
This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classification tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat." ] }
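The gradient-descent saliency scheme in the abstract above (reward the target class score, clamp the class-irrelevant outputs, backpropagate the cost to the pixels, and iterate) can be demonstrated on a tiny hand-rolled two-layer network. Everything here, the network, the cost weights, and the step size, is a simplified stand-in for the paper's DCNN setup:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 64)) * 0.1   # stand-in "network" weights
W2 = rng.standard_normal((3, 16)) * 0.1

def cost_and_input_grad(x, target):
    h = W1 @ x
    a = np.maximum(h, 0.0)                 # ReLU
    z = W2 @ a                             # class logits
    others = np.delete(np.arange(len(z)), target)
    # reward class-specific objectness, clamp the class-irrelevant outputs
    cost = -z[target] + 0.5 * np.sum(z[others] ** 2)
    dz = np.zeros_like(z)
    dz[target] = -1.0
    dz[others] = z[others]
    dx = W1.T @ ((W2.T @ dz) * (h > 0))    # backprop the cost to the pixels
    return cost, dx

x = rng.standard_normal(64)                # "image" as a flat pixel vector
x0, costs = x.copy(), []
for _ in range(100):                       # iteratively modify the input
    c, g = cost_and_input_grad(x, target=0)
    costs.append(c)
    x -= 0.5 * g
saliency = np.abs(x - x0)                  # per-pixel change = raw saliency map
```

In the paper this raw map is then smoothed and refined with SLIC superpixels and low-level color cues; here only the gradient-descent core is shown.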
1602.00749
2269723790
This paper proposes a new framework for RGB-D-based action recognition that takes advantage of hand-designed features from skeleton data and deeply learned features from depth maps, and exploits effectively both the local and global temporal information. Specifically, depth and skeleton data are first augmented for deep learning and to make the recognition insensitive to view variance. Secondly, depth sequences are segmented using the hand-crafted features based on skeleton joint motion histograms to exploit the local temporal information. All training segments are clustered using an Infinite Gaussian Mixture Model (IGMM) through Bayesian estimation and labelled for training Convolutional Neural Networks (ConvNets) on the depth maps. Thus, a depth sequence can be reliably encoded into a sequence of segment labels. Finally, the sequence of labels is fed into a joint Hidden Markov Model and Support Vector Machine (HMM-SVM) classifier to explore the global temporal information for final recognition.
Human action recognition from RGB-D data has been extensively researched and much progress has been made since the seminal work @cite_19 . One of the main advantages of depth data is that they can effectively capture 3D structural information. To date, many effective hand-crafted features have been proposed based on depth data, such as Action Graph (AG) @cite_19 , Depth Motion Maps (DMMs) @cite_22 , Histogram of Oriented 4D Normals (HON4D) @cite_11 , Depth Spatio-Temporal Interest Point (DSTIP) @cite_5 and Super Normal Vector (SNV) @cite_4 . Recent work @cite_15 showed that features from depth maps can also be deeply learned using ConvNets.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_19", "@cite_5", "@cite_15", "@cite_11" ], "mid": [ "2091911422", "2008824967", "2144380653", "2162415752", "2001696967", "2085735683" ], "abstract": [ "This paper presents a new framework for human activity recognition from video sequences captured by a depth camera. We cluster hypersurface normals in a depth sequence to form the polynormal which is used to jointly characterize the local motion and shape information. In order to globally capture the spatial and temporal orders, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time grids. We then propose a novel scheme of aggregating the low-level polynormals into the super normal vector (SNV) which can be seen as a simplified version of the Fisher kernel representation. In the extensive experiments, we achieve classification results superior to all previous published results on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.", "In this paper, we propose an effective method to recognize human actions from sequences of depth maps, which provide additional body shape and motion information for action recognition. In our approach, we project depth maps onto three orthogonal planes and accumulate global activities through entire video sequences to generate the Depth Motion Maps (DMM). Histograms of Oriented Gradients (HOG) are then computed from DMM as the representation of an action video. The recognition results on Microsoft Research (MSR) Action3D dataset show that our approach significantly outperforms the state-of-the-art methods, although our representation is much more compact. In addition, we investigate how many frames are required in our framework to recognize actions on the MSR Action3D dataset. 
We observe that a short sub-sequence of 30-35 frames is sufficient to achieve comparable results to those obtained using entire video sequences.", "This paper presents a method to recognize human actions from sequences of depth maps. Specifically, we employ an action graph to model explicitly the dynamics of the actions and a bag of 3D points to characterize a set of salient postures that correspond to the nodes in the action graph. In addition, we propose a simple, but effective projection based sampling scheme to sample the bag of 3D points from the depth maps. Experimental results have shown that over 90% recognition accuracy was achieved by sampling only about 1% of the 3D points from the depth maps. Compared to the 2D silhouette based recognition, the recognition errors were halved. In addition, we demonstrate the potential of the bag of points posture model to deal with occlusions through simulation.
We also give detailed comparisons with other features and an analysis of the choice of parameters as guidance for applications.", "In this paper, we propose to adopt ConvNets to recognize human actions from depth maps on relatively small datasets based on Depth Motion Maps (DMMs). In particular, three strategies are developed to effectively leverage the capability of ConvNets in mining discriminative features for recognition. Firstly, different viewpoints are mimicked by rotating virtual cameras around the subject represented by the 3D points of the captured depth maps. This not only synthesizes more data from the captured ones, but also makes the trained ConvNets view-tolerant. Secondly, DMMs are constructed and further enhanced for recognition by encoding them into Pseudo-RGB images, turning the spatial-temporal motion patterns into textures and edges. Lastly, through transfer learning of the models originally trained over ImageNet for image classification, the three ConvNets are trained independently on the color-coded DMMs constructed in three orthogonal planes. The proposed algorithm was extensively evaluated on MSRAction3D, MSRAction3DExt and UTKinect-Action datasets and achieved the state-of-the-art results on these datasets.", "We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently, thus, they often fail to capture the complex joint shape-motion cues at pixel-level. In contrast, we describe the depth sequence using a histogram capturing the distribution of the surface normal orientation in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions for the 4D normal. We initialize the projectors using the vertices of a regular polychoron.
Consequently, we refine the projectors using a discriminative density measure, such that additional projectors are induced in the directions where the 4D normals are more dense and discriminative. Through extensive experiments, we demonstrate that our descriptor better captures the joint shape-motion cues in the depth sequence, and thus outperforms the state-of-the-art on all relevant benchmarks." ] }
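As a concrete reference point for the DMM feature cited above, a minimal sketch: every depth frame is projected onto three orthogonal planes and absolute frame-to-frame differences are accumulated on each projection. The histogram-based side/top projections here are a simplification of the published recipe:

```python
import numpy as np

def depth_motion_maps(depth, n_bins=32):
    """depth: (T, H, W) depth-map sequence -> dict of front/side/top DMMs."""
    T, H, W = depth.shape
    # quantize depth values into bins so they can index the side/top planes
    z = np.clip((depth / (depth.max() + 1e-9) * (n_bins - 1)).astype(int),
                0, n_bins - 1)
    front = depth                                  # XY projection is the map itself
    side = np.zeros((T, H, n_bins))                # YZ: per-row depth histogram
    top = np.zeros((T, n_bins, W))                 # XZ: per-column depth histogram
    for t in range(T):
        for i in range(H):
            side[t, i] = np.bincount(z[t, i], minlength=n_bins)
        for j in range(W):
            top[t, :, j] = np.bincount(z[t, :, j], minlength=n_bins)
    # accumulate absolute frame-to-frame motion energy on each projection
    return {name: np.abs(np.diff(proj, axis=0)).sum(axis=0)
            for name, proj in [("front", front), ("side", side), ("top", top)]}
```

In the DMM pipeline, HOG descriptors computed over the three accumulated maps would then form the compact action representation.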
1602.00749
2269723790
This paper proposes a new framework for RGB-D-based action recognition that takes advantage of hand-designed features from skeleton data and deeply learned features from depth maps, and exploits effectively both the local and global temporal information. Specifically, depth and skeleton data are first augmented for deep learning and to make the recognition insensitive to view variance. Secondly, depth sequences are segmented using the hand-crafted features based on skeleton joint motion histograms to exploit the local temporal information. All training segments are clustered using an Infinite Gaussian Mixture Model (IGMM) through Bayesian estimation and labelled for training Convolutional Neural Networks (ConvNets) on the depth maps. Thus, a depth sequence can be reliably encoded into a sequence of segment labels. Finally, the sequence of labels is fed into a joint Hidden Markov Model and Support Vector Machine (HMM-SVM) classifier to explore the global temporal information for final recognition.
Skeleton data, which is usually extracted from depth maps @cite_18 , provides a high-level representation of human motion. Many hand-crafted skeleton-based features have also been developed in the past. They include EigenJoints @cite_7 , Moving Pose @cite_0 , Histogram of Oriented Displacement (HOD) @cite_13 , Frequent Local Parts (FLPs) @cite_17 and Points in Lie Group (PLP) @cite_27 . Recently, the work @cite_16 demonstrated that features from skeletons can also be directly learned by deep learning methods. However, skeleton data can be quite noisy, especially when occlusion exists and the subjects are not in a standing position facing the RGB-D camera.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_0", "@cite_27", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2060280062", "2073139398", "1983592444", "2048821851", "1950788856", "1596216457", "2949938134" ], "abstract": [ "We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.", "In this paper, we propose an effective method to recognize human actions from 3D positions of body joints. With the release of RGBD sensors and associated SDK, human body joints can be extracted in real time with reasonable accuracy. In our method, we propose a new type of features based on position differences of joints, EigenJoints, which combine action information including static posture, motion, and offset. We further employ the Naive-Bayes-Nearest-Neighbor (NBNN) classifier for multi-class action classification. 
The recognition results on the Microsoft Research (MSR) Action3D dataset demonstrate that our approach significantly outperforms the state-of-the-art methods. In addition, we investigate how many frames are necessary for our method to recognize actions on the MSR Action3D dataset. We observe that 15–20 frames are sufficient to achieve comparable results to those using the entire video sequences.
In this paper, we propose a new skeletal representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skeletal representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision.
We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "Creating descriptors for trajectories has many applications in robotics, human motion analysis and video copy detection. Here, we propose a novel descriptor for 2D trajectories: Histogram of Oriented Displacements (HOD). Each displacement in the trajectory votes with its length in a histogram of orientation angles. 3D trajectories are described by the HOD of their three projections. We use HOD to describe the 3D trajectories of body joints to recognize human actions, which is a challenging machine vision task, with applications in human-robot/machine interaction, interactive entertainment, multimedia information retrieval, and surveillance. The descriptor is fixed-length, scale-invariant and speed-invariant. Experiments on MSR-Action3D and HDM05 datasets show that the descriptor outperforms the state-of-the-art when using off-the-shelf classification tools.
Finally, frequent pattern mining is employed to mine the most frequent and relevant (discriminative, representative and non-redundant) states of parts in continuous several frames. These parts are referred to as Frequent Local Parts or FLPs. The FLPs allow us to build powerful bag-of-FLP-based action representation. This new representation yields state-of-the-art results on MSR DailyActivity3D and MSR ActionPairs3D." ] }
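The HOD construction summarised in the abstracts above (each displacement voting with its length in an orientation-angle histogram, with a 3D trajectory described by the HOD of its three 2D projections) can be sketched in a few lines of NumPy. The bin count and the sum-normalisation used here for scale invariance are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def hod_2d(traj, n_bins=8):
    """Histogram of Oriented Displacements for a 2D trajectory (T x 2).

    Each displacement between consecutive points votes with its length
    in an orientation-angle histogram (bin count is an assumption here).
    """
    d = np.diff(traj, axis=0)                      # displacement vectors
    angles = np.arctan2(d[:, 1], d[:, 0])          # orientations in [-pi, pi)
    lengths = np.linalg.norm(d, axis=1)            # vote weight = length
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(-np.pi, np.pi), weights=lengths)
    s = hist.sum()
    return hist / s if s > 0 else hist             # scale invariance

def hod_3d(traj, n_bins=8):
    """3D trajectory (T x 3) described by the HOD of its three projections."""
    xy, yz, xz = traj[:, [0, 1]], traj[:, [1, 2]], traj[:, [0, 2]]
    return np.concatenate([hod_2d(p, n_bins) for p in (xy, yz, xz)])

# A straight rightward 2D trajectory puts all of its mass in one bin
h = hod_2d(np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]))
```

Concatenating such per-joint descriptors over all body joints would give a fixed-length action representation suitable for an off-the-shelf classifier, in the spirit of the abstract.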
1602.00749
2269723790
This paper proposes a new framework for RGB-D-based action recognition that takes advantage of hand-designed features from skeleton data and deeply learned features from depth maps, and exploits effectively both the local and global temporal information. Specifically, depth and skeleton data are firstly augmented for deep learning and for making the recognition insensitive to view variance. Secondly, depth sequences are segmented using the hand-crafted features based on skeleton joints motion histogram to exploit the local temporal information. All training segments are clustered using an Infinite Gaussian Mixture Model (IGMM) through Bayesian estimation and labelled for training Convolutional Neural Networks (ConvNets) on the depth maps. Thus, a depth sequence can be reliably encoded into a sequence of segment labels. Finally, the sequence of labels is fed into a joint Hidden Markov Model and Support Vector Machine (HMM-SVM) classifier to explore the global temporal information for final recognition.
Joint use of both depth maps and skeleton data has also been attempted. @cite_12 designed a 3D Local Occupancy Patterns (LOP) feature to describe the local depth appearance at joint locations and capture the information for subject-object interactions. In their subsequent work, @cite_24 proposed an Actionlet Ensemble Model (AEM) which combines both the LOP feature and the Temporal Pyramid Fourier (TPF) feature. @cite_1 presented two sets of features extracted from depth maps and skeletons, which are fused at the kernel level by using the Multiple Kernel Learning (MKL) technique. Wu and Shao @cite_20 adopted Deep Belief Networks (DBN) and 3D Convolutional Neural Networks (3DCNN) for skeletal and depth data respectively to extract high-level spatial-temporal features.
{ "cite_N": [ "@cite_24", "@cite_20", "@cite_1", "@cite_12" ], "mid": [ "2110819057", "1539648621", "2077309078", "2143267104" ], "abstract": [ "Human action recognition is an important yet challenging task. Human actions usually involve human-object interactions, highly articulated motions, high intra-class variations, and complicated temporal structures. The recently developed commodity depth sensors open up new possibilities of dealing with this problem by providing 3D depth data of the scene. This information not only facilitates a rather powerful human motion capturing technique, but also makes it possible to efficiently model human-object interactions and intra-class variations. In this paper, we propose to characterize the human actions with a novel actionlet ensemble model, which represents the interaction of a subset of human joints. The proposed model is robust to noise, invariant to translational and temporal misalignment, and capable of characterizing both the human motion and the human-object interactions. We evaluate the proposed approach on three challenging action recognition datasets captured by Kinect devices, a multiview action recognition dataset captured with Kinect device, and a dataset captured by a motion capture system. The experimental evaluations show that the proposed approach achieves superior performance to the state-of-the-art algorithms.", "The purpose of this paper is to describe a novel method called Deep Dynamic Neural Networks(DDNN) for the Track 3 of the Chalearn Looking at People 2014 challenge [1]. A generalised semi-supervised hierarchical dynamic framework is proposed for simultaneous gesture segmentation and recognition taking both skeleton and depth images as input modules. First, Deep Belief Networks(DBN) and 3D Convolutional Neural Networks (3DCNN) are adopted for skeletal and depth data accordingly to extract high level spatio-temporal features.
Then the learned representations are used for estimating emission probabilities of the Hidden Markov Models to infer an action sequence. The framework can be easily extended by including an ergodic state to segment and recognise video sequences by a frame-to-frame mechanism, rendering it possible for online segmentation and recognition for diverse input modules. Some normalisation details pertaining to preprocessing raw features are also discussed. This purely data-driven approach achieves 0.8162 score in this gesture spotting challenge. The performance is on par with a variety of the state-of-the-art hand-tuned-feature approaches and other learning-based methods, opening the doors for using deep learning techniques to explore time series multimodal data.", "This paper presents two sets of features, shape representation and kinematic structure, for human activity recognition using a sequence of RGB-D images. The shape features are extracted using the depth information in the frequency domain via spherical harmonics representation. The other features include the motion of the 3D joint positions (i.e. the end points of the distal limb segments) in the human body. Both sets of features are fused using the Multiple Kernel Learning (MKL) technique at the kernel level for human activity recognition. Our experiments on three publicly available datasets demonstrate that the proposed features are robust for human activity recognition and particularly when there are similarities among the actions.", "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. 
In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system. The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms." ] }
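The "skeleton joints motion histogram" used above to segment depth sequences is not fully specified in this record; a minimal sketch of such a hand-crafted feature, with an assumed bin count, motion range and normalisation, might look like the following.

```python
import numpy as np

def joint_motion_histogram(joints, n_bins=10, max_motion=1.0):
    """Minimal sketch of a joints-motion histogram feature.

    joints: (T, J, 3) array of 3D joint positions over T frames.
    Per-frame motion magnitudes of every joint are pooled into a
    histogram; the bin edges and normalisation are assumptions,
    not the paper's exact design.
    """
    disp = np.diff(joints, axis=0)                 # (T-1, J, 3) displacements
    mag = np.linalg.norm(disp, axis=2).ravel()     # per-joint motion magnitudes
    hist, _ = np.histogram(mag, bins=n_bins, range=(0.0, max_motion))
    return hist / max(hist.sum(), 1)               # normalised feature vector

# A motionless skeleton puts all of its mass in the first (near-zero) bin
still = np.zeros((5, 20, 3))
h = joint_motion_histogram(still)
```

Segment boundaries could then be placed where consecutive windows' histograms diverge, which is one plausible way to exploit the local temporal information the abstract describes.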
1602.00904
2262056862
Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions, the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available for the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.
Finally, the decision step in an SSVEP BCI system is performed by applying a classification procedure. More specifically, in @cite_39 classifiers such as the Support Vector Machine (SVM), Linear Discriminant Analysis (LDA) and Extreme Learning Machines (ELM) are used. SVM and LDA are the most popular classifiers in the SSVEP community and have been used in numerous works @cite_39 @cite_34 @cite_26 @cite_12 . Furthermore, an adaptive network-based fuzzy inference system classifier is used in @cite_4 , and neural networks (NN) have been used in @cite_12 . In @cite_31 a statistical test is utilized to perform the decision, while in @cite_1 a set of rules is applied to spectral features. Canonical Correlation Analysis (CCA) is also used at this stage of the procedure: in @cite_3 correlation indexes produced by CCA are used to perform the decision, while in @cite_24 @cite_14 more advanced variants of CCA are adopted to produce similar indexes. Finally, a similar approach is proposed in @cite_30 , where a sparse regression model is fitted to the EEG data and the regression coefficients are utilized for the decision.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_4", "@cite_14", "@cite_1", "@cite_3", "@cite_39", "@cite_24", "@cite_31", "@cite_34", "@cite_12" ], "mid": [ "2052894394", "2023666981", "", "2143183535", "2022562397", "2056983531", "", "2079223014", "2122515425", "", "2331156488" ], "abstract": [ "Abstract Steady-state visual evoked potential (SSVEP) has been increasingly used for the study of brain–computer interface (BCI). How to recognize SSVEP with shorter time and lower error rate is one of the key points to develop a more efficient SSVEP-based BCI. To achieve this goal, we make use of the sparsity constraint of the least absolute shrinkage and selection operator (LASSO) for the extraction of more discriminative features of SSVEP, and then we propose a LASSO model using the linear regression between electroencephalogram (EEG) recordings and the standard square-wave signals of different frequencies to recognize SSVEP without the training stage. In this study, we verified the proposed LASSO model offline with the EEG data of nine healthy subjects in contrast to canonical correlation analysis (CCA). In the experiment, when a shorter time window was used, we found that the LASSO model yielded better performance in extracting robust and detectable features of SSVEP, and the information transfer rate obtained by the LASSO model was significantly higher than that of the CCA. Our proposed method can assist to reduce the recording time without sacrificing the classification accuracy and is promising for a high-speed SSVEP-based BCI.", "Brain-computer interfaces (BCI) are communication systems that allow people to send messages or commands without movement. BCIs rely on different types of signals in the electroencephalogram (EEG), typically P300s, steady-state visually evoked potentials (SSVEP), or event-related desynchronization (ERD). Early BCI systems were often evaluated with a selected group of subjects. 
Also, many articles do not mention data from subjects who performed poorly. These and other factors have made it difficult to estimate how many people could use different BCIs. The present study explored how many subjects could use an SSVEP BCI. We recorded data from 53 subjects while they participated in 1 to 4 runs that were each 4 minutes long. During these runs, the subjects focused on one of four LEDs that each flickered at a different frequency. The 8 channel EEG data were analyzed with a minimum energy parameter estimation algorithm and classified with linear discriminant analysis into one of the four classes. On-line results showed that SSVEP BCIs could provide effective communication for all 53 subjects, resulting in a grand average accuracy of 95.5 . 96.2 of the subjects reached an accuracy above 80 , and nobody was below 60 . This study showed that SSVEP based BCI systems can reach very high accuracies after only a very short training period. The SSVEP approach worked for all participating subjects, who attained accuracy well above chance level. This is important because it shows that SSVEP BCIs could provide communication for some users when other approaches might not work for them.", "", "Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8–15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. 
To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of 33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits/min. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.", "In this paper, a new brain computer interface (BCI) speller, named DTU BCI speller, is introduced. It is based on the steady-state visual evoked potential (SSVEP) and features dictionary support. The system focuses on simplicity and user friendliness by using a single electrode for the signal acquisition and displays stimuli on a liquid crystal display (LCD). Nine healthy subjects participated in writing full sentences after a five minutes introduction to the system, and obtained an information transfer rate (ITR) of 21.94 ± 15.63 bits/min. The average amount of characters written per minute (CPM) is 4.90 ± 3.84 with a best case of 8.74 CPM. All subjects reported systematically on different user friendliness measures, and the overall results indicated the potentials of the DTU BCI Speller system.
For subjects with high classification accuracies, the introduced dictionary approach greatly reduced the time it took to write full sentences.", "Platt's probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "", "Canonical correlation analysis (CCA) between recorded electroencephalogram (EEG) and designed reference signals of sine-cosine waves usually works well for steady-state visual evoked potential (SSVEP) recognition in brain-computer interface (BCI) application. However, using the reference signals of sine-cosine waves without subject-specific and inter-trial information can hardly give the optimal recognition accuracy, due to possible overfitting, especially within a short time window length. This paper introduces an L1-regularized multiway canonical correlation analysis (L1-MCCA) for reference signal optimization to improve the SSVEP recognition performance further. A multiway extension of the CCA, called MCCA, is first presented, in which collaborative CCAs are exploited to optimize the reference signals in correlation analysis for SSVEP recognition alternatingly from the channel-way and trial-way arrays of constructed EEG tensor. L1-regularization is subsequently imposed on the trial-way array optimization in the MCCA, and hence results in the more powerful L1-MCCA with function of effective trial selection. Both the proposed MCCA and L1-MCCA methods are validated for SSVEP recognition with EEG data from 10 healthy subjects, and compared to the ordinary CCA without reference signal optimization. Experimental results show that the MCCA significantly outperforms the CCA for SSVEP recognition.
The L1-MCCA further improves the recognition accuracy which is significantly higher than that of the MCCA.", "In this paper, novel methods for detecting steady-state visual evoked potentials using multiple electroencephalogram (EEG) signals are presented. The methods are tailored for brain-computer interfacing, where fast and accurate detection is of vital importance for achieving high information transfer rates. High detection accuracy using short time segments is obtained by finding combinations of electrode signals that cancel strong interference signals in the EEG data. Data from a test group consisting of 10 subjects are used to evaluate the new methods and to compare them to standard techniques. Using 1-s signal segments, six different visual stimulation frequencies could be discriminated with an average classification accuracy of 84 . An additional advantage of the presented methodology is that it is fully online, i.e., no calibration data for noise estimation, feature extraction, or electrode selection is needed", "", "In recent years, Brain Computer Interface (BCI) systems based on Steady-State Visual Evoked Potential (SSVEP) have received much attentions. This study tries to develop a classifier, which can provide higher classification accuracy for multiclass SSVEP data. Four different flickering frequencies in low frequency region were used to elicit the SSVEPs and were displayed on a Liquid Crystal Display (LCD) monitor using LabVIEW. The Electroencephalogram (EEG) signals recorded from the occipital region were first segmented into 1 second window and features were extracted using Fast Fourier Transform (FFT). One-Against-All (OAA), a popular strategy for multiclass Support Vector Machines (SVM) is compared with Artificial Neural Network (ANN) models on the basis of SSVEP classifier accuracies. OAA SVM classifier had got an average accuracy of 88.55 for SSVEP classification over 10 subjects. 
Based on this study, it is found that for SSVEP classification the OAA-SVM classifier can provide better results." ] }
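The CCA-based decision indexes discussed in the related work above can be illustrated with a minimal frequency detector: the decision is the candidate stimulation frequency whose sine/cosine reference set has the largest canonical correlation with the multichannel EEG segment. The harmonic count and the synthetic test signal below are illustrative assumptions, not any paper's exact protocol.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y.

    Computed as the top singular value of Qx^T Qy, where Qx and Qy are
    orthonormal bases of the centred data (a standard CCA identity).
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_ssvep_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine reference set is
    most correlated with the multichannel EEG segment (T x C)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        ref = np.column_stack(
            [np.sin(2 * np.pi * k * f * t) for k in range(1, n_harmonics + 1)] +
            [np.cos(2 * np.pi * k * f * t) for k in range(1, n_harmonics + 1)])
        scores.append(max_canonical_corr(eeg, ref))
    return candidate_freqs[int(np.argmax(scores))]

# Synthetic 2-channel "EEG" flickering at 10 Hz plus noise (toy data)
rng = np.random.RandomState(0)
fs = 256
t = np.arange(256) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + 0.3),
                       np.cos(2 * np.pi * 10 * t)]) + 0.5 * rng.randn(256, 2)
detected = cca_ssvep_detect(eeg, fs, [8.0, 10.0, 12.0, 15.0])
```

More elaborate variants such as MCCA, L1-MCCA and FBCCA, cited above, refine the reference signals or combine filter-bank sub-band correlations, but keep this same correlation-index decision rule at their core.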
1602.00904
2262056862
Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions, the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available for the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.
The existence of various options for the implementation of each submodule has motivated a non-trivial number of comparative studies of BCI systems in the literature. In @cite_12 a comparative study was presented with respect to the classification technique; however, the comparison was limited to SVM and NN, and in the feature extraction stage only features produced by the FFT were used. In @cite_39 a more exhaustive comparative study has been presented. More specifically, in the feature extraction stage three different feature sets were produced based on spectral analysis, filterbank theory and the time-frequency domain. In addition, in the feature selection stage three approaches were used: two filters, Pearson's filter and the Davies-Bouldin (DB) index, and one wrapper algorithm. A more thorough comparative study of BCI systems is presented in @cite_45 ; that work concentrates on numerous classification algorithms and their application in various BCI systems.
{ "cite_N": [ "@cite_45", "@cite_12", "@cite_39" ], "mid": [ "2075647286", "2331156488", "" ], "abstract": [ "In this paper we review classification algorithms used to design brain–computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.", "In recent years, Brain Computer Interface (BCI) systems based on Steady-State Visual Evoked Potential (SSVEP) have received much attentions. This study tries to develop a classifier, which can provide higher classification accuracy for multiclass SSVEP data. Four different flickering frequencies in low frequency region were used to elicit the SSVEPs and were displayed on a Liquid Crystal Display (LCD) monitor using LabVIEW. The Electroencephalogram (EEG) signals recorded from the occipital region were first segmented into 1 second window and features were extracted using Fast Fourier Transform (FFT). One-Against-All (OAA), a popular strategy for multiclass Support Vector Machines (SVM) is compared with Artificial Neural Network (ANN) models on the basis of SSVEP classifier accuracies. OAA SVM classifier had got an average accuracy of 88.55 for SSVEP classification over 10 subjects. Based on this study, it is found that for SSVEP classification OAA -SVM classifier can provide better results", "" ] }
1602.00351
2262925392
Learning for maximizing AUC performance is an important research problem in Machine Learning and Artificial Intelligence. Unlike traditional batch learning methods for maximizing AUC, which often suffer from poor scalability, recent years have witnessed some emerging studies that attempt to maximize AUC by single-pass online learning approaches. Despite the encouraging results reported, the existing online AUC maximization algorithms often adopt simple online gradient descent approaches that fail to exploit the geometrical knowledge of the data observed during the online learning process, and thus could suffer from relatively large regret. To address the above limitation, in this work we explore a novel algorithm of Adaptive Online AUC Maximization (AdaOAM), which employs an adaptive gradient method that exploits the knowledge of historical gradients to perform more informative online learning. The new adaptive updating strategy of AdaOAM is less sensitive to parameter settings and maintains the same time complexity as previous non-adaptive counterparts. Additionally, we extend the algorithm to handle high-dimensional sparse data (SAdaOAM) and address sparsity in the solution by performing lazy gradient updating. We analyze the theoretical bounds and evaluate the empirical performance of both algorithms on various types of data sets. The encouraging empirical results obtained clearly highlight the effectiveness and efficiency of the proposed algorithms.
Online learning has been extensively studied in the machine learning community @cite_1 @cite_25 @cite_35 @cite_15 @cite_34 , mainly due to its high efficiency and scalability to large-scale learning tasks. Unlike conventional batch learning methods, which assume all training instances are available prior to the learning phase, online learning considers one instance at a time and updates the model sequentially and iteratively. Online learning is therefore ideally suited to tasks in which data arrives sequentially. A number of first-order algorithms have been proposed, including the well-known Perceptron algorithm @cite_24 and the Passive-Aggressive (PA) algorithm @cite_25 . Although PA introduces the concept of "maximum margin" for classification, it fails to control the direction and scale of parameter updates during the online learning phase. To address this issue, recent years have witnessed second-order online learning algorithms @cite_36 @cite_40 @cite_7 @cite_12 , which exploit parameter confidence information to improve online learning performance. Further, to solve cost-sensitive classification tasks on the fly, researchers have also proposed several novel online learning algorithms that directly optimize more meaningful cost-sensitive metrics @cite_26 @cite_28 @cite_3 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_7", "@cite_36", "@cite_28", "@cite_1", "@cite_3", "@cite_24", "@cite_40", "@cite_15", "@cite_34", "@cite_25", "@cite_12" ], "mid": [ "2097645432", "", "", "", "2152929147", "1570963478", "652172809", "2040870580", "1966771059", "", "2005650084", "", "" ], "abstract": [ "In most kernel based online learning algorithms, when an incoming instance is misclassified, it will be added into the pool of support vectors and assigned with a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient since when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also tries to update the weight for one of the existing support vectors. We show that the mistake bound can be improved by the proposed online learning method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than the state-of-the-art online learning algorithms. The source code is available to public at http: www.cais.ntu.edu.sg chhoi DUOL .", "", "", "", "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. 
However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5 out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.", "1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix.", "Although both cost-sensitive classification and online learning have been well studied separately in data mining and machine learning, there was very few comprehensive study of cost-sensitive online classification in literature. 
In this paper, we formally investigate this problem by directly optimizing cost-sensitive measures for an online classification task. As the first comprehensive study, we propose the Cost-Sensitive Double Updating Online Learning (CSDUOL) algorithms, which explores a recent double updating technique to tackle the online optimization task of cost-sensitive classification by maximizing the weighted sum or minimizing the weighted misclassification cost. We theoretically analyze the cost-sensitive measure bounds of the proposed algorithms, extensively examine their empirical performance for cost-sensitive online classification tasks, and finally demonstrate the application of our technique to solve online anomaly detection tasks.", "", "We present AROW, an online learning algorithm for binary and multiclass problems that combines large margin training, confidence weighting, and the capacity to handle non-separable data. AROW performs adaptive regularization of the prediction function upon seeing each new instance, allowing it to perform especially well in the presence of label noise. We derive mistake bounds for the binary and multiclass settings that are similar in form to the second order perceptron bound. Our bounds do not assume separability. We also relate our algorithm to recent confidence-weighted online learning techniques. Empirical evaluations show that AROW achieves state-of-the-art performance on a wide range of binary and multiclass tasks, as well as robustness in the face of non-separable data.", "", "In this paper, we propose a novel machine learning framework called \"Online Transfer Learning\" (OTL), which aims to attack an online learning task on a target domain by transferring knowledge from some source domain. 
We do not assume data in the target domain follows the same distribution as that in the source domain, and the motivation of our work is to enhance a supervised online learning task on a target domain by exploiting the existing knowledge that had been learnt from training data in source domains. OTL is in general very challenging since data in both source and target domains not only can be different in their class distributions, but also can be diverse in their feature representations. As a first attempt to this new research problem, we investigate two different settings of OTL: (i) OTL on homogeneous domains of common feature space, and (ii) OTL across heterogeneous domains of different feature spaces. For each setting, we propose effective OTL algorithms to solve online classification tasks, and show some theoretical bounds of the algorithms. In addition, we also apply the OTL technique to attack the challenging online learning tasks with concept-drifting data streams. Finally, we conduct extensive empirical studies on a comprehensive testbed, in which encouraging results validate the efficacy of our techniques.", "", "" ] }
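The Passive-Aggressive update mentioned in the related work above can be sketched concretely. This is the standard PA-I rule for binary classification (make the smallest weight update that satisfies the margin constraint, capped by an aggressiveness parameter C); the toy data stream is purely illustrative.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) step.

    If the example (x, y) suffers hinge loss, move w just far enough to
    satisfy the unit-margin constraint, with the step size capped by C.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss > 0.0:
        tau = min(C, loss / np.dot(x, x))  # closed-form PA-I step size
        w = w + tau * y * x
    return w

# Stream a tiny linearly separable dataset (illustrative, not from the
# paper) through the online update for a few epochs.
X = np.array([[1.0, 1.0], [2.0, 0.5], [-1.0, -1.5], [-2.0, -0.5]])
Y = np.array([1, 1, -1, -1])
w = np.zeros(2)
for _ in range(10):
    for x, y in zip(X, Y):
        w = pa_update(w, x, y)
preds = np.sign(X @ w)
```

Second-order methods such as AROW, cited above, refine this scheme by also maintaining confidence (covariance) information over the weights, which controls the direction and scale of each update.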
1602.00193
2282971212
WeChat is a mobile messaging application that has 549 million active users as of Q1 2015, and “WeChat Moments” (WM) serves its social-networking function that allows users to post share links of web pages. WM differs from the other social networks as it imposes many restrictions on the information diffusion process to mitigate the information overload. In this paper, we conduct a measurement study on information diffusion in the WM network by crawling and analyzing the spreading statistics of more than 160,000 pages that involve approximately 40 million users. Specifically, we identify the relationship of the number of posted pages and the number of views, the diffusion path length, the similarity and distribution of users' locations as well as their connections with the GDP of the users' province. For each individual WM page, we measure its temporal characteristics (e.g., the life time, the popularity within a time period); for each individual user, we evaluate how many of, or how likely, one's friends will view his posted pages. Our results will help the business to decide when and how to release the marketing pages over WM for better publicity.
Over the last few years, the rising popularity of open social networks has raised many new research problems. Researchers have focused on the analysis, measurement, and experimental study of information diffusion, user behavior, community structure, advertising, etc. in Twitter @cite_5 , Facebook @cite_9 , Sina Weibo @cite_18 @cite_2 , and even technical communities such as Stack Overflow @cite_14 . In these open social networks, a user's post or page can be seen by anyone unless the user enforces an access restriction. This greatly stimulates users to share and spread information in the network and has boosted the prosperity of open social networks; however, it may also lead to information overload as well as privacy concerns.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_9", "@cite_2", "@cite_5" ], "mid": [ "2053241453", "2134406267", "2099556653", "1564105441", "2101196063" ], "abstract": [ "In this article, we analyze and compare user behavior on two different microblogging platforms: (1) Sina Weibo which is the most popular microblogging service in China and (2) Twitter. Such a comparison has not been done before at this scale and is therefore essential for understanding user behavior on microblogging services. In our study, we analyze more than 40 million microblogging activities and investigate microblogging behavior from different angles. We (i) analyze how people access microblogs and (ii) compare the writing style of Sina Weibo and Twitter users by analyzing textual features of microposts. Based on semantics and sentiments that our user modeling framework extracts from English and Chinese posts, we study and compare (iii) the topics and (iv) sentiment polarities of posts on Sina Weibo and Twitter. Furthermore, (v) we investigate the temporal dynamics of the microblogging behavior such as the drift of user interests over time. Our results reveal significant differences in the microblogging behavior on Sina Weibo and Twitter and deliver valuable insights for multilingual and culture-aware user modeling based on microblogging data. We also explore the correlation between some of these differences and cultural models from social science research.", "Question answering (Q&A) websites are now large repositories of valuable knowledge. While most Q&A sites were initially aimed at providing useful answers to the question asker, there has been a marked shift towards question answering as a community-driven knowledge creation process whose end product can be of enduring value to a broad audience. 
As part of this shift, specific expertise and deep knowledge of the subject at hand have become increasingly important, and many Q&A sites employ voting and reputation mechanisms as centerpieces of their design to help users identify the trustworthiness and accuracy of the content. To better understand this shift in focus from one-off answers to a group knowledge-creation process, we consider a question together with its entire set of corresponding answers as our fundamental unit of analysis, in contrast with the focus on individual question-answer pairs that characterized previous work. Our investigation considers the dynamics of the community activity that shapes the set of answers, both how answers and voters arrive over time and how this influences the eventual outcome. For example, we observe significant assortativity in the reputations of co-answerers, relationships between reputation and answer speed, and that the probability of an answer being chosen as the best one strongly depends on temporal characteristics of answer arrivals. We then show that our understanding of such properties is naturally applicable to predicting several important quantities, including the long-term value of the question and its answers, as well as whether a question requires a better answer. Finally, we discuss the implications of these results for the design of Q&A sites.", "This study examines the relationship between use of Facebook, a popular online social network site, and the formation and maintenance of social capital. In addition to assessing bonding and bridging social capital, we explore a dimension of social capital that assesses one’s ability to stay connected with members of a previously inhabited community, which we call maintained social capital. 
Regression analyses conducted on results from a survey of undergraduate students (N = 286) suggest a strong association between use of Facebook and the three types of social capital, with the strongest relationship being to bridging social capital. In addition, Facebook usage was found to interact with measures of psychological well-being, suggesting that it might provide greater benefits for users experiencing low self-esteem and low life satisfaction.", "The new social media such as Twitter and Sina Weibo has become an increasingly popular channel for spreading influence, challenging traditional media such as TVs and newspapers. The most influential and verified users, also called big-V accounts on Sina Weibo often attract million of followers and fans, creating massive “celebrity-centric” social networks on the social media, which play a key role in disseminating breaking news, latest events, and controversial opinions on social issues. Given the importance of these accounts, it is very crucial to understand social networks and user influence of these accounts and profile their followers' behaviors. Towards this end, this paper monitors a selected group of influential users on Sina Weibo and collects their tweet streams as well as retweeting and commenting activities on these tweets from their followers. Our analysis on tweet data streams from Sina Weibo reveals when and what the followers comment on the tweets of these influential users, and discovers different temporal patterns and word diversity in the comments. Based on the insight gained from follower characteristics, we further develop simple and intuitive algorithms for classifying the followers into spammers and normal fans. 
Our experimental results demonstrate that the proposed algorithms are able to achieve an average accuracy of 95.20% in detecting spammers from the followers who have commented on the tweets of these influential accounts.", "Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. 
To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it." ] }
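One quantity the WeChat study above measures, the diffusion path length, can be computed directly from a share cascade. A minimal sketch, assuming each view is logged as a (sharer, viewer) pair with one recorded sharer per viewer; the log format is an illustrative assumption, not WeChat's actual data:

```python
def diffusion_path_lengths(shares):
    """Depth of every user in a share cascade.

    `shares` is a list of (sharer, viewer) pairs recording that `viewer`
    saw the page through `sharer`'s post.  The original poster(s), who
    appear only as sharers, get depth 0; the maximum depth over all users
    is the diffusion path length of the page.
    """
    parent = {viewer: sharer for sharer, viewer in shares}
    depth = {}

    def d(u):
        if u not in parent:          # original poster, root of the cascade
            return 0
        if u not in depth:
            depth[u] = 1 + d(parent[u])
        return depth[u]

    users = set(parent) | {s for s, _ in shares}
    return {u: d(u) for u in users}
```

For a cascade a -> b -> c with a second branch a -> d, the function reports depths {a: 0, b: 1, c: 2, d: 1}, i.e. a diffusion path length of 2.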
1602.00193
2282971212
WeChat is a mobile messaging application that has 549 million active users as of Q1 2015, and “WeChat Moments” (WM) serves its social-networking function that allows users to post share links of web pages. WM differs from the other social networks as it imposes many restrictions on the information diffusion process to mitigate the information overload. In this paper, we conduct a measurement study on information diffusion in the WM network by crawling and analyzing the spreading statistics of more than 160,000 pages that involve approximately 40 million users. Specifically, we identify the relationship of the number of posted pages and the number of views, the diffusion path length, the similarity and distribution of users' locations as well as their connections with the GDP of the users' province. For each individual WM page, we measure its temporal characteristics (e.g., the life time, the popularity within a time period); for each individual user, we evaluate how many of, or how likely, one's friends will view his posted pages. Our results will help the business to decide when and how to release the marketing pages over WM for better publicity.
Anonymous social networks provide users with a platform to post messages and communicate without revealing their real identities. Researchers have studied several anonymous communication platforms such as Whisper @cite_17 , Yik Yak @cite_13 and YouBeMom @cite_16 . Unlike open social networks, these anonymous platforms offer strong protection for users' privacy: there is no way to figure out the real identity of any user, since information diffusion between two users is established via ``weak ties'' such as shared location, interests, or friends in common, rather than real-world friendships. However, such weak-tie relationships may discourage users from sharing a post or may even disrupt the diffusion of a message.
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2231206438", "1508801931", "2161044629" ], "abstract": [ "Moms are one of the fastest growing demographics online. While much is known about where they spend their time, little is known about how they spend it. Using a dataset of over 51 million posts and comments from the website YouBeMom.com, this paper explores what kinds of topics moms talk about when they are not constrained by norms and expectations of face-to-face culture. Results show that almost 5 of posts are about dh, or “dear husband,” but these posts tend to express more negative emotion than other posts. The average post is only 124 characters long and family and daily life are common categories of posting. This suggests that YouBeMom is used as a fast-paced social outlet that may not be available to moms in other parts of their lives. This work concludes with a discussion of anonymity and disinhibition and puts forth a new provocation that moms, too, spend time online “for the lulz.”", "This paper provides results from a pilot study of Yik Yak at one STEM-oriented research university in the Midwest. This paper reports on two aspects of the anonymous messaging app, Yik Yak: location-dependence and purposes for writing. Suggestions for improved research protocols are included.", "Social interactions and interpersonal communication has undergone significant changes in recent years. Increasing awareness of privacy issues and events such as the Snowden disclosures have led to the rapid growth of a new generation of anonymous social networks and messaging applications. By removing traditional concepts of strong identities and social links, these services encourage communication between strangers, and allow users to express themselves without fear of bullying or ret aliation. 
Despite millions of users and billions of monthly page views, there is little empirical analysis of how services like Whisper have changed the shape and content of social interactions. In this paper, we present results of the first large-scale empirical study of an anonymous social network, using a complete 3-month trace of the Whisper network covering 24 million whispers written by more than 1 million unique users. We seek to understand how anonymity and the lack of social links affect user behavior. We analyze Whisper from a number of perspectives, including the structure of user interactions in the absence of persistent social links, user engagement and network stickiness over time, and content moderation in a network with minimal user accountability. Finally, we identify and test an attack that exposes Whisper users to detailed location tracking. We have notified Whisper and they have taken steps to address the problem." ] }
1602.00487
2252678810
Most Software Defined Networks (SDN) traffic engineering applications use excessive and frequent global monitoring in order to find the optimal Quality-of-Service (QoS) paths for the current state of the network. In this work, we present the motivations, architecture and initial evaluation of a SDN application called Cognitive Routing Engine (CRE) which is able to find near-optimal paths for a user-specified QoS while using a very small monitoring overhead compared to global monitoring which is required to guarantee that optimal paths are found. Smaller monitoring overheads bring the advantage of smaller response time for the SDN controllers and switches. The initial evaluation of CRE on a SDN representation of the GEANT academic network shows that it is possible to find near-optimal paths with a small optimality gap of 1.65 while using 9.5 times less monitoring.
The Cognitive Routing Engine developed in this paper is similar to the routing algorithm used in Cognitive Packet Networks (CPNs) @cite_2 @cite_14 @cite_7 @cite_5 , but with the significant and important differences that in CPN, each router runs its own routing and learning algorithm for every source-destination pair and QoS requirement, and the routers collect the network state through smart packets and their associated acknowledgment packets. In contrast, CRE runs in a logically centralized manner on top of the SDN controller and uses exclusively OF mechanisms to gather network state. Hence, CRE can be viewed as an SDN-compatible version of CPN and can be used with the large number of SDN-enabled switches that have already been deployed. CPN has been used successfully in various settings @cite_10 , e.g., traffic engineering @cite_15 , routing in wireless @cite_1 and sensor @cite_4 networks, and defence against Denial-of-Service (DoS) attacks @cite_8 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_2", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "", "", "2029528936", "2107377751", "1980352564", "2010941377", "2137686796", "", "2163879575" ], "abstract": [ "", "", "Abstract We discuss a packet network architecture called a cognitive packet network (CPN), in which intelligent capabilities for routing and flow control are moved towards the packets, rather than being concentrated in the nodes and protocols. Our architecture contains “smart” and “dumb” packets, as well as acknowledgement packets. Smart CPN packets route themselves, and learn to avoid congestion and losses from their own observations about the network and from the experience of other packets. They use a reinforcement learning algorithm to route themselves based on a goal function which has been assigned to them for each connection. Dumb CPN packets of a specific quality of service (QoS) class use routes which have been selected by the smart packets (SPs) of that class. Acknowledgement (ACK) packets are generated by the destination when an SP arrives there; the ACK heads back to the source of the SP along the inverse route and is used to update mailboxes in CPN routers, as well as to provide source routing information for dumb packets. We first summarize the basic concepts behind CPN, and present simulations illustrating their performance for different QoS goals, and analytical results for best and worst case performance. We then describe a test-bed network we have designed and implemented in order to demonstrate these ideas. We provide measurement data on the test-bed to illustrate the capacity of the network to adapt to changes in traffic load and to failures of links. 
Finally, we use measurements to evaluate the impact of the ratio of smart to dumb packets on the end-to-end delay experienced by all of the packets.", "Denial of service attacks, viruses and worms are common tools for malicious adversarial behaviour in networks. We propose the use of our autonomic routing protocol, the cognitive packet network (CPN), as a means to defend nodes from distributed denial of service (DDoS) attacks, where one or more attackers generate flooding traffic from multiple sources towards selected nodes or IP addresses. We use both analytical and simulation modelling, and experiments on our CPN testbed, to evaluate the advantages and disadvantages of our approach in the presence of imperfect detection of DDoS attacks, and of false alarms.", "Abstract This paper proposes a new energy efficient algorithm to find and maintain routes in mobile ad hoc networks. The proposal borrows the notion of learning from a previous research on cognitive packet networks (CPN) to create a robust routing protocol. Our idea uses smart packets that exploit the use of unicasts and broadcasts to search for routes. Because unicasts impose lower overall overhead, their use is preferred. Smart packets learn how to make good unicast routing decisions by employing a combined goal function which considers both the energy stored in the nodes and path delay. The end result is a dynamic discovery of paths that offer an equilibrium between low-delay routes and an efficient use of network resources that extends the working lifetime of the network.", "We propose cognitive packet networks (CPN) in which intelligent capabilities for routing and flow control are concentrated in the packets, rather than in the nodes and protocols. Cognitive packets within a CPN route themselves. They are assigned goals before entering the network and pursue these goals adaptively. 
Cognitive packets learn from their own observations about the network and from the experience of other packets with whom they exchange information via mailboxes. Cognitive packets rely minimally on routers. This paper describes CPN and shows how learning can support intelligent behavior of cognitive packets.", "Reliability, security, scalability and QoS (quality-of-service) have become key issues as we envision the future Internet. The paper presents the \"cognitive packet network\" (CPN) architecture in which intelligent peer-to-peer routing is carried out with the help of \"smart packets\" based on best-effort QoS goals. Since packetized voice has stringent QoS requirements, we then discuss the choice of a \"goal\" and \"reward\" function for this application and present experiments we have conducted for \"voice over CPN\". Its performance is detailed via several measurements, and the resulting QoS is compared with that of the IP routing protocol under identical conditions showing the gain resulting from the use of CPN.", "", "Current and future multimedia networks require connections under specific quality of service (QoS) constraints which can no longer be provided by the best-effort Internet. Therefore, ‘smarter’ networks have been proposed in order to cover this need. The cognitive packet network (CPN) is a routing protocol that provides QoS-driven routing and performs self-improvement in a distributed manner, by learning from the experience of special packets, which gather on-line QoS measurements and discover new routes. The CPN was first introduced in 1999 and has been used in several applications since then. Here we provide a comprehensive survey of its variations, applications and experimental performance evaluations." ] }
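The per-node learning idea behind CPN routing described above can be sketched as follows. The real CPN uses random neural networks trained by reinforcement learning; this illustrative table-based stand-in only keeps a smoothed reward estimate per (destination, next-hop) pair, with smart packets exploring epsilon-greedily and acknowledgement packets feeding the measured delay back as the QoS reward:

```python
import random

class CpnStyleRouter:
    """Table-based sketch of CPN-style learned routing at one node.

    Smart packets pick next hops epsilon-greedily over a smoothed reward
    table; acknowledgements carrying the measured delay update the table.
    A simplification: actual CPN learns with random neural networks.
    """

    def __init__(self, neighbors, alpha=0.3, eps=0.1):
        self.neighbors = neighbors
        self.reward = {}                 # (dest, next_hop) -> smoothed reward
        self.alpha, self.eps = alpha, eps

    def next_hop(self, dest):
        if random.random() < self.eps:   # smart packets keep exploring
            return random.choice(self.neighbors)
        return max(self.neighbors,
                   key=lambda h: self.reward.get((dest, h), 0.0))

    def ack(self, dest, hop, delay):
        """Reward is the inverse of the measured delay (the QoS goal)."""
        r = 1.0 / delay
        old = self.reward.get((dest, hop), r)
        self.reward[(dest, hop)] = (1 - self.alpha) * old + self.alpha * r
```

With exploration disabled (eps=0), the router deterministically forwards toward the neighbor whose acknowledged delay has been lowest, which is the behavior dumb packets inherit from the routes discovered by smart packets.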
1602.00345
2272201970
High-performance computing platforms such as supercomputers have traditionally been designed to meet the compute demands of scientific applications. Consequently, they have been architected as producers and not consumers of data. The Apache Hadoop ecosystem has evolved to meet the requirements of data processing applications and has addressed many of the limitations of HPC platforms. There exist a class of scientific applications however, that need the collective capabilities of traditional high-performance computing environments and the Apache Hadoop ecosystem. For example, the scientific domains of bio-molecular dynamics, genomics and network science need to couple traditional computing with Hadoop Spark based analysis. We investigate the critical question of how to present the capabilities of both computing environments to such scientific applications. Whereas this questions needs answers at multiple levels, we focus on the design of resource management middleware that might support the needs of both. We propose extensions to the Pilot-Abstraction to provide a unifying resource management layer. This is an important step that allows applications to integrate HPC stages (e.g. simulations) to data analytics. Many supercomputing centers have started to officially support Hadoop environments, either in a dedicated environment or in hybrid deployments using tools such as myHadoop. This typically involves many intrinsic, environment-specific details that need to be mastered, and often swamp conceptual issues like: How best to couple HPC and Hadoop application stages? How to explore runtime trade-offs (data localities vs. data movement)? This paper provides both conceptual understanding and practical solutions to the integrated use of HPC and Hadoop environments.
Hadoop originally provided only a rudimentary resource management system; the YARN scheduler @cite_13 provides a robust application-level scheduling framework that addresses the increased requirements with respect to applications and infrastructure: more complex data localities (memory, SSDs, disk, rack, datacenter), long-lived services, periodic jobs, and interactive and batch jobs all need to be supported in the same environment. In contrast to traditional batch schedulers, YARN is optimized for data-intensive environments, supporting data locality and the management of the large numbers of fine-granular tasks found in data-parallel applications.
{ "cite_N": [ "@cite_13" ], "mid": [ "2105947650" ], "abstract": [ "The initial design of Apache Hadoop [1] was tightly focused on running massive, MapReduce jobs to process a web crawl. For increasingly diverse companies, Hadoop has become the data and computational agora---the de facto place where data and computational resources are shared and accessed. This broad adoption and ubiquitous usage has stretched the initial design well beyond its intended target, exposing two key shortcomings: 1) tight coupling of a specific programming model with the resource management infrastructure, forcing developers to abuse the MapReduce programming model, and 2) centralized handling of jobs' control flow, which resulted in endless scalability concerns for the scheduler. In this paper, we summarize the design, development, and current state of deployment of the next generation of Hadoop's compute platform: YARN. The new architecture we introduced decouples the programming model from the resource management infrastructure, and delegates many scheduling functions (e.g., task fault-tolerance) to per-application components. We provide experimental evidence demonstrating the improvements we made, confirm improved efficiency by reporting the experience of running YARN on production environments (including 100 of Yahoo! grids), and confirm the flexibility claims by discussing the porting of several programming frameworks onto YARN viz. Dryad, Giraph, Hoya, Hadoop MapReduce, REEF, Spark, Storm, Tez." ] }
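The data-locality support mentioned above can be illustrated with a toy placement routine that, in the spirit of locality-aware application schedulers, prefers a node holding the task's input block, then a node in the same rack, then any free node. The node/rack model here is an illustrative assumption, not YARN's actual API:

```python
def place_task(task_blocks, free_nodes, node_rack):
    """Pick a node for a task given which nodes store its input.

    Preference order: node-local > rack-local > off-rack.
    `task_blocks`: set of nodes that store the task's input block;
    `free_nodes`: nodes with a free slot; `node_rack`: node -> rack id.
    """
    local = [n for n in free_nodes if n in task_blocks]
    if local:
        return local[0], "node-local"
    data_racks = {node_rack[n] for n in task_blocks}
    rack_local = [n for n in free_nodes if node_rack[n] in data_racks]
    if rack_local:
        return rack_local[0], "rack-local"
    return free_nodes[0], "off-rack"
```

In a real scheduler the same preference is applied per-container and can be relaxed after a delay when no local slot frees up; the sketch only shows the ordering itself.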
1602.00216
2336288769
Data acquisition, storage and management have been improved, while the key factors of many phenomena are not well known. Consequently, irrelevant and redundant features artificially increase the size of datasets, which complicates learning tasks, such as regression. To address this problem, feature selection methods have been proposed. This paper introduces a new supervised filter based on the Morisita estimator of intrinsic dimension. It can identify relevant features and distinguish between redundant and irrelevant information. Besides, it offers a clear graphical representation of the results, and it can be easily implemented in different programming languages. Comprehensive numerical experiments are conducted using simulated datasets characterized by different levels of complexity, sample size and noise. The suggested algorithm is also successfully tested on a selection of real world applications and compared with RReliefF using extreme learning machine. In addition, a new measure of feature relevance is presented and discussed.
@cite_38 @cite_28 have opened up new prospects for the effective use of ID estimation in data mining by introducing the Fractal Dimension Reduction (FDR) algorithm. FDR carries out an unsupervised feature selection procedure that aims to remove all the redundant variables from a dataset. The fundamental idea is that fully redundant variables do not contribute to the value of the data ID.
{ "cite_N": [ "@cite_28", "@cite_38" ], "mid": [ "1762305352", "1861510175" ], "abstract": [ "Here we comment about the works that the original paper published in the 2000 Brazilian Symposium on Databases fostered in the Database and Images Group – GBdI, what by their turn motivated other researches abroad. It is shown that the Fractal Theory is indeed helpful to a large spectrum of activities required to manage large amounts of data. Research derived from the original paper includes speeding up similarity queries, designing of cost models and selectivity estimation for similarity queries, sampling on databases, performing attribute selection, identifying clusters of correlated attributes, as well as correlation clustering on large, high dimensional datasets.", "Dimensionality curse and dimensionality reduction are two key issues that have retained high interest for data mining, machine learning, multimedia indexing, and clustering. In this paper we present a fast, scalable algorithm to quickly select the most important attributes (dimensions) for a given set of n-dimensional vectors. In contrast to older methods, our method has the following desirable properties: (a) it does not do rotation of attributes, thus leading to easy interpretation of the resulting attributes; (b) it can spot attributes that have either linear or nonlinear correlations; (c) it requires a constant number of passes over the dataset; (d) it gives a good estimate on how many attributes should be kept. The idea is to use the ‘fractal' dimension of a dataset as a good approximation of its intrinsic dimension, and to drop attributes that do not affect it. We applied our method on real and synthetic datasets, where it gave fast and correct results." ] }
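The fundamental idea behind FDR, that removing a fully redundant variable leaves the data ID unchanged, can be sketched with a Grassberger-Procaccia-style correlation-dimension estimate. This estimator and the tolerance value are illustrative stand-ins for the paper's Morisita estimator and for FDR's actual box-counting procedure:

```python
import numpy as np

def correlation_dimension(X, r1=0.1, r2=0.4):
    """Correlation-dimension estimate of the intrinsic dimension of X:
    the slope of log C(r) between two radii, where C(r) is the fraction
    of point pairs closer than r."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    pairs = dist[np.triu_indices(len(X), k=1)]
    c1, c2 = (pairs < r1).mean(), (pairs < r2).mean()
    return np.log(c2 / c1) / np.log(r2 / r1)

def redundant_features(X, tol=0.1):
    """FDR-style check: a feature is flagged redundant if dropping it
    leaves the intrinsic dimension (almost) unchanged."""
    base = correlation_dimension(X)
    return [j for j in range(X.shape[1])
            if abs(correlation_dimension(np.delete(X, j, axis=1)) - base) < tol]
```

On points lying on a line embedded in 2D, either coordinate can be dropped without changing the estimated ID, so both are individually flagged redundant, whereas two independent uniform coordinates yield an ID close to 2 and neither removal is harmless. The full FDR algorithm removes attributes greedily and re-estimates after each removal.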
1601.08067
2257997837
Exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well-known that exact consensus is impossible in an asynchronous system in presence of faults, and in a synchronous system, n>=3f+1 is tight on the number of processes to achieve exact Byzantine consensus with scalar inputs, in presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n>=max(3f+1,(d+1)f+1) is tight to achieve exact Byzantine consensus in synchronous systems, and n>= (d+2)f+1 for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-Relaxed Byzantine vector consensus and (delta,p)-Relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of projection of the inputs onto any subset of k-dimensions of the vectors. For (delta,p)-consensus the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta>=0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d>=3.
The necessary and sufficient conditions for consensus in the presence of Byzantine failures under various underlying network conditions have been extensively studied in the literature. Lamport, Shostak and Pease @cite_8 developed the initial results on Byzantine fault-tolerant agreement. The FLP result @cite_16 showed that exact consensus is impossible in asynchronous systems even with a single process failure. To circumvent this obstacle, approximate consensus was proposed for asynchronous systems @cite_7 .
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_8" ], "mid": [ "2035362408", "2126906505", "2120510885" ], "abstract": [ "The consensus problem involves an asynchronous system of processes, some of which may be unreliable. The problem is for the reliable processes to agree on a binary value. In this paper, it is shown that every protocol for this problem has the possibility of nontermination, even with only one faulty process. By way of contrast, solutions are known for the synchronous case, the “Byzantine Generals” problem.", "This paper considers a variant of the Byzantine Generals problem, in which processes start with arbitrary real values rather than Boolean values or values from some bounded range, and in which approximate, rather than exact, agreement is the desired goal. Algorithms are presented to reach approximate agreement in asynchronous, as well as synchronous systems. The asynchronous agreement algorithm is an interesting contrast to a result of , who show that exact agreement with guaranteed termination is not attainable in an asynchronous system with as few as one faulty process. The algorithms work by successive approximation, with a provable convergence rate that depends on the ratio between the number of faulty processes and the total number of processes. Lower bounds on the convergence rate for algorithms of this form are proved, and the algorithms presented are shown to be optimal.", "Reliable computer systems must handle malfunctioning components that give conflicting information to different parts of the system. This situation can be expressed abstractly in terms of a group of generals of the Byzantine army camped with their troops around an enemy city. Communicating only by messenger, the generals must agree upon a common battle plan. However, one or more of them may be traitors who will try to confuse the others. The problem is to find an algorithm to ensure that the loyal generals will reach agreement. It is shown that, using only oral messages, this problem is solvable if and only if more than two-thirds of the generals are loyal; so a single traitor can confound two loyal generals. With unforgeable written messages, the problem is solvable for any number of generals and possible traitors. Applications of the solutions to reliable computer systems are then discussed." ] }
1601.08067
2257997837
Exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well-known that exact consensus is impossible in an asynchronous system in presence of faults, and in a synchronous system, n>=3f+1 is tight on the number of processes to achieve exact Byzantine consensus with scalar inputs, in presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n>=max(3f+1,(d+1)f+1) is tight to achieve exact Byzantine consensus in synchronous systems, and n>= (d+2)f+1 for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-Relaxed Byzantine vector consensus and (delta,p)-Relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of projection of the inputs onto any subset of k-dimensions of the vectors. For (delta,p)-consensus the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta>=0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d>=3.
When @math , the inputs are scalar, and all the @math norms are identical. For the case of @math , @math -relaxed consensus is equivalent to a problem that was addressed in prior work @cite_13 ; for this special case, it was shown that @math is necessary and sufficient @cite_13 .
{ "cite_N": [ "@cite_13" ], "mid": [ "2095464745" ], "abstract": [ "Easy proofs are given, of the impossibility of solving several consensus problems (Byzantine agreement, weak agreement, Byzantine firing squad, approximate agreement and clock synchronization) in certain communication graphs." ] }
1601.08067
2257997837
Exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well-known that exact consensus is impossible in an asynchronous system in presence of faults, and in a synchronous system, n>=3f+1 is tight on the number of processes to achieve exact Byzantine consensus with scalar inputs, in presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n>=max(3f+1,(d+1)f+1) is tight to achieve exact Byzantine consensus in synchronous systems, and n>= (d+2)f+1 for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-Relaxed Byzantine vector consensus and (delta,p)-Relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of projection of the inputs onto any subset of k-dimensions of the vectors. For (delta,p)-consensus the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta>=0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d>=3.
The Byzantine vector consensus (BVC) problem (also called multidimensional consensus) was introduced by Mendes and Herlihy @cite_19 and Vaidya and Garg @cite_2 . Tight bounds on the number of processes @math for Byzantine vector consensus have been obtained for both synchronous @cite_2 and asynchronous @cite_19 @cite_2 systems when the network is a complete graph. A necessary condition and a sufficient condition for iterative Byzantine vector consensus were derived by Vaidya @cite_4 ; however, there is a gap between these necessary and sufficient conditions.
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_2" ], "mid": [ "2010107859", "2951735990", "2950535145" ], "abstract": [ "The problem of e-approximate agreement in Byzantine asynchronous systems is well-understood when all values lie on the real line. In this paper, we generalize the problem to consider values that lie in Rm, for m ≥ 1, and present an optimal protocol in regard to fault tolerance. Our scenario is the following. Processes start with values in Rm, for m ≥ 1, and communicate via message-passing. The system is asynchronous: there is no upper bound on processes' relative speeds or on message delay. Some faulty processes can display arbitrarily malicious (i.e. Byzantine) behavior. Non-faulty processes must decide on values that are: (1) in Rm; (2) within distance e of each other; and (3) in the convex hull of the non-faulty processes' inputs. We give an algorithm with a matching lower bound on fault tolerance: we require n > t(m+2), where n is the number of processes, t is the number of Byzantine processes, and input and output values reside in Rm. Non-faulty processes send O(n2 d log(m e max δ(d): 1 ≤ d ≤ m )) messages in total, where δ(d) is the range of non-faulty inputs projected at coordinate d. The Byzantine processes do not affect the algorithm's running time.", "This work addresses Byzantine vector consensus (BVC), wherein the input at each process is a d-dimensional vector of reals, and each process is expected to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [3, 8]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space R^d, where d > 0 is a finite integer. Recent work [3, 8] has addressed Byzantine vector consensus in systems that can be modeled by a complete graph. This paper considers Byzantine vector consensus in incomplete graphs. In particular, we address a particular class of iterative algorithms in incomplete graphs, and prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for d > 1; thus, a weaker condition may potentially suffice for Byzantine vector consensus.", "Consider a network of n processes each of which has a d-dimensional vector of reals as its input. Each process can communicate directly with all the processes in the system; thus the communication network is a complete graph. All the communication channels are reliable and FIFO (first-in-first-out). The problem of Byzantine vector consensus (BVC) requires agreement on a d-dimensional vector that is in the convex hull of the d-dimensional input vectors at the non-faulty processes. We obtain the following results for Byzantine vector consensus in complete graphs while tolerating up to f Byzantine failures: * We prove that in a synchronous system, n >= max(3f+1, (d+1)f+1) is necessary and sufficient for achieving Byzantine vector consensus. * In an asynchronous system, it is known that exact consensus is impossible in presence of faulty processes. For an asynchronous system, we prove that n >= (d+2)f+1 is necessary and sufficient to achieve approximate Byzantine vector consensus. Our sufficiency proofs are constructive. We show sufficiency by providing explicit algorithms that solve exact BVC in synchronous systems, and approximate BVC in asynchronous systems. We also obtain tight bounds on the number of processes for achieving BVC using algorithms that are restricted to a simpler communication pattern." ] }
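The tight process-count bounds quoted above can be captured in a small helper. This is a hypothetical sketch (the function names are not from any cited work); it simply encodes the bounds n >= max(3f+1, (d+1)f+1) for synchronous systems and n >= (d+2)f+1 for asynchronous systems:

```python
# Tight bounds on the number of processes n for Byzantine vector
# consensus (BVC) with d-dimensional inputs and up to f Byzantine
# faults, as stated in the results quoted above.

def bvc_sync_bound(d: int, f: int) -> int:
    """Minimum n for exact BVC in a synchronous complete graph."""
    return max(3 * f + 1, (d + 1) * f + 1)

def bvc_async_bound(d: int, f: int) -> int:
    """Minimum n for approximate BVC in an asynchronous complete graph."""
    return (d + 2) * f + 1

# For scalar inputs (d = 1) both bounds collapse to the classical
# n >= 3f + 1; the dependence on d kicks in once d >= 2.
print(bvc_sync_bound(1, 1), bvc_async_bound(1, 1))  # 4 4
print(bvc_sync_bound(3, 2))                         # max(7, 9) = 9
```

This makes the motivation for the relaxed variants concrete: for large d, both bounds grow linearly in d, so the number of processes required becomes large.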
1601.08067
2257997837
Exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well-known that exact consensus is impossible in an asynchronous system in presence of faults, and in a synchronous system, n>=3f+1 is tight on the number of processes to achieve exact Byzantine consensus with scalar inputs, in presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n>=max(3f+1,(d+1)f+1) is tight to achieve exact Byzantine consensus in synchronous systems, and n>= (d+2)f+1 for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-Relaxed Byzantine vector consensus and (delta,p)-Relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of projection of the inputs onto any subset of k-dimensions of the vectors. For (delta,p)-consensus the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta>=0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d>=3.
A more general problem, called the Convex Hull Consensus problem, was introduced by Tseng and Vaidya @cite_9 . The tight bound on the number of processes @math is identical to that for the vector consensus case. Optimal fault-resilient algorithms were proposed for asynchronous systems under crash faults @cite_9 and Byzantine faults @cite_0 , respectively.
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2147056869", "2950382039" ], "abstract": [ "Much of the past work on asynchronous approximate Byzantine consensus has assumed scalar inputs at the nodes [4, 8]. Recent work has yielded approximate Byzantine consensus algorithms for the case when the input at each node is a d-dimensional vector, and the nodes must reach consensus on a vector in the convex hull of the input vectors at the fault-free nodes [9, 13]. The d-dimensional vectors can be equivalently viewed as points in the d-dimensional Euclidean space. Thus, the algorithms in [9, 13] require the fault-free nodes to decide on a point in the d-dimensional space. In our recent work [12], we proposed a generalization of the consensus problem, namely Byzantine convex consensus (BCC), which allows the decision to be a convex polytope in the d-dimensional space, such that the decided polytope is within the convex hull of the input vectors at the fault-free nodes. We also presented an asynchronous approximate BCC algorithm. In this paper, we propose a new BCC algorithm with optimal fault-tolerance that also agrees on a convex polytope that is as large as possible under adversarial conditions. Our prior work [12] does not guarantee the optimality of the output polytope.", "This paper defines a new consensus problem, convex consensus. Similar to vector consensus [13, 20, 19], the input at each process is a d-dimensional vector of reals (or, equivalently, a point in the d-dimensional Euclidean space). However, for convex consensus, the output at each process is a convex polytope contained within the convex hull of the inputs at the fault-free processes. We explore the convex consensus problem under crash faults with incorrect inputs, and present an asynchronous approximate convex consensus algorithm with optimal fault tolerance that reaches consensus on an optimal output polytope. Convex consensus can be used to solve other related problems. For instance, a solution for convex consensus trivially yields a solution for vector consensus. More importantly, convex consensus can potentially be used to solve other more interesting problems, such as convex function optimization [5, 4]." ] }
1601.08067
2257997837
Exact Byzantine consensus problem requires that non-faulty processes reach agreement on a decision (or output) that is in the convex hull of the inputs at the non-faulty processes. It is well-known that exact consensus is impossible in an asynchronous system in presence of faults, and in a synchronous system, n>=3f+1 is tight on the number of processes to achieve exact Byzantine consensus with scalar inputs, in presence of up to f Byzantine faulty processes. Recent work has shown that when the inputs are d-dimensional vectors of reals, n>=max(3f+1,(d+1)f+1) is tight to achieve exact Byzantine consensus in synchronous systems, and n>= (d+2)f+1 for approximate Byzantine consensus in asynchronous systems. Due to the dependence of the lower bound on vector dimension d, the number of processes necessary becomes large when the vector dimension is large. With the hope of reducing the lower bound on n, we consider two relaxed versions of Byzantine vector consensus: k-Relaxed Byzantine vector consensus and (delta,p)-Relaxed Byzantine vector consensus. In k-relaxed consensus, the validity condition requires that the output must be in the convex hull of projection of the inputs onto any subset of k-dimensions of the vectors. For (delta,p)-consensus the validity condition requires that the output must be within distance delta of the convex hull of the inputs of the non-faulty processes, where L_p norm is used as the distance metric. For (delta,p)-consensus, we consider two versions: in one version, delta is a constant, and in the second version, delta is a function of the inputs themselves. We show that for k-relaxed consensus and (delta,p)-consensus with constant delta>=0, the bound on n is identical to the bound stated above for the original vector consensus problem. On the other hand, when delta depends on the inputs, we show that the bound on n is smaller when d>=3.
@cite_10 study a new version of the approximate vector consensus problem, called @math -solo approximate agreement, in the context of a @math -solo execution model that yields the message-passing model and the traditional shared memory model as special cases. For @math -solo approximate agreement, the inputs are @math -dimensional vectors of reals, and the outputs must be in the convex hull of all the inputs. Up to @math processes may potentially choose as their outputs any arbitrary points in the convex hull of all inputs (not necessarily approximately equal to each other), while each remaining process must choose as its output a point within distance @math of the convex hull of the outputs of these @math processes (all outputs must be within the convex hull of the inputs). Although @cite_10 only consider crash failures, the problem can be easily extended to the Byzantine fault model. The relaxed consensus formulations considered in our work are different from @math -solo agreement.
{ "cite_N": [ "@cite_10" ], "mid": [ "1577529853" ], "abstract": [ "In a wait-free model any number of processes may crash. A process runs solo when it computes its local output without receiving any information from other processes, either because they crashed or they are too slow. While in wait-free shared-memory models at most one process may run solo in an execution, any number of processes may have to run solo in an asynchronous wait-free message-passing model." ] }
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
PPM-based schemes consist of a marking scheme and a reconstruction procedure, and are based on the assumption that large amounts of traffic are used in a (D)DoS attack @cite_7 . In their original work, @cite_7 propose that the PPM marking scheme is employed at all times in all the routers in the network, while the reconstruction procedure is employed by the victim in the event of an attack. The marking scheme ensures that every router embeds its own identity in packets randomly selected from the packets the router processes during routing. Since a large number of packets is received during an attack, there is a considerable chance that the victim will have received packets with markings from all the routers that were traversed by the attack packets. The victim then employs the reconstruction procedure, which uses the received marked attack packets to map out the paths from the victim to the attackers. The total number of received packets required to trace the attackers is referred to as the scheme's convergence time.
{ "cite_N": [ "@cite_7" ], "mid": [ "1967949770" ], "abstract": [ "This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or spoofed'', source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed post-mortem'' -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology." ] }
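The marking/reconstruction interplay described above can be illustrated with a toy simulation. This is a hedged sketch of the general PPM idea only, with hypothetical function names; it does not reproduce Savage et al.'s actual edge-sampling encoding:

```python
import random

def ppm_mark(path, p=0.04):
    """One packet traverses `path` (attacker-side router first).
    Each router overwrites the single mark field with probability p,
    so later routers can erase earlier marks (classic PPM re-marking)."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router
    return mark

def convergence_time(path, p=0.04, seed=0):
    """Packets the victim must receive before it has collected a mark
    from every router on the path (the scheme's convergence time)."""
    random.seed(seed)
    seen, packets = set(), 0
    while len(seen) < len(path):
        packets += 1
        mark = ppm_mark(path, p)
        if mark is not None:
            seen.add(mark)
    return packets

# Example: a 10-hop attack path. A router d hops before the victim
# keeps its mark only with probability p * (1 - p)**(d - 1), so many
# packets are needed before the farthest routers are observed.
print(convergence_time(list(range(10))))
```

The loop is essentially a coupon-collector process over the routers on the path, which is why convergence time is the natural performance metric for these schemes.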
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance a complicated and turns out to be a crucial open problem in this research area.
Multiple PPM-based schemes have since been introduced. One example is the Tabu Marking Scheme (TMS) @cite_1 . The authors point out that PPM is prone to information loss as a result of re-marking. Re-marking occurs when a router randomly selects a packet which already carries marking information from an upstream router and consequently overwrites this information. TMS tackles this problem by ensuring that the marking scheme forfeits the marking opportunity in the event that the randomly selected packet contains previous marking information. As a result, they report lower convergence times than PPM for DDoS attacks.
{ "cite_N": [ "@cite_1" ], "mid": [ "2038212958" ], "abstract": [ "The IP traceback is an important mechanism in defending against distributed denial-of-service (DDoS) attacks. In this paper, we propose a probabilistic packet marking (PPM) scheme, Tabu Marking Scheme (TMS), to speedup IP traceback. The key idea of \"tabu mark\" is that, a router still marks packets probabilistically, but regards a packet marked by an upstream router as a tabu and does not mark it again. We study the impact of the traffic aggregation on the convergence behavior of PPM schemes. Furthermore we derive a new analytical result on the partial coupon collection problem, which is a powerful tool applicable for computing the mean convergence time of any PPM scheme. Our study shows that the idea of \"tabu mark\" not only helps a PPM scheme that allows overwriting to reduce the convergence time under a DDoS attack, but also ensures the authentication of the routers' markings." ] }
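The tabu rule amounts to a one-line change to a toy marking loop: a router only marks when the field is still empty. Again a hypothetical sketch of the idea, not the paper's implementation:

```python
import random

def tms_mark(path, p=0.04):
    """Tabu marking: a router forfeits its marking opportunity when the
    packet already carries a mark, so upstream marks are never overwritten."""
    mark = None
    for router in path:
        if mark is None and random.random() < p:
            mark = router
    return mark

# With p = 1 the first router on the path always wins the mark and
# downstream routers leave it alone, illustrating that re-marking
# (and the resulting information loss) is eliminated.
print(tms_mark(["R1", "R2", "R3"], p=1.0))  # R1
```

Under this rule the i-th router from the attacker marks with probability p * (1 - p)**(i - 1), so marks from distant routers survive to the victim instead of being erased downstream.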
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
Another example is the Prediction Based Scheme (PBS), which also avoids re-marking @cite_28 . However, in contrast to TMS, the PBS marking scheme ensures that the router information is embedded in the next available packet if the randomly selected packet already carries marking information. The PBS marking scheme requires an extra space cost of 1 bit compared to PPM. Additionally, the reconstruction procedure utilizes both legitimate and attack traffic to reconstruct the attack graph.
{ "cite_N": [ "@cite_28" ], "mid": [ "2064328472" ], "abstract": [ "Sources of a Distributed Denial of Service (DDoS) attack can be identified by the traffic they generate using the IP traceback technique. Because of its relevance, the Probabilistic Packet Marking (PPM) schemes for IP traceback is an intensively researched field. In these schemes, routers are given the extra function of randomly selecting packets from those that go through them, to embed their address information in those selected packets. During or after the attack, the paths that were traversed by the attack traffic can be identified based on the router information in the marked packets. Since these schemes require a large number of received packets to trace an attacker successfully, they usually demand a high time and space complexity to trace many attackers as is the case in DDoS attacks. This is partly because the marking scheme allows remarking, where routers can overwrite previous marking information in a selected packet, which leads to data loss. We present the Prediction Based Scheme (PBS), which is an addition to the PPM schemes for IP tracetrack. The proposed approach consists of two parts: (a) a marking scheme, that reduces the number of packets required to trace a DoS attacker and (b) an extension to a traceback algorithm, whose main feature is to return a complete attack graph with fewer received packets than the traditional algorithm. The proposed marking scheme alleviates the problem of data loss by ensuring previous marking information is not overwritten. Additionally, the proposed traceback algorithm uses graphs built using legitimate traffic to predict the path taken by attack traffic. Results show that the marking scheme in PBS, compared to PPM, ensures that traceback is possible with about 54 as many total packets to achieve complete attack path construction, while the traceback algorithm takes about 33 as many marked packets." ] }
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
@cite_17 present the Rectified PPM (RPPM) algorithm. They point out that the reconstruction procedure used in PPM @cite_7 and the Advanced Marking Scheme (AMS) @cite_6 has an imprecise termination condition. They present a precise termination condition that enables complete attack graph reconstruction within a user-specified level of confidence.
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_17" ], "mid": [ "2150228605", "1967949770", "2114418170" ], "abstract": [ "Defending against distributed denial-of-service attacks is one of the hardest security problems on the Internet today. One difficulty to thwart these attacks is to trace the source of the attacks because they often use incorrect, or spoofed IP source addresses to disguise the true origin. In this paper, we present two new schemes, the advanced marking scheme and the authenticated marking scheme, which allow the victim to trace-back the approximate origin of spoofed IP packets. Our techniques feature low network and router overhead, and support incremental deployment. In contrast to previous work, our techniques have significantly higher precision (lower false positive rate) and fewer computation overhead for the victim to reconstruct the attack paths under large scale distributed denial-of-service attacks. Furthermore the authenticated marking scheme provides efficient authentication of routers' markings such that even a compromised router cannot forge or tamper markings from other uncompromised routers.", "This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or spoofed'', source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed post-mortem'' -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.", "The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm terminates, the algorithm guarantees that the constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm and show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under 1) different probabilities that a router marks the attack packets and 2) different structures of the network graph. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm." ] }
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
Multiple additional schemes have been proposed with the purpose of increasing the efficiency of PPM @cite_21 @cite_23 @cite_15 @cite_20 . Some of these schemes are analyzed in Table . The table compares ten PPM-based schemes in terms of features such as convergence time, the metrics used, underlying topologies, incremental deployment, re-marking, and upstream graph. The schemes considered therein are by no means an exhaustive study of all the PPM-based schemes, but the selection is large enough to show the discrepancy in both the metrics and the underlying topologies, as well as the inadequacy of those topologies, which makes a direct comparison difficult.
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_20", "@cite_23" ], "mid": [ "2134697892", "1959318606", "2152835245", "2095726404" ], "abstract": [ "This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to “link” message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.", "Traceback mechanisms are a critical part of the defense against IP spoofing and DoS attacks, as well as being of forensic value to law enforcement. Currently proposed IP traceback mechanisms are inadequate to address the traceback problem for the following reasons: they require DDoS victims to gather thousands of packets to reconstruct a single attack path; they do not scale to large scale distributed DoS attacks; and they do not support incremental deployment. We propose fast Internet traceback (FIT), a new packet marking approach that significantly improves IP traceback in several dimensions: (1) victims can identify attack paths with high probability after receiving only tens of packets, a reduction of 1-3 orders of magnitude compared to previous packet marking schemes; (2) FIT performs well even in the presence of legacy routers, allowing every FIT-enabled router in path to be identified; and (3) FIT scales to large distributed attacks with thousands of attackers. 
Compared with previous packet marking schemes, FIT represents a step forward in performance and deployability.", "An improved dynamic probabilistic packet marking algorithm named IDPPM is presented, which not only can locate an attack source rapidly and accurately, but also can reduce the marking overhead of routers near the attackers, which is the greatest contribution of our technique. In contrast to previous work, the challenge of the weakest node and weakest link is solved at the price of a somewhat larger number of packets to reconstruct the attack path. Theoretical analysis and NS2 simulation results in IPv4 and IPv6 testify that the approach is feasible and efficient, respectively.", "Distributed denial of service attacks continue to pose major threats to the Internet. In order to traceback attack sources (i.e., IP addresses), a well studied approach is probabilistic packet marking (PPM), where each intermediate router of a packet marks it with a certain probability, enabling a victim host to traceback the attack source. 
Using formal analysis and simulations using real Internet topology maps, we show how our TPM scheme can effectively trace DDoS attackers even in presence of spoofing when compared to existing schemes." ] }
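The node-sampling flavor of PPM that these records compare can be illustrated with a small, self-contained simulation (a hedged sketch: the router names, path length, and fixed marking probability are invented here, and real schemes such as FIT or randomize-and-link encode far more per mark than a single router ID):

```python
import random

def mark_along_path(path, p, rng):
    """One packet traverses `path` (attacker-side router first); each
    router overwrites the packet's single mark field with probability p,
    so the victim only ever sees the most recent surviving mark."""
    mark = None
    for router in path:
        if rng.random() < p:
            mark = router
    return mark

def reconstruct(path, p, n_packets, seed=0):
    """Victim-side reconstruction: a router d hops from the victim keeps
    its mark with probability p * (1 - p) ** (d - 1), so ranking the
    observed marks by frequency recovers the routers in distance order."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_packets):
        m = mark_along_path(path, p, rng)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Attack path from the attacker (R5) down to the victim's last hop (R1).
print(reconstruct(["R5", "R4", "R3", "R2", "R1"], p=0.2, n_packets=20000))
```

The exponentially decaying sampling rate for far-away routers is exactly why the surveyed schemes report such different convergence times on different topologies.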
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and, (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on exactly the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance a complicated and turns out to be a crucial open problem in this research area.
The incremental deployment feature refers to whether the scheme would still be successful if the marking scheme were deployed on only a fraction of the routers in the network. Only a few schemes explicitly state that they would be successful when partially deployed @cite_7 @cite_6 @cite_21 @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_6", "@cite_7" ], "mid": [ "2134697892", "1959318606", "2150228605", "1967949770" ], "abstract": [ "This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to “link” message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.", "Traceback mechanisms are a critical part of the defense against IP spoofing and DoS attacks, as well as being of forensic value to law enforcement. Currently proposed IP traceback mechanisms are inadequate to address the traceback problem for the following reasons: they require DDoS victims to gather thousands of packets to reconstruct a single attack path; they do not scale to large scale distributed DoS attacks; and they do not support incremental deployment. We propose fast Internet traceback (FIT), a new packet marking approach that significantly improves IP traceback in several dimensions: (1) victims can identify attack paths with high probability after receiving only tens of packets, a reduction of 1-3 orders of magnitude compared to previous packet marking schemes; (2) FIT performs well even in the presence of legacy routers, allowing every FIT-enabled router in path to be identified; and (3) FIT scales to large distributed attacks with thousands of attackers. Compared with previous packet marking schemes, FIT represents a step forward in performance and deployability.", "Defending against distributed denial-of-service attacks is one of the hardest security problems on the Internet today. 
One difficulty in thwarting these attacks is tracing their source, because attackers often use incorrect, or spoofed, IP source addresses to disguise the true origin. In this paper, we present two new schemes, the advanced marking scheme and the authenticated marking scheme, which allow the victim to trace-back the approximate origin of spoofed IP packets. Our techniques feature low network and router overhead, and support incremental deployment. In contrast to previous work, our techniques have significantly higher precision (lower false positive rate) and lower computation overhead for the victim to reconstruct the attack paths under large scale distributed denial-of-service attacks. Furthermore, the authenticated marking scheme provides efficient authentication of routers' markings such that even a compromised router cannot forge or tamper with markings from other uncompromised routers.", "This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed “post-mortem” -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology." ] }
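The convergence-time discrepancy that the survey abstract highlights follows directly from the marking arithmetic. A commonly quoted coupon-collector-style bound for node sampling (an assumption here: each of the ten compared schemes derives its own variant) is E[packets] ≤ ln(d) / (p·(1-p)^(d-1)) for a d-hop path:

```python
import math

def ppm_convergence_bound(d, p):
    """Upper bound on the expected number of marked packets the victim
    needs before every router on a d-hop path has been sampled at least
    once, under node sampling with per-router marking probability p."""
    return math.log(d) / (p * (1 - p) ** (d - 1))

# Longer paths (i.e., different underlying topologies) blow up the bound,
# which is why the evaluation topology matters so much when comparing schemes.
for d in (10, 15, 20, 25):
    print(d, round(ppm_convergence_bound(d, p=0.04)))
```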
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
Re-marking refers to whether the marking scheme at a router permits the overwriting of previous edge or router information in a packet. The majority of the considered schemes permit re-marking of packets @cite_7 @cite_6 @cite_21 @cite_17 @cite_23 @cite_15 @cite_20 .
{ "cite_N": [ "@cite_7", "@cite_21", "@cite_6", "@cite_23", "@cite_15", "@cite_20", "@cite_17" ], "mid": [ "1967949770", "1959318606", "2150228605", "2095726404", "2134697892", "2152835245", "2114418170" ], "abstract": [ "This paper describes a technique for tracing anonymous packet flooding attacks in the Internet back towards their source. This work is motivated by the increased frequency and sophistication of denial-of-service attacks and by the difficulty in tracing packets with incorrect, or “spoofed”, source addresses. In this paper we describe a general purpose traceback mechanism based on probabilistic packet marking in the network. Our approach allows a victim to identify the network path(s) traversed by attack traffic without requiring interactive operational support from Internet Service Providers (ISPs). Moreover, this traceback can be performed “post-mortem” -- after an attack has completed. We present an implementation of this technology that is incrementally deployable, (mostly) backwards compatible and can be efficiently implemented using conventional technology.", "Traceback mechanisms are a critical part of the defense against IP spoofing and DoS attacks, as well as being of forensic value to law enforcement. Currently proposed IP traceback mechanisms are inadequate to address the traceback problem for the following reasons: they require DDoS victims to gather thousands of packets to reconstruct a single attack path; they do not scale to large scale distributed DoS attacks; and they do not support incremental deployment. 
We propose fast Internet traceback (FIT), a new packet marking approach that significantly improves IP traceback in several dimensions: (1) victims can identify attack paths with high probability after receiving only tens of packets, a reduction of 1-3 orders of magnitude compared to previous packet marking schemes; (2) FIT performs well even in the presence of legacy routers, allowing every FIT-enabled router in path to be identified; and (3) FIT scales to large distributed attacks with thousands of attackers. Compared with previous packet marking schemes, FIT represents a step forward in performance and deployability.", "Defending against distributed denial-of-service attacks is one of the hardest security problems on the Internet today. One difficulty in thwarting these attacks is tracing their source, because attackers often use incorrect, or spoofed, IP source addresses to disguise the true origin. In this paper, we present two new schemes, the advanced marking scheme and the authenticated marking scheme, which allow the victim to trace-back the approximate origin of spoofed IP packets. Our techniques feature low network and router overhead, and support incremental deployment. In contrast to previous work, our techniques have significantly higher precision (lower false positive rate) and lower computation overhead for the victim to reconstruct the attack paths under large scale distributed denial-of-service attacks. Furthermore, the authenticated marking scheme provides efficient authentication of routers' markings such that even a compromised router cannot forge or tamper with markings from other uncompromised routers.", "Distributed denial of service attacks continue to pose major threats to the Internet. In order to traceback attack sources (i.e., IP addresses), a well studied approach is probabilistic packet marking (PPM), where each intermediate router of a packet marks it with a certain probability, enabling a victim host to traceback the attack source. 
In a recent study, we showed how attackers can take advantage of the probabilistic nature of packet markings in existing PPM schemes to create spoofed marks, hence compromising traceback. In this paper, we propose a new PPM scheme called the TTL-based PPM (TPM) scheme, where each packet is marked with a probability inversely proportional to the distance traversed by the packet so far. Thus, packets that have to traverse longer distances are marked with higher probability, compared to those that have to traverse shorter distances. This ensures that a packet is marked with much higher probability by intermediate routers than by traditional mechanisms, hence reducing the effectiveness of spoofed packets reaching victims. Using formal analysis and simulations using real Internet topology maps, we show how our TPM scheme can effectively trace DDoS attackers even in the presence of spoofing when compared to existing schemes.", "This paper presents an approach to IP traceback based on the probabilistic packet marking paradigm. Our approach, which we call randomize-and-link, uses large checksum cords to “link” message fragments in a way that is highly scalable, for the checksums serve both as associative addresses and data integrity verifiers. The main advantage of these checksum cords is that they spread the addresses of possible router messages across a spectrum that is too large for the attacker to easily create messages that collide with legitimate messages.", "An improved dynamic probabilistic packet marking algorithm named IDPPM is presented, which not only can locate an attack source rapidly and accurately, but also can reduce the marking overhead of routers near the attackers, which is the greatest contribution of our technique. In contrast to previous work, the challenge of the weakest node and weakest link is solved at the price of a somewhat larger number of packets to reconstruct the attack path. 
Theoretical analysis and NS2 simulation results in IPv4 and IPv6 testify that the approach is feasible and efficient, respectively.", "The probabilistic packet marking (PPM) algorithm is a promising way to discover the Internet map or an attack graph that the attack packets traversed during a distributed denial-of-service attack. However, the PPM algorithm is not perfect, as its termination condition is not well defined in the literature. More importantly, without a proper termination condition, the attack graph constructed by the PPM algorithm would be wrong. In this work, we provide a precise termination condition for the PPM algorithm and name the new algorithm the rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm is that when the algorithm terminates, the algorithm guarantees that the constructed attack graph is correct, with a specified level of confidence. We carry out simulations on the RPPM algorithm and show that the RPPM algorithm can guarantee the correctness of the constructed attack graph under 1) different probabilities that a router marks the attack packets and 2) different structures of the network graph. The RPPM algorithm provides an autonomous way for the original PPM algorithm to determine its termination, and it is a promising means of enhancing the reliability of the PPM algorithm." ] }
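The dynamic-marking idea behind IDPPM and TPM above, raising the marking probability as the packet travels, can be sketched with the classic reservoir-sampling rule (a simplification assumed here, not the exact probabilities either paper uses): if the i-th router marks with probability 1/i, every router on the path ends up equally represented, removing the weakest-link bias of fixed-probability PPM.

```python
import random

def dppm_mark(path, rng):
    """Distance-dependent marking: the i-th router (1-indexed from the
    attacker) overwrites the mark with probability 1/i. By the
    reservoir-sampling argument each router's mark survives with
    probability exactly 1/len(path), whatever its distance from the victim."""
    mark = None
    for i, router in enumerate(path, start=1):
        if rng.random() < 1.0 / i:
            mark = router
    return mark

def mark_distribution(path, n_packets, seed=0):
    """Count which router's mark the victim receives over many packets."""
    rng = random.Random(seed)
    counts = {router: 0 for router in path}
    for _ in range(n_packets):
        counts[dppm_mark(path, rng)] += 1
    return counts

# Each of the five routers should receive roughly 30000 / 5 = 6000 marks.
print(mark_distribution(["R5", "R4", "R3", "R2", "R1"], n_packets=30000))
```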
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
The upstream graph feature refers to whether a scheme requires a previously obtained map of the network to successfully trace the specific path taken by attack traffic. Some of the works address how such a map can be obtained to aid in attack graph reconstruction @cite_6 @cite_21 @cite_28 .
{ "cite_N": [ "@cite_28", "@cite_21", "@cite_6" ], "mid": [ "2064328472", "1959318606", "2150228605" ], "abstract": [ "Sources of a Distributed Denial of Service (DDoS) attack can be identified by the traffic they generate using the IP traceback technique. Because of its relevance, the Probabilistic Packet Marking (PPM) family of schemes for IP traceback is an intensively researched field. In these schemes, routers are given the extra function of randomly selecting packets from those that go through them, to embed their address information in those selected packets. During or after the attack, the paths that were traversed by the attack traffic can be identified based on the router information in the marked packets. Since these schemes require a large number of received packets to trace an attacker successfully, they usually demand a high time and space complexity to trace many attackers, as is the case in DDoS attacks. This is partly because the marking scheme allows remarking, where routers can overwrite previous marking information in a selected packet, which leads to data loss. We present the Prediction Based Scheme (PBS), which is an addition to the PPM schemes for IP traceback. The proposed approach consists of two parts: (a) a marking scheme that reduces the number of packets required to trace a DoS attacker and (b) an extension to a traceback algorithm, whose main feature is to return a complete attack graph with fewer received packets than the traditional algorithm. The proposed marking scheme alleviates the problem of data loss by ensuring previous marking information is not overwritten. Additionally, the proposed traceback algorithm uses graphs built using legitimate traffic to predict the path taken by attack traffic. 
Results show that the marking scheme in PBS, compared to PPM, ensures that traceback is possible with about 54% as many total packets to achieve complete attack path construction, while the traceback algorithm takes about 33% as many marked packets.", "Traceback mechanisms are a critical part of the defense against IP spoofing and DoS attacks, as well as being of forensic value to law enforcement. Currently proposed IP traceback mechanisms are inadequate to address the traceback problem for the following reasons: they require DDoS victims to gather thousands of packets to reconstruct a single attack path; they do not scale to large scale distributed DoS attacks; and they do not support incremental deployment. We propose fast Internet traceback (FIT), a new packet marking approach that significantly improves IP traceback in several dimensions: (1) victims can identify attack paths with high probability after receiving only tens of packets, a reduction of 1-3 orders of magnitude compared to previous packet marking schemes; (2) FIT performs well even in the presence of legacy routers, allowing every FIT-enabled router in path to be identified; and (3) FIT scales to large distributed attacks with thousands of attackers. Compared with previous packet marking schemes, FIT represents a step forward in performance and deployability.", "Defending against distributed denial-of-service attacks is one of the hardest security problems on the Internet today. One difficulty in thwarting these attacks is tracing their source, because attackers often use incorrect, or spoofed, IP source addresses to disguise the true origin. In this paper, we present two new schemes, the advanced marking scheme and the authenticated marking scheme, which allow the victim to trace-back the approximate origin of spoofed IP packets. Our techniques feature low network and router overhead, and support incremental deployment. 
In contrast to previous work, our techniques have significantly higher precision (lower false positive rate) and lower computation overhead for the victim to reconstruct the attack paths under large scale distributed denial-of-service attacks. Furthermore, the authenticated marking scheme provides efficient authentication of routers' markings such that even a compromised router cannot forge or tamper with markings from other uncompromised routers." ] }
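The upstream-graph feature discussed in this record, using a previously obtained network map to aid reconstruction, can be sketched as a walk over that map that keeps only routers whose marks the victim has actually received (the map shape and router names here are invented for illustration):

```python
def reconstruct_with_map(upstream, victim, marks):
    """Walk the victim-rooted upstream map (node -> parents one hop
    farther from the victim) and keep only edges whose far endpoint
    appears in the set of received router marks. The map prunes the
    candidate space, which is why map-assisted schemes need fewer
    marked packets than map-free reconstruction."""
    edges, frontier, seen = [], [victim], {victim}
    while frontier:
        node = frontier.pop()
        for parent in upstream.get(node, ()):
            if parent in marks and parent not in seen:
                edges.append((parent, node))
                seen.add(parent)
                frontier.append(parent)
    return edges

# Hypothetical victim-rooted map: R3 is on the map but sent no marks,
# so it is pruned from the reconstructed attack graph.
upstream = {"V": {"R1"}, "R1": {"R2", "R3"}, "R2": {"R4"}}
marks = {"R1", "R2", "R4"}  # router IDs recovered from marked packets
print(reconstruct_with_map(upstream, "V", marks))
```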
1601.08011
2265010077
Multiple probabilistic packet marking (PPM) schemes for IP traceback have been proposed to deal with Distributed Denial of Service (DDoS) attacks by reconstructing their attack graphs and identifying the attack sources. In this paper, ten PPM-based IP traceback schemes are compared and analyzed in terms of features such as convergence time, performance evaluation, underlying topologies, incremental deployment, re-marking, and upstream graph. Our analysis shows that the considered schemes exhibit a significant discrepancy in performance as well as performance assessment. We concisely demonstrate this by providing a table showing that (a) different metrics are used for many schemes to measure their performance and (b) most schemes are evaluated on different classes of underlying network topologies. Our results reveal that both the value and arrangement of the PPM-based scheme convergence times vary depending on the underlying network topology. As a result, this paper shows that a side-by-side comparison of the scheme performance is complicated and turns out to be a crucial open problem in this research area.
It is important to point out that PPM-based schemes are not the only proposed approaches to IP traceback @cite_4 @cite_11 . Alternatives include packet logging @cite_0 , specialized routing @cite_12 , Internet control message protocol (ICMP) traceback @cite_2 , deterministic packet marking @cite_25 , and hybrid approaches that combine different traceback techniques @cite_22 or combine traceback with anomaly detection @cite_27 .
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_0", "@cite_27", "@cite_2", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2041784634", "1984956506", "", "2119227347", "1578138711", "2163809997", "206164581", "2165780801" ], "abstract": [ "In this article we present the current state of the art in IP traceback. The rising threat of cyber attacks, especially DDoS, makes the IP traceback problem very relevant to today's Internet security. Each approach is evaluated in terms of its pros and cons. We also relate each approach to practical deployment issues on the existing Internet infrastructure. The functionality of each approach is discussed in detail and then evaluated. We conclude with a discussion on some legal implications of IP traceback.", "Because the Internet has been widely applied in various fields, more and more network security issues emerge and catch people's attention. However, adversaries often hide themselves by spoofing their own IP addresses and then launch attacks. For this reason, researchers have proposed a lot of traceback schemes to trace the source of these attacks. Some use only one packet in their packet logging schemes to achieve IP tracking. Others combine packet marking with packet logging and therefore create hybrid IP traceback schemes demanding less storage but requiring a longer search. In this paper, we propose a new hybrid IP traceback scheme with efficient packet logging aiming to have a fixed storage requirement for each router (under 320 KB, according to CAIDA's skitter data set) in packet logging without the need to refresh the logged tracking information and to achieve zero false positive and false negative rates in attack-path reconstruction. In addition, we use a packet's marking field to censor attack traffic on its upstream routers. 
Lastly, we simulate and analyze our scheme, in comparison with other related research, in the following aspects: storage requirement, computation, and accuracy.", "", "A low-rate distributed denial of service (DDoS) attack has a significant ability to conceal its traffic because it is very much like normal traffic. It has the capacity to elude the current anomaly-based detection schemes. An information metric can quantify the differences of network traffic with various probability distributions. In this paper, we propose using two new information metrics, the generalized entropy metric and the information distance metric, to detect low-rate DDoS attacks by measuring the difference between legitimate traffic and attack traffic. The proposed generalized entropy metric can detect attacks several hops earlier (three hops earlier when the order α = 10) than the traditional Shannon metric. The proposed information distance metric outperforms (six hops earlier when the order α = 10) the popular Kullback-Leibler divergence approach as it can clearly enlarge the adjudication distance and then obtain the optimal detection sensitivity. The experimental results show that the proposed information metrics can effectively detect low-rate DDoS attacks and clearly reduce the false positive rate. Furthermore, the proposed IP traceback algorithm can find all attacks as well as attackers from their own local area networks (LANs) and discard attack traffic.", "", "We propose a new approach for IP traceback which is scalable and simple to implement, and introduces no bandwidth and practically no processing overhead. It is backward compatible with equipment which does not implement it. The approach is capable of tracing back attacks, which are composed of just a few packets. 
In addition, a service provider can implement this scheme without revealing its internal network topology.", "Finding the source of forged Internet Protocol (IP) datagrams in a large, high-speed network is difficult due to the design of the IP protocol and the lack of sufficient capability in most high-speed, high-capacity router implementations. Typically, not enough of the routers in such a network are capable of performing the packet forwarding diagnostics required for this. As a result, tracking down the source of a flood-type denial-of-service (DoS) attack is usually difficult or impossible in these networks. CenterTrack is an overlay network, consisting of IP tunnels or other connections, that is used to selectively reroute interesting datagrams directly from edge routers to special tracking routers. The tracking routers, or associated sniffers, can easily determine the ingress edge router by observing from which tunnel the datagrams arrive. The datagrams can be examined, then dropped or forwarded to the appropriate egress point. This system simplifies the work required to determine the ingress adjacency of a flood attack while bypassing any equipment which may be incapable of performing the necessary diagnostic functions.", "The integrity of the Internet is severely impaired by rampant denial of service and distributed DoS attacks. It is by no means trivial to devise a countermeasure to address these attacks because of their anonymous and distributed nature. This article presents a brief survey of the most promising recently proposed schemes for tracing cyber attacks: IP traceback. Since IP traceback technology is evolving rapidly, for the community to better comprehend and capture the properties of disparate traceback approaches, we first classify these schemes from multiple aspects. 
From the perspective of practicality and feasibility, we then analyze and explore the advantages and disadvantages of these schemes in depth so that shortcomings and possible enhancements of each scheme are highlighted. Finally, open problems and future work are discussed, and concluding remarks are drawn." ] }
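Packet logging, one of the alternatives listed above, can be sketched with a SPIE-style per-router digest store (the sizes and hash choice here are illustrative assumptions, not taken from any cited scheme): the router records a few hash-derived bits per forwarded packet in a Bloom filter and can later answer traceback membership queries with no false negatives and a tunable false-positive rate.

```python
import hashlib

class DigestLog:
    """Bloom-filter log of packet digests kept at a single router."""

    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, packet: bytes):
        # k independent bit positions derived from salted SHA-256 digests.
        for salt in range(self.k):
            digest = hashlib.sha256(salt.to_bytes(2, "big") + packet).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def log(self, packet: bytes) -> None:
        for pos in self._positions(packet):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def seen(self, packet: bytes) -> bool:
        # All k bits set -> probably forwarded; any bit clear -> definitely not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(packet))

router_log = DigestLog()
router_log.log(b"attack-packet-payload")
print(router_log.seen(b"attack-packet-payload"))  # membership query during traceback
```

Unlike PPM, this approach can attribute a single packet, at the cost of per-router storage and a query protocol.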
1601.07962
2958843287
This paper presents evidence-based dynamic analysis, an approach that enables lightweight analyses--under 5% overhead for these bugs--making it practical for the first time to perform these analyses in deployed settings. The key insight of evidence-based dynamic analysis is that for a class of errors, it is possible to ensure that evidence that they happened at some point in the past remains for later detection. Evidence-based dynamic analysis allows execution to proceed at nearly full speed until the end of an epoch (e.g., a heavyweight system call). It then examines program state to check for evidence that an error occurred at some time during that epoch. If so, it rolls back execution and re-executes the code with instrumentation activated to pinpoint the error. We present DoubleTake, a prototype evidence-based dynamic analysis framework. DoubleTake is practical and easy to deploy, requiring neither custom hardware, compiler, nor operating system support. We demonstrate DoubleTake's generality and efficiency by building dynamic analyses that find buffer overflows, memory use-after-free errors, and memory leaks. Our evaluation shows that DoubleTake is efficient, imposing just 4% overhead on average, making it the fastest such system to date. It is also precise: DoubleTake pinpoints the location of these errors to the exact line and memory addresses where they occur, providing valuable debugging information to programmers.
Aftersight @cite_22 is the related work that is closest in spirit to DoubleTake. It separates analysis from normal execution by logging inputs to a virtual machine and exporting them to a separate virtual machine for detailed (slow) analysis that can run offline or concurrently with application execution. Aftersight monitors applications running in a virtual machine, which adds some amount of workload-dependent overhead. VM-based recording alone adds additional runtime overhead, an average of 5%. Aftersight's dynamic analyses are offloaded to unused processors, which may not be available in some deployments. Unlike Aftersight, DoubleTake does not require the use of a virtual machine, does not rely on additional processors for dynamic analyses, and incurs lower average overhead.
{ "cite_N": [ "@cite_22" ], "mid": [ "1549813142" ], "abstract": [ "Analyzing the behavior of running programs has a wide variety of compelling applications, from intrusion detection and prevention to bug discovery. Unfortunately, the high runtime overheads imposed by complex analysis techniques makes their deployment impractical in most settings. We present a virtual machine based architecture called Aftersight ameliorates this, providing a flexible and practical way to run heavyweight analyses on production workloads. Aftersight decouples analysis from normal execution by logging nondeterministic VM inputs and replaying them on a separate analysis platform. VM output can be gated on the results of an analysis for intrusion prevention or analysis can run at its own pace for intrusion detection and best effort prevention. Logs can also be stored for later analysis offline for bug finding or forensics, allowing analyses that would otherwise be unusable to be applied ubiquitously. In all cases, multiple analyses can be run in parallel, added on demand, and are guaranteed not to interfere with the running workload. We present our experience implementing Aftersight as part of the VMware virtual machine platform and using it to develop a realtime intrusion detection and prevention system, as well as an an offline system for bug detection, which we used to detect numerous novel and serious bugs in VMware ESX Server, Linux, and Windows applications." ] }
1601.07884
2255455642
We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with @math million street-view images and the MediaEval '15 Placing Task dataset with @math million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art.
Visual-only geo-location estimation approaches can be divided into two categories. The first is geo-constrained approaches, which estimate geo-location within a geographically constrained area @cite_20 @cite_34 or a finite set of locations @cite_0 @cite_35 @cite_24 @cite_33 . The second is geo-unconstrained approaches, which estimate geo-location at a global scale @cite_12 @cite_10 . The challenge of geo-unconstrained geo-location estimation is daunting: a recent survey @cite_8 indicated that there are still ample opportunities waiting to be explored in this respect. In this work, our overall goal is to substantially improve the accuracy of image location estimation using only the images' visual content, and to achieve this improvement in both the geo-constrained and geo-unconstrained scenarios. As demonstrated by our experimental results, DVEM's representation and matching of images using geo-distinctive visual elements achieves a substantial performance improvement compared to existing approaches to both geo-constrained and geo-unconstrained location estimation.
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_8", "@cite_0", "@cite_24", "@cite_34", "@cite_10", "@cite_12", "@cite_20" ], "mid": [ "1990736548", "2039715554", "83893761", "2066185824", "", "2013270301", "2040427182", "2103163130", "" ], "abstract": [ "New applications are emerging every day exploiting the huge data volume in community photo collections. Most focus on popular subsets, e.g., images containing landmarks or associated to Wikipedia articles. In this work we are concerned with the problem of accurately finding the location where a photo is taken without needing any metadata, that is, solely by its visual content. We also recognize landmarks where applicable, automatically linking them to Wikipedia. We show that the time is right for automating the geo-tagging process, and we show how this can work at large scale. In doing so, we do exploit redundancy of content in popular locations--but unlike most existing solutions, we do not restrict to landmarks. In other words, we can compactly represent the visual content of all thousands of images depicting e.g., the Parthenon and still retrieve any single, isolated, non-landmark image like a house or a graffiti on a wall. Starting from an existing, geo-tagged dataset, we cluster images into sets of different views of the same scene. This is a very efficient, scalable, and fully automated mining process. We then align all views in a set to one reference image and construct a 2D scene map. Our indexing scheme operates directly on scene maps. We evaluate our solution on a challenging one million urban image dataset and provide public access to our service through our online application, VIRaL.", "Landmark search is crucial to improve the quality of travel experience. Smart phones make it possible to search landmarks anytime and anywhere. Most of the existing work computes image features on smart phones locally after taking a landmark image. 
Compared with sending original image to the remote server, sending computed features saves network bandwidth and consequently makes sending process fast. However, this scheme would be restricted by the limitations of phone battery power and computational ability. In this paper, we propose to send compressed (low resolution) images to remote server instead of computing image features locally for landmark recognition and search. To this end, a robust 3D model based method is proposed to recognize query images with corresponding landmarks. Using the proposed method, images with low resolution can be recognized accurately, even though images only contain a small part of the landmark or are taken under various conditions of lighting, zoom, occlusions and different viewpoints. In order to provide an attractive landmark search result, a 3D texture model is generated to respond to a landmark query. The proposed search approach, which opens up a new direction, starts from a 2D compressed image query input and ends with a 3D model search result.", "Benchmarks have the power to bring research communities together to focus on specific research challenges. They drive research forward by making it easier to systematically compare and contrast new solutions, and evaluate their performance with respect to the existing state of the art. In this chapter, we present a retrospective on the Placing Task, a yearly challenge offered by the MediaEval Multimedia Benchmark. The Placing Task, launched in 2010, is a benchmarking task that requires participants to develop algorithms that automatically predict the geolocation of social multimedia (videos and images). This chapter covers the editions of the Placing Task offered in 2010–2013, and also presents an outlook onto 2014. We present the formulation of the task and the task dataset for each year, tracing the design decisions that were made by the organizers, and how each year built on the previous year. 
Finally, we provide a summary of future directions and challenges for multimodal geolocation, and concluding remarks on how benchmarking has catalyzed research progress in the research area of geolocation prediction for social multimedia.", "Social media has become a very popular way for people to share their photos with friends. Because most of the social images are attached with GPS (geo-tags), a photo's GPS information can be estimated with the help of the large geo-tagged image set while using a visual searching based approach. This paper proposes an unsupervised image GPS location estimation approach with hierarchical global feature clustering and local feature refinement. It consists of two parts: an offline system and an online system. In the offline system, a hierarchical structure is constructed for a large-scale offline social image set with GPS information. Representative images are selected for each GPS location refined cluster, and an inverted file structure is proposed. In the online system, when given an input image, its GPS information can be estimated by hierarchical global clusters selection and local feature refinement in the online system. Both the computational cost and GPS estimation performance demonstrates the effectiveness of the proposed hierarchical structure and inverted file structure in our approach.", "", "Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. 
We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.", "We propose an automatic method that addresses the challenge of predicting the geo-location of social images using only the visual content of those images. Our method is able to generate a geo-location prediction for an image globally . In this respect, it contrasts with other existing approaches, specifically with those that generate predictions restricted to specific cities, landmarks, or an otherwise pre-defined set of locations. The essence and the main novelty of our ranking-based method is that for a given query image a geo-location is recommended based on the evidence collected from images that are not only geographically close to this geo-location, but also have sufficient visual similarity to the query image within the considered image collection. Our method is evaluated experimentally on a public dataset of 8.8 million geo-tagged images from Flickr, released by the MediaEval 2013 evaluation benchmark. Experiments show that the proposed method delivers a substantial performance improvement compared to the existing related approaches, particularly for queries with high numbers of neighbors . In addition, a detailed analysis of the method’s performance reveals the impact of different visual feature extraction and image matching strategies, as well as the densities and types of images found at different locations, on the prediction accuracy.", "Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. 
The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.", "" ] }
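The core idea behind DVEM's geo-distinctiveness can be conveyed with a toy sketch. This is an illustrative simplification, not the paper's exact formulation: a visual element matched between the query and a candidate location contributes more to that location's score when it occurs in few other candidate locations, in the spirit of an IDF weight computed over locations rather than documents.

```python
import math

# Sketch of geo-distinctive weighting (illustrative, not DVEM's exact formula):
# a visual element matched against a candidate location counts for more when
# it occurs in few other locations -- an IDF-style weight over locations.
def geo_distinctive_scores(location_elements):
    """location_elements: dict mapping location -> set of visual-element ids
    matched between the query image and that location's images."""
    n_locations = len(location_elements)
    # In how many candidate locations does each element occur?
    occurrence = {}
    for elems in location_elements.values():
        for e in elems:
            occurrence[e] = occurrence.get(e, 0) + 1
    # Score each location by the summed distinctiveness of its elements.
    return {loc: sum(math.log(n_locations / occurrence[e]) for e in elems)
            for loc, elems in location_elements.items()}

# Hypothetical element names: generic windows/doors occur everywhere and are
# down-weighted; a mural seen only at loc_A is geo-distinctive evidence.
matches = {
    "loc_A": {"window", "generic_door", "mural"},
    "loc_B": {"window", "generic_door"},
    "loc_C": {"window"},
}
scores = geo_distinctive_scores(matches)
best = max(scores, key=scores.get)
print(best)   # "mural" occurs nowhere else, so loc_A wins
```

This mirrors the common-sense observation in the abstract: generic urban features such as windows and doors carry little evidence about location, while elements unique to one place should dominate the ranking.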
1601.07884
2255455642
We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with @math million street-view images and the MediaEval '15 Placing Task dataset with @math million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art.
Gopalan @cite_31 , using the same data set, modeled the transformation between the image appearance space and the location grouping space and incorporated it into a hierarchical sparse coding approach to learn the features that are useful in discriminating images across locations. We choose this dataset for our experiments on the geo-constrained setting, and use this approach as one of our baselines. The other papers that evaluate using this data set are the aggregated selective matching kernel proposed by (2015) @cite_2 , the work exploiting descriptor distinctiveness by Arandjelović and Zisserman (2014) @cite_18 , the work exploiting repeated patterns by (2013) @cite_34 , the graph-based query expansion method of (2012) @cite_27 and the initial work of (2011) @cite_20 . Our experiments make a comparison with all of these approaches.
{ "cite_N": [ "@cite_18", "@cite_27", "@cite_2", "@cite_31", "@cite_34", "@cite_20" ], "mid": [ "963660264", "", "1976794880", "1942040090", "2013270301", "" ], "abstract": [ "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 s on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1 ).", "", "This paper considers a family of metrics to compare images based on their local descriptors. It encompasses the vector or locally aggregated descriptors descriptor and matching techniques such as hamming embedding. Making the bridge between these approaches leads us to propose a match kernel that takes the best of existing techniques by combining an aggregation procedure with a selective match kernel. The representation underpinning this kernel is approximated, providing a large scale image search both precise and scalable, as shown by our experiments on several benchmarks. We show that the same aggregation procedure, originally applied per image, can effectively operate on groups of similar features found across multiple images. This method implicitly performs feature set augmentation, while enjoying savings in memory requirements at the same time. 
Finally, the proposed method is shown effective for place recognition, outperforming state of the art methods on a large scale landmark recognition benchmark.", "We address the problem of estimating location information of an image using principles from automated representation learning. We pursue a hierarchical sparse coding approach that learns features useful in discriminating images across locations, by initializing it with a geometric prior corresponding to transformations between image appearance space and their corresponding location grouping space using the notion of parallel transport on manifolds. We then extend this approach to account for the availability of heterogeneous data modalities such as geo-tags and videos pertaining to different locations, and also study a relatively under-addressed problem of transferring knowledge available from certain locations to infer the grouping of data from novel locations. We evaluate our approach on several standard datasets such as im2gps, San Francisco and MediaEval2010, and obtain state-of-the-art results.", "Repeated structures such as building facades, fences or road markings often represent a significant challenge for place recognition. Repeated structures are notoriously hard for establishing correspondences using multi-view geometry. Even more importantly, they violate the feature independence assumed in the bag-of-visual-words representation which often leads to over-counting evidence and significant degradation of retrieval performance. In this work we show that repeated structures are not a nuisance but, when appropriately represented, they form an important distinguishing feature for many places. We describe a representation of repeated structures suitable for scalable retrieval. It is based on robust detection of repeated image structures and a simple modification of weights in the bag-of-visual-word model. 
Place recognition results are shown on datasets of street-level imagery from Pittsburgh and San Francisco demonstrating significant gains in recognition performance compared to the standard bag-of-visual-words baseline and more recently proposed burstiness weighting.", "" ] }
1601.07884
2255455642
We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with @math million street-view images and the MediaEval '15 Placing Task dataset with @math million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art.
DVEM is suited for cases in which there is no finite set of locations to which a classification approach could be applied. However, we point out here that classification approaches have been proposed for geo-constrained content-based location estimation. @cite_28 modeled each geo-tagged image in the collection as a class, and learned a per-example linear SVM classifier for each of these classes, with a calibration procedure that makes the classification scores comparable to each other. Due to the high computational cost of both the off-line learning and online querying phases, the experiment was conducted on a limited dataset of @math photos from Google Streetview taken in Pittsburgh, U.S., covering roughly an area of @math .
{ "cite_N": [ "@cite_28" ], "mid": [ "1995288918" ], "abstract": [ "The aim of this work is to localize a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions and the large size of the image database. The contribution of this work is two-fold. First, we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database in a similar manner to per-exemplar SVMs in object recognition. Second, as only few positive training examples are available for each location, we propose a new approach to calibrate all the per-location SVM classifiers using only the negative examples. The calibration we propose relies on a significance measure essentially equivalent to the p-values classically used in statistical hypothesis testing. Experiments are performed on a database of 25,000 geotagged street view images of Pittsburgh and demonstrate improved place recognition accuracy of the proposed approach over the previous work." ] }
1601.07884
2255455642
We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with @math million street-view images and the MediaEval '15 Placing Task dataset with @math million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art.
Authors that go beyond city scale may still address only a constrained number of locations. @cite_35 investigate location prediction for popular locations in @math European cities using scene maps built by visually clustering and aligning images depicting the same view of a scene. @cite_0 constructed a hierarchical structure mined from a set of images depicting about 1,500 predefined places of interest, and proposed a hierarchical method to estimate an image's location by matching its visual content against this hierarchical structure. Our approach resembles @cite_35 in that we also use sets of images to represent locations. Note, however, that in DVEM location representations are created specifically for individual queries at prediction time, making it possible to scale beyond a fixed set of locations.
{ "cite_N": [ "@cite_0", "@cite_35" ], "mid": [ "2066185824", "1990736548" ], "abstract": [ "Social media has become a very popular way for people to share their photos with friends. Because most of the social images are attached with GPS (geo-tags), a photo's GPS information can be estimated with the help of the large geo-tagged image set while using a visual searching based approach. This paper proposes an unsupervised image GPS location estimation approach with hierarchical global feature clustering and local feature refinement. It consists of two parts: an offline system and an online system. In the offline system, a hierarchical structure is constructed for a large-scale offline social image set with GPS information. Representative images are selected for each GPS location refined cluster, and an inverted file structure is proposed. In the online system, when given an input image, its GPS information can be estimated by hierarchical global clusters selection and local feature refinement in the online system. Both the computational cost and GPS estimation performance demonstrates the effectiveness of the proposed hierarchical structure and inverted file structure in our approach.", "New applications are emerging every day exploiting the huge data volume in community photo collections. Most focus on popular subsets, e.g., images containing landmarks or associated to Wikipedia articles. In this work we are concerned with the problem of accurately finding the location where a photo is taken without needing any metadata, that is, solely by its visual content. We also recognize landmarks where applicable, automatically linking them to Wikipedia. We show that the time is right for automating the geo-tagging process, and we show how this can work at large scale. In doing so, we do exploit redundancy of content in popular locations--but unlike most existing solutions, we do not restrict to landmarks. 
In other words, we can compactly represent the visual content of all thousands of images depicting e.g., the Parthenon and still retrieve any single, isolated, non-landmark image like a house or a graffiti on a wall. Starting from an existing, geo-tagged dataset, we cluster images into sets of different views of the same scene. This is a very efficient, scalable, and fully automated mining process. We then align all views in a set to one reference image and construct a 2D scene map. Our indexing scheme operates directly on scene maps. We evaluate our solution on a challenging one million urban image dataset and provide public access to our service through our online application, VIRaL." ] }
1601.07884
2255455642
We propose an image representation and matching approach that substantially improves visual-based location estimation for images. The main novelty of the approach, called distinctive visual element matching (DVEM), is its use of representations that are specific to the query image whose location is being predicted. These representations are based on visual element clouds, which robustly capture the connection between the query and visual evidence from candidate locations. We then maximize the influence of visual elements that are geo-distinctive because they do not occur in images taken at many other locations. We carry out experiments and analysis for both geo-constrained and geo-unconstrained location estimation cases using two large-scale, publicly-available datasets: the San Francisco Landmark dataset with @math million street-view images and the MediaEval '15 Placing Task dataset with @math million geo-tagged images from Flickr. We present examples that illustrate the highly-transparent mechanics of the approach, which are based on common sense observations about the visual patterns in image collections. Our results show that the proposed method delivers a considerable performance improvement compared to the state of the art.
The key example of the use of distinctiveness for content-based geo-location estimation is the work of Arandjelović and Zisserman @cite_18 , who modeled the distinctiveness of each local descriptor from its estimated surrounding local density in the descriptor space. This approach differs from ours in two ways: first, we use geo-distinctiveness, calculated on the basis of individual locations, rather than general distinctiveness, and, second, we use geometrically verified salient points, rather than relying on the visual appearance of the descriptors of the salient points. As we will show with experimental results, which use Arandjelović and Zisserman @cite_18 as one of the baselines, this added step of geo-distinctive visual element matching significantly improves location estimation accuracy.
{ "cite_N": [ "@cite_18" ], "mid": [ "963660264" ], "abstract": [ "The objective of this paper is to improve large scale visual object retrieval for visual place recognition. Geo-localization based on a visual query is made difficult by plenty of non-distinctive features which commonly occur in imagery of urban environments, such as generic modern windows, doors, cars, trees, etc. The focus of this work is to adapt standard Hamming Embedding retrieval system to account for varying descriptor distinctiveness. To this end, we propose a novel method for efficiently estimating distinctiveness of all database descriptors, based on estimating local descriptor density everywhere in the descriptor space. In contrast to all competing methods, the (unsupervised) training time for our method (DisLoc) is linear in the number database descriptors and takes only a 100 s on a single CPU core for a 1 million image database. Furthermore, the added memory requirements are negligible (1 )." ] }
1601.08158
1738667096
The semantic localization problem in robotics consists in determining the place where a robot is located by means of semantic categories. The problem is usually addressed as a supervised classification process, where input data correspond to robot perceptions while classes to semantic categories, like kitchen or corridor. In this paper we propose a framework, implemented in the PCL library, which provides a set of valuable tools to easily develop and evaluate semantic localization systems. The implementation includes the generation of 3D global descriptors following a Bag-of-Words approach. This allows the generation of fixed-dimensionality descriptors from any type of keypoint detector and feature extractor combinations. The framework has been designed, structured and implemented to be easily extended with different keypoint detectors, feature extractors as well as classification models. The proposed framework has also been used to evaluate the performance of a set of already implemented descriptors, when used as input for a specific semantic localization system. The obtained results are discussed paying special attention to the internal parameters of the BoW descriptor generation process. Moreover, we also review the combination of some keypoint detectors with different 3D descriptor generation techniques. Presentation of a BoW implementation in the Point Cloud Library. Proposal of a general framework for semantic localization systems. The framework allows for integrations of future 3D features and keypoints. The Harris3D detector outperforms uniform sampling with fewer detected keypoints. BoW descriptors obtain better results than the ESF global feature.
For a complete review of the state of the art in semantic localization we refer the reader to @cite_2 , where a survey on this subject has recently been published. Here, we review the most closely related work from recent years.
{ "cite_N": [ "@cite_2" ], "mid": [ "2265661972" ], "abstract": [ "The evolution of contemporary mobile robotics has given thrust to a series of additional conjunct technologies. Of such is the semantic mapping, which provides an abstraction of space and a means for human-robot communication. The recent introduction and evolution of semantic mapping motivated this survey, in which an explicit analysis of the existing methods is sought. The several algorithms are categorized according to their primary characteristics, namely scalability, inference model, temporal coherence and topological map usage. The applications involving semantic maps are also outlined in the work at hand, emphasizing on human interaction, knowledge representation and planning. The existence of publicly available validation datasets and benchmarking, suitable for the evaluation of semantic mapping techniques is also discussed in detail. Last, an attempt to address open issues and questions is also made. Two level navigation.Cognitive navigation.Spatial semantics." ] }
1601.08158
1738667096
The semantic localization problem in robotics consists in determining the place where a robot is located by means of semantic categories. The problem is usually addressed as a supervised classification process, where input data correspond to robot perceptions while classes to semantic categories, like kitchen or corridor.In this paper we propose a framework, implemented in the PCL library, which provides a set of valuable tools to easily develop and evaluate semantic localization systems. The implementation includes the generation of 3D global descriptors following a Bag-of-Words approach. This allows the generation of fixed-dimensionality descriptors from any type of keypoint detector and feature extractor combinations. The framework has been designed, structured and implemented to be easily extended with different keypoint detectors, feature extractors as well as classification models.The proposed framework has also been used to evaluate the performance of a set of already implemented descriptors, when used as input for a specific semantic localization system. The obtained results are discussed paying special attention to the internal parameters of the BoW descriptor generation process. Moreover, we also review the combination of some keypoint detectors with different 3D descriptor generation techniques. Presentation of a BoW implementation in the Point Cloud Library.Proposal of a general framework for semantic localization systems.The framework allows for integrations of future 3D features and keypoints.The Harris3D detector outperforms uniform sampling with fewer detected keypoints.BoW descriptors obtain better results than the ESF global feature.
As already mentioned, the semantic localization problem consists of acquiring an image, generating a suitable representation (that is, an image descriptor) and classifying the imaged scene @cite_10 . This classification can be performed according to a) high-level features of the environment, such as detected objects @cite_21 @cite_5 @cite_19 , b) global image representations @cite_30 , or c) local features @cite_29 . In @cite_13 , a method for scene classification based on global image features was presented, where the temporal continuity between consecutive images was exploited using a Hidden Markov Model. In @cite_8 , a scene classifier with range data as input and AdaBoost as the classification model was proposed. In 2006, @cite_23 developed a visual scene classifier using composed receptive field histograms @cite_6 and SVMs.
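The supervised pipeline outlined above (perception, descriptor generation, classification) can be illustrated with a minimal sketch. A nearest-class-mean classifier stands in here for the SVM/AdaBoost models used in the cited works, and all names are illustrative:

```python
import numpy as np

def train_nearest_mean(descriptors, labels):
    """Learn one prototype (the mean descriptor) per semantic category."""
    classes = sorted(set(labels))
    return {c: np.mean([d for d, l in zip(descriptors, labels) if l == c], axis=0)
            for c in classes}

def classify(descriptor, prototypes):
    """Assign the scene to the category whose prototype is closest."""
    return min(prototypes, key=lambda c: np.linalg.norm(descriptor - prototypes[c]))
```

In practice the cited systems use stronger classifiers, but the train/classify split over image descriptors is the same.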
{ "cite_N": [ "@cite_30", "@cite_13", "@cite_8", "@cite_29", "@cite_21", "@cite_6", "@cite_19", "@cite_23", "@cite_5", "@cite_10" ], "mid": [ "1532257412", "2128554449", "2117892041", "2042316011", "57608953", "2162113431", "2110811457", "2130917470", "2092104450", "2165974396" ], "abstract": [ "Humans can recognize the gist of a novel image in a single glance, independent of its complexity. How is this remarkable feat accomplished? On the basis of behavioral and computational evidence, this paper describes a formal approach to the representation and the mechanism of scene gist understanding, based on scene-centered, rather than object-centered primitives. We show that the structure of a scene image can be estimated by the mean of global image features, providing a statistical summary of the spatial layout properties (Spatial Envelope representation) of the scene. Global features are based on configurations of spatial scales and are estimated without invoking segmentation or grouping operations. The scene-centered approach is not an alternative to local image analysis but would serve as a feed-forward and parallel pathway of visual processing, able to quickly constrain local feature analysis and enhance object recognition in cluttered natural scenes.", "While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. We present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, main street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.", "This paper addresses the problem of classifying places in the environment of a mobile robot into semantic categories. We believe that semantic information about the type of place improves the capabilities of a mobile robot in various domains including localization, path-planning, or human-robot interaction. Our approach uses AdaBoost, a supervised learning algorithm, to train a set of classifiers for place recognition based on laser range data. In this paper we describe how this approach can be applied to distinguish between rooms, corridors, doorways, and hallways. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various environments.", "In this survey, we give an overview of invariant interest point detectors, how they evolvd over time, how they work, and what their respective strengths and weaknesses are. We begin with defining the properties of the ideal local feature detector. This is followed by an overview of the literature over the past four decades organized in different categories of feature extraction methods. We then provide a more detailed analysis of a selection of methods which had a particularly significant impact on the research field. We conclude with a summary and promising future research directions.", "Presented at the 2007 Robotics: Science and Systems Conference III (RSS), 27-30 June 2007, Atlanta, GA.", "Effective methods for recognising objects or spatio-temporal events can be constructed based on receptive field responses summarised into histograms or other histogram-like image descriptors. This work presents a set of composed histogram features of higher dimensionality, which give significantly better recognition performance compared to the histogram descriptors of lower dimensionality that were used in the original papers by Swain & Bollard (1991) or Schiele & Crowley (2000). The use of histograms of higher dimensionality is made possible by a sparse representation for efficient computation and handling of higher-dimensional histograms. Results of extensive experiments are reported, showing how the performance of histogram-based recognition schemes depend upon different combinations of cues, in terms of Gaussian derivatives or differential invariants applied to either intensity information, chromatic information or both. It is shown that there exist composed higher-dimensional histogram descriptors with much better performance for recognising known objects than previously used histogram features. Experiments are also reported of classifying unknown objects into visual categories.", "Mobile robotics has achieved notable progress, however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surrounding. In particular, identifying indoor scenes, such as an Office or a Kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as Doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object category classifiers to associate low-level visual features to objects, and contextual relations to associate objects to scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer vision based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus of attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot that is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in Office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.", "An important competence for a mobile robot system is the ability to localize and perform context interpretation. This is required to perform basic navigation and to facilitate local specific services. Usually localization is performed based on a purely geometric model. Through use of vision and place recognition a number of opportunities open up in terms of flexibility and association of semantics to the model. To achieve this we present an appearance based method for place recognition. The method is based on a large margin classifier in combination with a rich global image descriptor. The method is robust to variations in illumination and minor scene changes. The method is evaluated across several different cameras, changes in time-of-day and weather conditions. The results clearly demonstrate the value of the approach.", "The future of robots, as our companions is dependent on their ability to understand, interpret and represent the environment in a human compatible manner. Towards this aim, this work attempts to create a hierarchical probabilistic concept-oriented representation of space, based on objects. Specifically, it details efforts taken towards learning and generating concepts and attempts to classify places using the concepts gleaned. Several algorithms, from naive ones using only object category presence to more sophisticated ones using both objects and relationships, are proposed. Both learning and inference use the information encoded in the underlying representation-objects and relative spatial information between them. The approaches are based on learning from exemplars, clustering and the use of Bayesian network classifiers. The approaches are generative. Further, even though they are based on learning from exemplars, they are not ontology specific; i.e. they do not assume the use of any particular ontology. The presented algorithms rely on a robots inherent high-level feature extraction capability (object recognition and structural element extraction) capability to actually form concept models and infer them. Thus, this report presents methods that could enable a robot to to link sensory information to increasingly abstract concepts (spatial constructs). Such a conceptualization and the representation that results thereof would enable robots to be more cognizant of their surroundings and yet, compatible to us. Experiments on conceptualization and place classification are reported. Thus, the theme of this work is-conceptualization and classification for representation and spatial cognition.", "In this paper we describe the problem of Visual Place Categorization (VPC) for mobile robotics, which involves predicting the semantic category of a place from image measurements acquired from an autonomous platform. For example, a robot in an unfamiliar home environment should be able to recognize the functionality of the rooms it visits, such as kitchen, living room, etc. We describe an approach to VPC based on sequential processing of images acquired with a conventional video camera. We identify two key challenges: Dealing with non-characteristic views and integrating restricted-FOV imagery into a holistic prediction. We present a solution to VPC based upon a recently-developed visual feature known as CENTRIST (CENsus TRansform hISTogram). We describe a new dataset for VPC which we have recently collected and are making publicly available. We believe this is the first significant, realistic dataset for the VPC problem. It contains the interiors of six different homes with ground truth labels. We use this dataset to validate our solution approach, achieving promising results." ] }
1601.08158
1738667096
The semantic localization problem in robotics consists in determining the place where a robot is located by means of semantic categories. The problem is usually addressed as a supervised classification process, where input data correspond to robot perceptions while classes to semantic categories, like kitchen or corridor.In this paper we propose a framework, implemented in the PCL library, which provides a set of valuable tools to easily develop and evaluate semantic localization systems. The implementation includes the generation of 3D global descriptors following a Bag-of-Words approach. This allows the generation of fixed-dimensionality descriptors from any type of keypoint detector and feature extractor combinations. The framework has been designed, structured and implemented to be easily extended with different keypoint detectors, feature extractors as well as classification models.The proposed framework has also been used to evaluate the performance of a set of already implemented descriptors, when used as input for a specific semantic localization system. The obtained results are discussed paying special attention to the internal parameters of the BoW descriptor generation process. Moreover, we also review the combination of some keypoint detectors with different 3D descriptor generation techniques. Presentation of a BoW implementation in the Point Cloud Library.Proposal of a general framework for semantic localization systems.The framework allows for integrations of future 3D features and keypoints.The Harris3D detector outperforms uniform sampling with fewer detected keypoints.BoW descriptors obtain better results than the ESF global feature.
The use of the Bag of Words (BoW) technique @cite_17 can also be considered a remarkable milestone for visual semantic scene classification. The BoW process starts by creating a visual dictionary of representative features. Next, each extracted feature is assigned to the closest word in the dictionary. Then, a histogram counting the occurrences of each visual word is computed, and this histogram is used as the image descriptor. An extensive evaluation of BoW representations for scene classification was presented in @cite_22 , demonstrating that visual-word representations are likely to produce superior performance. In @cite_18 , an extension of the BoW technique using a spatial pyramid was proposed. This is also one of the most relevant articles on scene classification, as it merges local and global information into a single image descriptor. The spatial pyramid approach has been successfully applied to several semantic localization problems, and it can be considered a standard solution for generating descriptors.
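The BoW steps just described (dictionary construction via clustering, nearest-word assignment, occurrence histogram) can be sketched as follows. This is a minimal illustration with a hand-rolled k-means over generic feature vectors, not the implementation evaluated in the cited works:

```python
import numpy as np

def build_vocabulary(features, k, iters=20, seed=0):
    """Naive k-means over local features to build a dictionary of k visual words."""
    rng = np.random.default_rng(seed)
    words = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each feature to its nearest word
        d = np.linalg.norm(features[:, None, :] - words[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each word to the mean of its assigned features
        for j in range(k):
            if np.any(labels == j):
                words[j] = features[labels == j].mean(axis=0)
    return words

def bow_descriptor(features, words):
    """L1-normalised histogram of visual-word occurrences for one image."""
    d = np.linalg.norm(features[:, None, :] - words[None, :, :], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(words)).astype(float)
    return hist / hist.sum()
```

A spatial pyramid @cite_18 would extend this by computing such histograms over increasingly fine image sub-regions and concatenating them.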
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_17" ], "mid": [ "2162915993", "2036718463", "1625255723" ], "abstract": [ "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "Based on keypoints extracted as salient image patches, an image can be described as a \"bag of visual words\" and this representation has been used in scene classification. The choice of dimension, selection, and weighting of visual words in this representation is crucial to the classification performance but has not been thoroughly studied in previous work. Given the analogy between this representation and the bag-of-words representation of text documents, we apply techniques used in text categorization, including term weighting, stop word removal, feature selection, to generate image representations that differ in the dimension, selection, and weighting of visual words. The impact of these representation choices to scene classification is studied through extensive experiments on the TRECVID and PASCAL collection. This study provides an empirical basis for designing visual-word representations that are likely to produce superior classification performance.", "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information." ] }
1601.08188
2951015274
Lipreading, i.e. speech recognition from visual-only recordings of a speaker's face, can be achieved with a processing pipeline based solely on neural networks, yielding significantly better accuracy than conventional methods. Feed-forward and recurrent neural network layers (namely Long Short-Term Memory; LSTM) are stacked to form a single structure which is trained by back-propagating error gradients through all the layers. The performance of such a stacked network was experimentally evaluated and compared to a standard Support Vector Machine classifier using conventional computer vision features (Eigenlips and Histograms of Oriented Gradients). The evaluation was performed on data from 19 speakers of the publicly available GRID corpus. With 51 different words to classify, we report a best word accuracy on held-out evaluation speakers of 79.6% using the end-to-end neural network-based solution (an 11.6% improvement over the best feature-based solution evaluated).
Lipreading has been used as a complementary modality for speech recognition from noisy audio data @cite_27 @cite_29 , as well as for purely visual speech recognition @cite_26 @cite_25 @cite_35 . The latter gives rise to a silent speech interface, which is defined as ``a system enabling speech communication to take place when an audible acoustic signal is unavailable'' @cite_5 . Silent Speech technology has a large number of applications: it allows persons with certain speech impairments (such as laryngectomees, whose voice box (larynx) has been removed) to communicate, as well as enabling confidential and unobtrusive communication in public places @cite_5 . Further uses of lipreading have been proposed, such as automatic speech extraction from surveillance videos and its interpretation for forensic purposes @cite_22 . Lipreading has also been augmented with images of the tongue and vocal tract @cite_19 @cite_34 @cite_12 . Furthermore, there are Silent Speech interfaces based on very different principles, like speech recognition from electromyography @cite_20 @cite_21 @cite_8 @cite_33 or (electro-)magnetic articulography @cite_31 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_22", "@cite_33", "@cite_8", "@cite_29", "@cite_21", "@cite_19", "@cite_27", "@cite_5", "@cite_31", "@cite_34", "@cite_20", "@cite_25", "@cite_12" ], "mid": [ "", "2106137268", "", "2060756806", "", "2157190406", "2168861131", "2145442746", "190036406", "", "2039640471", "2129160496", "2126584718", "2017280922", "" ], "abstract": [ "", "We have designed and implemented a lipreading system that recognizes isolated words using only color video of human lips (without acoustic data). The system performs video recognition using \"snakes\" to extract visual features of geometric space, Karhunen-Loeve transform (KLT) to extract principal components in the color eigenspace, and hidden Markov models (HMM's) to recognize the combined visual features sequences. With the visual information alone, we were able to achieve 94 accuracy for ten isolated words.", "", "", "", "We improve the performance of a hybrid connectionist speech recognition system by incorporating visual information about the corresponding lip movements. Specifically, we investigate the benefits of adding visual features in the presence of additive noise and crosstalk (cocktail party effect). Our study extends our previous experiments by using a new visual front end, and an alternative architecture for combining the visual and acoustic information. Furthermore, we have extended our recognizer to a multi-speaker, connected letters recognizer. Our results show a significant improvement for the combined architecture (acoustic and visual information) over just the acoustic system in the presence of additive noise and crosstalk.", "It has been determined that the myoelectric signals (MES) from muscles associated with speech that are obtained using surface electrodes can be used to recognize speech at approximately five times a priori. The results of ten experiments have been evaluated using a maximum-likelihood recognition scheme and have consistently yielded similar recognition accuracy among several subjects. Individual word recognition from a ten-word set has approached 60 . Parameters studied have included energy, magnitude, and trial deviation. The results suggest that accuracies at a level suitable for use in a vocal prosthesis will require the development of a hybrid recognition algorithm utilizing either statistical and heuristic word separation or neural nets.", "A machine learning technique is used to match reconstructed tongue contours in 30 frame per second ultrasound images to speaker vocal tract parameters obtained from a synchronized audio track. Speech synthesized using the learned parameters and noise as an activation function displays many of the time and frequency domain characteristics of the original audio, and, for isolated passages, is remarkably clear - although no articulators other than the tongue are included.", "", "", "Abstract Surgical voice restoration post-laryngectomy has a number of limitations and drawbacks. The present gold standard involves the use of a tracheo-oesophageal fistula (TOF) valve to divert air from the lungs into the throat, which vibrates, and from this, speech can be formed. Not all patients can use these valves and those who do are susceptible to complications associated with valve failure. Thus there is still a place for other voice restoration options. With advances in electronic miniaturization and portable computing power a computing-intensive solution has been investigated. Magnets were placed on the lips, teeth and tongue of a volunteer causing a change in the surrounding magnetic field when the individual mouthed words. These changes were detected by 6 dual axis magnetic sensors, which were incorporated into a pair of special glasses. The resulting signals were compared to training data recorded previously by means of a dynamic time warping algorithm using dynamic programming. When compared to a small vocabulary database, the patterns were found to be recognised with an accuracy of 97 for words and 94 for phonemes. On this basis we plan to develop a speech system for patients who have lost laryngeal function.", "The article compares two approaches to the description of ultrasound vocal tract images for application in a \"silent speech interface,\" one based on tongue contour modeling, and a second, global coding approach in which images are projected onto a feature space of Eigentongues. A curvature-based lip profile feature extraction method is also presented. Extracted visual features are input to a neural network which learns the relation between the vocal tract configuration and line spectrum frequencies (LSF) contained in a one-hour speech corpus. An examination of the quality of LSFs derived from the two approaches demonstrates that the Eigemongues approach has a more efficient implementation and provides superior results based on a normalized mean squared error criterion.", "A speech prosthesis has been developed based on the following idea. When a handicapped person such as a laryngectomee tries to speak in vain, the movements of the mouth, tongue, etc., are elicited. By detecting the movements, what he or she is trying to say can be determined. Then a speech synthesizer is driven to produce a voice of good quality.", "A recent trend in law enforcement has been the use of Forensic lip-readers. Criminal activities are often recorded on CCTV or other video gathering systems. Knowledge of what suspects are saying enriches the evidence gathered but lip-readers, by their own admission, are fallible so, based on long term studies of automated lip-reading, we are investigating the possibilities and limitations of applying this technique under realistic conditions. We have adopted a step-by-step approach and are developing a capability when prior video information is available for the suspect of interest. We use the terminology video-to-text (V2T) for this technique by analogy with speech-to-text (S2T) which also has applications in security and law-enforcement.", "" ] }
1601.07471
2343936705
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of nature of dynamics. The proposed framework has two main advantages over traditional approaches: a) representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as Lorenz and Rossler systems, where our feature representations (shape distribution) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework show stability for different time-series lengths, which is useful when the available number of samples are small variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
Human activity analysis has attracted the attention of many researchers, resulting in an extensive literature on the subject. Detailed reviews of approaches for modeling and recognizing human activities are given in @cite_42 @cite_17 . Since our present work concerns non-parametric dynamical-system analysis for action modeling, we restrict our discussion to related methods.
{ "cite_N": [ "@cite_42", "@cite_17" ], "mid": [ "1983705368", "2121899951" ], "abstract": [ "Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas.", "The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions." ] }
1601.07471
2343936705
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of dynamics. The proposed framework has two main advantages over traditional approaches: a) representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as Lorenz and Rossler systems, where our feature representations (shape distribution) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
Recently, researchers from various backgrounds have shown interest in the development of computational frameworks for quantification of movement, for possible applications in health monitoring and rehabilitation @cite_47 @cite_21 @cite_7 @cite_57 . Stroke, the most common neurological disorder, leaves millions disabled every year, many of whom are unable to undergo long-term therapy due to insufficient insurance coverage. Recent directions in rehabilitation research have been towards the development of portable systems for therapy treatment. Traditional quantitative scales such as the Fugl Meyer Test @cite_33 and the Wolf Motor Function Test (WMFT) @cite_40 have proven to be effective in evaluating movement quality. However, these approaches involve visual monitoring, which would greatly benefit from the development of an objective computational framework for movement quality assessment. The aim here is to develop standardized methods to describe the level of impairment across subjects. We show the utility of the proposed action modeling framework for quantifying the quality of reaching tasks using a single marker on the wrist, and obtain results comparable to a heavy marker-based setup ( @math markers placed on the arm, shoulder and torso @cite_47 ).
{ "cite_N": [ "@cite_33", "@cite_7", "@cite_21", "@cite_57", "@cite_40", "@cite_47" ], "mid": [ "1497431575", "2077312744", "2023203889", "1986808747", "2137456982", "2066118953" ], "abstract": [ "Abstract A system for evaluation of motor function, balance, some sensation qualities and joint function in hemiplegic patients is described in detail. The system applies a cumulative numerical score. A series of hemiplegic patients has been followed from within one week post-stroke and throughout one year. When initially nearly flaccid hemiparalysis prevails, the motor recovery, if any occur, follows a definable course. The findings in this study substantiate the validity of ontogenetic principles as applicable to the assessment of motor behaviour in hemiplegic patients, and foocus the importance of early therapeutic measures against contractures.", "", "Fields studying movement generation, including robotics, psychology, cognitive science, and neuroscience utilize concepts and tools related to the pervasiveness of variability in biological systems. The concept of variability and the measures for nonlinear dynamics used to evaluate this concept open new vistas for research in movement dysfunction of many types. This review describes innovations in the exploration of variability and their potential importance in understanding human movement. Far from being a source of error, evidence supports the presence of an optimal state of variability for healthy and functional movement. This variability has a particular organization and is characterized by a chaotic structure. Deviations from this state can lead to biological systems that are either overly rigid and robotic or noisy and unstable. Both situations result in systems that are less adaptable to perturbations, such as those associated with unhealthy pathological states or absence of skillfulness.", "In this paper, we propose a novel shape-theoretic framework for dynamical analysis of human movement from 3D data. 
The key idea we propose is the use of global descriptors of the shape of the dynamical attractor as a feature for modeling actions. We apply this approach to the novel application scenario of estimation of movement quality from a single-marker for future usage in home-based stroke rehabilitation. Using a dataset collected from 15 stroke survivors performing repetitive task therapy, we demonstrate that the proposed method outperforms traditional methods, such as kinematic analysis and use of chaotic invariants, in estimation of movement quality. In addition, we demonstrate that the proposed framework is sufficiently general for the application of action and gesture recognition as well. Our experimental results reflect improved action recognition results on two publicly available 3D human activity databases.", "Background and Purpose—The Wolf Motor Function Test (WMFT) is a new time-based method to evaluate upper extremity performance while providing insight into joint-specific and total limb movements. T...", "This paper presents a novel generalized computational framework for quantitative kinematic evaluation of movement in a rehabilitation clinic setting. The framework integrates clinical knowledge and computational data‐driven analysis together in a systematic manner. The framework provides three key benefits to rehabilitation: (a) the resulting continuous normalized measure allows the clinician to monitor movement quality on a fine scale and easily compare impairments across participants, (b) the framework reveals the effect of individual movement components on the composite movement performance helping the clinician decide the training foci, and (c) the evaluation runs in real‐time, which allows the clinician to constantly track a patient’s progress and make appropriate adaptations to the therapy protocol. 
The creation of such an evaluation is difficult because of the sparse amount of recorded clinical observations, the high dimensionality of movement and high variations in subject’s performance. We addres..." ] }
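The "reconstructed phase space" that the record above uses as the feature domain can be illustrated with a minimal Takens delay-embedding sketch in Python. The toy signal, embedding dimension and delay below are illustrative assumptions standing in for a single wrist-marker coordinate, not the paper's actual data or settings.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding: map a 1-D series to points
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]) in R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy repetitive "reaching" signal with variability, standing in for a
# single wrist-marker coordinate (hypothetical data, not the dataset's).
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.randn(t.size)

points = delay_embed(signal, dim=3, tau=5)
print(points.shape)  # (1990, 3)
```

Descriptors of attractor shape are then computed on the rows of `points` rather than on the raw time series.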
1601.07471
2343936705
This paper presents a shape-theoretic framework for dynamical analysis of nonlinear dynamical systems which appear frequently in several video-based inference tasks. Traditional approaches to dynamical modeling have included linear and nonlinear methods with their respective drawbacks. A novel approach we propose is the use of descriptors of the shape of the dynamical attractor as a feature representation of the nature of dynamics. The proposed framework has two main advantages over traditional approaches: a) representation of the dynamical system is derived directly from the observational data, without any inherent assumptions, and b) the proposed features show stability under different time-series lengths where traditional dynamical invariants fail. We illustrate our idea using nonlinear dynamical models such as Lorenz and Rossler systems, where our feature representations (shape distribution) support our hypothesis that the local shape of the reconstructed phase space can be used as a discriminative feature. Our experimental analyses on these models also indicate that the proposed framework shows stability for different time-series lengths, which is useful when the available number of samples is small or variable. The specific applications of interest in this paper are: 1) activity recognition using motion capture and RGBD sensors, 2) activity quality assessment for applications in stroke rehabilitation, and 3) dynamical scene classification. We provide experimental validation through action and gesture recognition experiments on motion capture and Kinect datasets. In all these scenarios, we show experimental evidence of the favorable properties of the proposed representation.
The focus of existing approaches for movement quality assessment has been on finding typical patterns in kinematics which differ between healthy and impaired subjects. While these approaches are successful in giving insight into understanding human movement, they fail to utilize the inherent dynamical nature of the movement. Rehabilitation therapies are composed of repetitive movements (e.g., reaching to a target) that are strongly periodic with inherent variability. Traditional methods have assumed that this variability arises from noise in the system. However, it is evident that variability is an integral part of repetitive movements due to the availability of multiple strategies for the movement. Also, it is believed that the variability produced in human movement is a result of nonlinear interactions and has a deterministic origin @cite_21 . Extensive research has been carried out to model this variability using nonlinear dynamical systems theory @cite_29 @cite_36 @cite_21 . In this paper, we utilize the action modeling framework for movement quality assessment using a single wrist marker.
{ "cite_N": [ "@cite_36", "@cite_29", "@cite_21" ], "mid": [ "2044151604", "1969511443", "2023203889" ], "abstract": [ "We analyse the dynamics of human gait with simple nonlinear time series analysis methods that are appropriate for undergraduate courses. We show that short continuous recordings of the human locomotory apparatus possess properties typical of deterministic chaotic systems. To facilitate interest and enable the reproduction of presented results, as well as to promote applications of nonlinear time series analysis to other experimental systems, we provide user-friendly programs for each implemented method. Thus, we provide new insights into the dynamics of human locomotion, and make an effort to ease the inclusion of nonlinear time series analysis methods into the curriculum at an early stage of the educational process.", "Characterizing locomotor dynamics is essential for understanding the neuromuscular control of locomotion. In particular, quantifying dynamic stability during walking is important for assessing people who have a greater risk of falling. However, traditional biomechanical methods of defining stability have not quantified the resistance of the neuromuscular system to perturbations, suggesting that more precise definitions are required. For the present study, average maximum finite-time Lyapunov exponents were estimated to quantify the local dynamic stability of human walking kinematics. Local scaling exponents, defined as the local slopes of the correlation sum curves, were also calculated to quantify the local scaling structure of each embedded time series. Comparisons were made between overground and motorized treadmill walking in young healthy subjects and between diabetic neuropathic (NP) patients and healthy controls (CO) during overground walking. 
A modification of the method of surrogate data was deve...", "Fields studying movement generation, including robotics, psychology, cognitive science, and neuroscience utilize concepts and tools related to the pervasiveness of variability in biological systems. The concept of variability and the measures for nonlinear dynamics used to evaluate this concept open new vistas for research in movement dysfunction of many types. This review describes innovations in the exploration of variability and their potential importance in understanding human movement. Far from being a source of error, evidence supports the presence of an optimal state of variability for healthy and functional movement. This variability has a particular organization and is characterized by a chaotic structure. Deviations from this state can lead to biological systems that are either overly rigid and robotic or noisy and unstable. Both situations result in systems that are less adaptable to perturbations, such as those associated with unhealthy pathological states or absence of skillfulness." ] }
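The "shape distribution" feature named in the abstract above can be sketched, under assumptions, as the classic D2 shape descriptor: a normalised histogram of Euclidean distances between randomly sampled pairs of phase-space points. The function name and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def shape_distribution(points, n_pairs=5000, bins=32, seed=0):
    """D2-style shape descriptor: normalised histogram of Euclidean
    distances between randomly sampled pairs of points. Applied to an
    embedded attractor it summarises the attractor's local shape."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins)
    return hist / hist.sum()  # probabilities, comparable across series

pts = np.random.default_rng(1).standard_normal((500, 3))  # toy trajectory
feat = shape_distribution(pts)
print(feat.shape, round(float(feat.sum()), 6))  # (32,) 1.0
```

Because the descriptor is a fixed-length probability vector, two movement repetitions of different durations yield directly comparable features, which is the stability property the abstract emphasises.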
1601.07519
2271040645
We remark that the combination of the works of Ben-Bassat-Brav-Bussi-Joyce and Alper-Hall-Rydh implies the conjectured local description of the moduli stacks of semi-Schur objects in the derived category of coherent sheaves on projective Calabi-Yau 3-folds. This result was assumed in the author's previous papers to apply wall-crossing formulas of DT type invariants in the derived category, e.g. DT PT correspondence, rationality, etc. We also show that the above result is applied to prove the higher rank version of DT PT correspondence and rationality.
A result similar to Theorem was announced by Behrend-Getzler @cite_3 . Recently, Jiang @cite_14 proved the Behrend function identities given in Theorem using the cyclic @math -algebra technique and the unpublished work by Behrend-Getzler @cite_3 . So far, there exist several articles in which higher rank analogues of DT theory or PT theory have been studied @cite_15 , @cite_26 , @cite_1 , @cite_33 , @cite_7 . In these articles, all the higher rank objects were of the form @math , which do not cover all of the stable sheaves, as we already mentioned. So our situation is much more general than the above previous articles. It is a natural problem to extend the results of Theorem and Theorem to the motivic DT invariants introduced by Kontsevich-Soibelman @cite_28 . There still exist some technical issues in this extension, e.g. the existence of orientation data, but the number of issues is decreasing due to the recent progress on the rigorous foundation of motivic DT theory (cf. @cite_27 , @cite_19 , @cite_41 , @cite_18 ).
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_33", "@cite_7", "@cite_28", "@cite_41", "@cite_1", "@cite_3", "@cite_19", "@cite_27", "@cite_15" ], "mid": [ "2231921011", "2180264879", "1990891135", "2964246903", "", "2154483904", "2285776297", "1658601324", "", "", "1945101555", "2963742856" ], "abstract": [ "We prove the motivic version of the Joyce-Song formula for the Behrend function identities proposed in Jiang2 . The main method we use is Nicaise's motivic integration for formal schemes and Cluckers-Loeser's motivic constructible functions. As an application we prove that there is a Poisson algebra homomorphism from the motivic Hall algebra of the abelian category of coherent sheaves on a Calabi-Yau threefold @math to the motivic quantum torus of @math , thus generalizing the integration map of Joyce-Song in JS and Bridgeland in Bridgeland10 to the motivic level. Such an integration map has applications in the wall crossing of motivic Donaldson-Thomas invariants.", "Let @math be a cyclic @math -algebra of dimension @math with finite dimensional cohomology only in dimensions one and two. By the transfer theorem there exists a cyclic @math -algebra structure on the cohomology @math . The inner product plus the higher products of the cyclic @math -algebra defines a superpotential function @math on @math . We associate an analytic Milnor fiber to the formal function @math and define the Euler characteristic of @math to be the Euler characteristic of the étale cohomology of the analytic Milnor fiber. In this paper we prove a Thom-Sebastiani type formula for the Euler characteristic of cyclic @math -algebras. As applications we prove the Joyce-Song formulas about the Behrend function identities for semi-Schur objects in the derived category of coherent sheaves over Calabi-Yau threefolds.
A motivic Thom-Sebastiani type formula and a conjectural motivic Joyce-Song formulas for the motivic Milnor fiber of cyclic @math -algebras are also discussed.", "We describe a correspondence between the Donaldson–Thomas invariants enumerating D0–D6 bound states on a Calabi–Yau 3-fold and certain Gromov–Witten invariants counting rational curves in a family of blowups of weighted projective planes. This is a variation on a correspondence found by Gross–Pandharipande, with D0–D6 bound states replacing representations of generalised Kronecker quivers. We build on a small part of the theories developed by Joyce–Song and Kontsevich–Soibelman for wall-crossing formulae and by Gross–Pandharipande–Siebert for factorisations in the tropical vertex group. Along the way we write down an explicit formula for the BPS state counts which arise up to rank 3 and prove their integrality. We also compare with previous “noncommutative DT invariants” computations in the physics literature.", "", "", "We define new invariants of 3d Calabi-Yau categories endowed with a stability structure. Intuitively, they count the number of semistable objects with fixed class in the K-theory of the category (\"number of BPS states with given charge\" in physics language). Formally, our motivic DT-invariants are elements of quantum tori over a version of the Grothendieck ring of varieties over the ground field. Via the quasi-classical limit \"as the motive of affine line approaches to 1\" we obtain numerical DT-invariants which are closely related to those introduced by Behrend. We study some properties of both motivic and numerical DT-invariants including the wall-crossing formulas and integrality. 
We discuss the relationship with the mathematical works (in the non-triangulated case) of Joyce, Bridgeland and Toledano-Laredo, as well as with works of physicists on Seiberg-Witten model (string junctions), classification of N=2 supersymmetric theories (Cecotti-Vafa) and structure of the moduli space of vector multiplets. Relating the theory of 3d Calabi-Yau categories with distinguished set of generators (called cluster collection) with the theory of quivers with potential we found the connection with cluster transformations and cluster varieties (both classical and quantum).", "The aim of the paper is twofold. Firstly, we give an axiomatic presentation of Donaldson-Thomas theory for categories of homological dimension at most one with potential. In particular, we provide rigorous proofs of all standard results concerning the integration map, wall-crossing, PT-DT correspondence, etc. following Kontsevich and Soibelman. We also show the equivalence of their approach and the one given by Joyce and Song. Secondly, we relate Donaldson-Thomas functions for such a category with arbitrary potential to those with zero potential under some mild conditions. As a result of this, we obtain a geometric interpretation of Donaldson-Thomas functions in all known realizations, i.e. mixed Hodge modules, perverse sheaves and constructible functions.", "We study higher rank Donaldson-Thomas invariants of a Calabi-Yau 3-fold using Joyce-Song's wall-crossing formula. We construct quivers whose counting invariants coincide with the Donaldson-Thomas invariants. As a corollary, we prove the integrality and a certain symmetry for the higher rank invariants.", "", "", "Let @math be a smooth scheme over an algebraically closed field @math of characteristic zero and @math a regular function, and write @math Crit @math , as a closed subscheme of @math . 
The motivic vanishing cycle @math is an element of the @math -equivariant motivic Grothendieck ring @math defined by Denef and Loeser math.AG 0006050 and Looijenga math.AG 0006220, and used in Kontsevich and Soibelman's theory of motivic Donaldson-Thomas invariants, arXiv:0811.2435. We prove three main results: (a) @math depends only on the third-order thickenings @math of @math . (b) If @math is another smooth scheme, @math is regular, @math Crit @math , and @math is an embedding with @math and @math an isomorphism, then @math equals @math \"twisted\" by a motive associated to a principal @math -bundle defined using @math , where now we work in a quotient ring @math of @math . (c) If @math is an \"oriented algebraic d-critical locus\" in the sense of Joyce arXiv:1304.4508, there is a natural motive @math , such that if @math is locally modelled on Crit @math , then @math is locally modelled on @math . Using results from arXiv:1305.6302, these imply the existence of natural motives on moduli schemes of coherent sheaves on a Calabi-Yau 3-fold equipped with \"orientation data\", as required in Kontsevich and Soibelman's motivic Donaldson-Thomas theory arXiv:0811.2435, and on intersections of oriented Lagrangians in an algebraic symplectic manifold. This paper is an analogue for motives of results on perverse sheaves of vanishing cycles proved in arXiv:1211.3259. We extend this paper to Artin stacks in arXiv:1312.0090.", "" ] }
1601.06919
2952276602
Although web crawlers have been around for twenty years by now, there is virtually no freely available, open-source crawling software that guarantees high throughput, overcomes the limits of single-machine systems and at the same time scales linearly with the amount of resources available. This paper aims at filling this gap, through the description of BUbiNG, our next-generation web crawler built upon the authors' experience with UbiCrawler [ 2004] and on the last ten years of research on the topic. BUbiNG is an open-source, fully distributed Java crawler; a single BUbiNG agent, using sizeable hardware, can crawl several thousand pages per second respecting strict politeness constraints, both host- and IP-based. Unlike existing open-source distributed crawlers that rely on batch techniques (like MapReduce), BUbiNG job distribution is based on modern high-speed protocols so as to achieve very high throughput.
Recently, a new generation of crawlers has been designed with the aim of downloading billions of pages, like @cite_1 . Nonetheless, none of them is freely available and open source: BUbiNG is the first open-source crawler designed to be fast, scalable and runnable on commodity hardware.
{ "cite_N": [ "@cite_1" ], "mid": [ "1977836056" ], "abstract": [ "This article shares our experience in designing a Web crawler that can download billions of pages using a single-server implementation and models its performance. We first show that current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly branching spam, legitimate multimillion-page blog sites, and infinite loops created by server-side scripts. We then offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In our recent experiment that lasted 41 days, IRLbot running on a single server successfully crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 mb s (1,789 pages s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the Web graph with 41 billion unique nodes." ] }
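The host-based politeness constraint described in the record above (never fetch from the same host more often than a fixed delay allows) can be sketched as a small priority-queue scheduler. This is an illustrative Python sketch only; BUbiNG itself is a Java system and its actual data structures differ.

```python
import heapq
import time

class PolitenessQueue:
    """Host-based politeness: a URL for a host is released only after
    `delay` seconds have passed since that host's previous fetch.
    Illustrative sketch; not BUbiNG's actual (Java) implementation."""

    def __init__(self, delay=1.0):
        self.delay = delay
        self.next_ok = {}   # host -> earliest time of next allowed fetch
        self.heap = []      # (release_time, host, url)

    def push(self, host, url):
        heapq.heappush(self.heap, (self.next_ok.get(host, 0.0), host, url))

    def pop(self, now=None):
        """Return a (host, url) whose politeness delay has expired,
        or None if every queued URL must still wait."""
        now = time.monotonic() if now is None else now
        while self.heap and self.heap[0][0] <= now:
            _, host, url = heapq.heappop(self.heap)
            gate = self.next_ok.get(host, 0.0)
            if gate > now:  # host was fetched since this URL was queued
                heapq.heappush(self.heap, (gate, host, url))
                continue
            self.next_ok[host] = now + self.delay
            return host, url
        return None

q = PolitenessQueue(delay=2.0)
q.push("example.org", "http://example.org/a")
q.push("example.org", "http://example.org/b")
print(q.pop(now=10.0))  # ('example.org', 'http://example.org/a')
print(q.pop(now=10.5))  # None: /b must wait until 12.0
```

A real crawler would key the same structure by IP address as well, which is what the "both host- and IP-based" constraint in the abstract refers to.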
1601.07014
2950906573
In this work we propose a novel approach to perform segmentation by leveraging the abstraction capabilities of convolutional neural networks (CNNs). Our method is based on Hough voting, a strategy that allows for fully automatic localisation and segmentation of the anatomies of interest. This approach not only uses the CNN classification outcomes, but also implements voting by exploiting the features produced by the deepest portion of the network. We show that this learning-based segmentation method is robust, multi-region, flexible and can be easily adapted to different modalities. In an attempt to show the capabilities and the behaviour of CNNs when they are applied to medical image analysis, we perform a systematic study of the performance of six different network architectures, conceived according to state-of-the-art criteria, in various situations. We evaluate the impact of both different amounts of training data and different data dimensionality (2D, 2.5D and 3D) on the final results. We show results on both MRI and transcranial US volumes depicting respectively 26 regions of the basal ganglia and the midbrain.
In this section we give an overview of existing approaches that employ CNNs to solve problems from both the computer vision and medical imaging domains. In the last few years CNNs became very popular tools among the computer vision community. Classification problems such as image categorisation @cite_36 @cite_35 , object detection @cite_21 and face recognition @cite_15 as well as regression problems such as human pose estimation @cite_10 , and depth prediction from RGB data @cite_14 have been addressed using CNNs and unprecedented results have been reported. In order to cope with the challenges present in natural images, such as scale changes, occlusions, deformations, different illumination settings and viewpoint changes, these methods needed to be trained on very large annotated datasets and required several weeks to be built even when powerful GPUs were employed. In medical imaging, however, it is difficult to obtain even a fraction of this amount of resources, both in terms of computational means and amount of annotated training data.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_36", "@cite_21", "@cite_15", "@cite_10" ], "mid": [ "", "2951234442", "", "2102605133", "1970456555", "2949941598" ], "abstract": [ "", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.", "", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. 
Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "In this paper we consider the problem of multi-view face detection. While there has been significant research on this problem, current state-of-the-art approaches for this task require annotation of facial landmarks, e.g. TSM [25], or annotation of face poses [28, 22]. They also require training dozens of models to fully capture faces in all orientations, e.g. 22 models in HeadHunter method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method that does not require pose landmark annotation and is able to detect faces in a wide range of orientations using a single model based on deep convolutional neural networks. The proposed method has minimal complexity; unlike other recent deep learning object detection methods [9], it does not require additional components such as segmentation, bounding-box regression, or SVM classifiers. Furthermore, we analyzed scores of the proposed face detector for faces in different orientations and found that 1) the proposed method is able to detect faces from different angles and can handle occlusion to some extent, 2) there seems to be a correlation between distribution of positive examples in the training set and scores of the proposed face detector. The latter suggests that the proposed method's performance can be further improved by using better sampling strategies and more sophisticated data augmentation techniques. 
Evaluations on popular face detection benchmark datasets show that our single-model face detector algorithm has similar or better performance compared to the previous methods, which are more complex and require annotations of either different poses or facial landmarks.", "Convolutional Neural Networks (ConvNets) have successfully contributed to improve the accuracy of regression-based methods for computer vision tasks such as human pose estimation, landmark localization, and object detection. The network optimization has been usually performed with L2 loss and without considering the impact of outliers on the training process, where an outlier in this context is defined by a sample estimation that lies at an abnormal distance from the other training sample estimations in the objective space. In this work, we propose a regression model with ConvNets that achieves robustness to such outliers by minimizing Tukey's biweight function, an M-estimator robust to outliers, as the loss function for the ConvNet. In addition to the robust loss, we introduce a coarse-to-fine model, which processes input images of progressively higher resolutions for improving the accuracy of the regressed values. In our experiments, we demonstrate faster convergence and better generalization of our robust loss function for the tasks of human pose estimation and age estimation from face images. We also show that the combination of the robust loss function with the coarse-to-fine model produces comparable or better results than current state-of-the-art approaches in four publicly available human pose estimation datasets." ] }
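The Hough-voting localisation step described in the record above can be sketched, under assumptions, as accumulating per-pixel centroid votes: each foreground pixel adds a vote at its position plus a predicted 2-D offset, and the accumulator argmax gives the structure location. In the paper the votes come from CNN features; here the offsets are supplied directly for illustration.

```python
import numpy as np

def hough_vote(mask, offsets):
    """Accumulate centroid votes: each foreground pixel (mask == 1) votes
    at (pixel position + predicted 2-D offset); the accumulator argmax
    localises the structure. Voting step only -- no CNN here."""
    h, w = mask.shape
    acc = np.zeros((h, w))
    for y, x in zip(*np.nonzero(mask)):
        vy, vx = int(y + offsets[y, x, 0]), int(x + offsets[y, x, 1])
        if 0 <= vy < h and 0 <= vx < w:
            acc[vy, vx] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return acc, (int(peak[0]), int(peak[1]))

# Toy example: a 5x5 blob of pixels all pointing at centre (8, 8).
mask = np.zeros((16, 16), dtype=int)
offsets = np.zeros((16, 16, 2))
for y in range(6, 11):
    for x in range(6, 11):
        mask[y, x] = 1
        offsets[y, x] = (8 - y, 8 - x)  # exact offset to the centre

acc, centre = hough_vote(mask, offsets)
print(centre)  # (8, 8)
```

Because many pixels vote independently, the peak stays stable even when some individual offsets are noisy, which is what makes voting-based localisation robust.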
1601.07014
2950906573
In this work we propose a novel approach to perform segmentation by leveraging the abstraction capabilities of convolutional neural networks (CNNs). Our method is based on Hough voting, a strategy that allows for fully automatic localisation and segmentation of the anatomies of interest. This approach not only uses the CNN classification outcomes, but also implements voting by exploiting the features produced by the deepest portion of the network. We show that this learning-based segmentation method is robust, multi-region, flexible and can be easily adapted to different modalities. In an attempt to show the capabilities and the behaviour of CNNs when they are applied to medical image analysis, we perform a systematic study of the performance of six different network architectures, conceived according to state-of-the-art criteria, in various situations. We evaluate the impact of both different amounts of training data and different data dimensionality (2D, 2.5D and 3D) on the final results. We show results on both MRI and transcranial US volumes depicting respectively 26 regions of the basal ganglia and the midbrain.
Another important issue in CNN-related research is the search for an optimal CNN network architecture: we have found very little literature that addresses this issue systematically. Although several network architectures were analysed in @cite_39 @cite_5 , we have found only one study on ``very deep CNNs'' @cite_8 , in which the number of convolutional layers was varied systematically (8-16) while keeping kernel sizes fixed. The study concluded that small kernel sizes in combination with deep architectures can outperform CNNs with few layers and large kernel sizes.
{ "cite_N": [ "@cite_5", "@cite_8", "@cite_39" ], "mid": [ "", "1686810756", "2167510172" ], "abstract": [ "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or non-membrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. 
Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer." ] }
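The finding of the ``very deep CNN'' study above, that stacking small kernels with deep architectures beats few layers with large kernels, can be illustrated with a quick parameter count: two stacked 3x3 convolutions cover the same receptive field as one 5x5 convolution with fewer weights (and an extra non-linearity in between). A minimal sketch, where the channel count C = 64 is a hypothetical example value, not taken from the cited work:

```python
# Illustrative sketch (not from the cited papers): parameter count and
# receptive field of stacked small kernels vs. one large kernel.

def conv_params(kernel, channels):
    """Weights in a conv layer mapping `channels` -> `channels` feature maps."""
    return kernel * kernel * channels * channels

def receptive_field(kernels):
    """Receptive field of a stack of stride-1 conv layers."""
    rf = 1
    for k in kernels:
        rf += k - 1
    return rf

C = 64
# One 5x5 layer and two stacked 3x3 layers see the same 5x5 region...
assert receptive_field([5]) == receptive_field([3, 3]) == 5
# ...but the stack uses fewer parameters.
print(conv_params(5, C))                      # 102400
print(conv_params(3, C) + conv_params(3, C))  # 73728
```

The same arithmetic scales: three 3x3 layers match a 7x7 receptive field at less than half the weight count, which is the core argument for depth in the cited study.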
1601.07124
2273062358
This paper aims to introduce a new algorithm for automatic speech-to-text summarization based on statistical divergences of probabilities and graphs. The input is text from noisy speech conversations, and the output is a compact text summary. Our results on the pilot task of the CCCS Multiling 2015 French corpus are very encouraging
@cite_1 developed a method to generate automatic summaries by identifying and synthesizing similar elements in a cluster of documents. This method creates the summary based on similarity between the sentences and the topic. @cite_24 described an approach to fuse sentences through a text-to-text generation technique, in order to synthesize repeated information from multiple documents. This method uses syntactic alignment of sentences to identify common information. After the identification step, the sentences are processed and a new text is generated with the same content.
{ "cite_N": [ "@cite_24", "@cite_1" ], "mid": [ "2012561700", "2118733980" ], "abstract": [ "A system that can produce informative summaries, highlighting common information found in many online documents, will help Web users to pinpoint information that they need without extensive reading. In this article, we introduce sentence fusion, a novel text-to-text generation technique for synthesizing common information across documents. Sentence fusion involves bottom-up local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence. Sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources.", "We present a method to automatically generate a concise summary by identifying and synthesizing similar elements across related text from a set of multiple documents. Our approach is unique in its usage of language generation to reformulate the wording of the summary." ] }
1601.07124
2273062358
This paper aims to introduce a new algorithm for automatic speech-to-text summarization based on statistical divergences of probabilities and graphs. The input is text from noisy speech conversations, and the output is a compact text summary. Our results on the pilot task of the CCCS Multiling 2015 French corpus are very encouraging
Another method to obtain relevant sentences uses compression, as reported in @cite_6 . Pitler compares approaches based on syntactic trees, sentences and discourse. @cite_11 describes a multi-sentence compression method using a word-based graph. Summarization by extraction does not reach the quality of summaries produced by abstraction, because it uses surface methods based on statistical calculations to assess sentence relevance. However, extraction is general and does not require deep analysis of the language @cite_24 @cite_23 .
{ "cite_N": [ "@cite_24", "@cite_23", "@cite_6", "@cite_11" ], "mid": [ "2012561700", "", "21810565", "2160017075" ], "abstract": [ "A system that can produce informative summaries, highlighting common information found in many online documents, will help Web users to pinpoint information that they need without extensive reading. In this article, we introduce sentence fusion, a novel text-to-text generation technique for synthesizing common information across documents. Sentence fusion involves bottom-up local multisequence alignment to identify phrases conveying similar information and statistical generation to combine common phrases into a sentence. Sentence fusion moves the summarization field from the use of purely extractive methods to the generation of abstracts that contain sentences not found in any of the input documents and can synthesize information across sources.", "", "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area.", "We consider the task of summarizing a cluster of related sentences with a short sentence which we call multi-sentence compression and present a simple approach based on shortest paths in word graphs. The advantage and the novelty of the proposed method is that it is syntax-lean and requires little more than a tokenizer and a tagger. Despite its simplicity, it is capable of generating grammatical and informative summaries as our experiments with English and Spanish data demonstrate." ] }
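The word-graph idea behind the multi-sentence compression method cited above can be sketched as a shortest-path search over a word adjacency graph built from the input sentences. This toy version uses inverse edge-frequency weights and merges identical surface words into one node; it omits the original's POS tagging and minimum-length constraint (all simplifications of mine), and it also shows why such constraints are needed, since the unconstrained shortest path can be very short:

```python
# Toy multi-sentence compression via shortest path in a word graph.
# Frequent word transitions get cheap edges, so the shortest
# <s> -> </s> path favours content shared across sentences.
from collections import defaultdict
import heapq

def compress(sentences):
    # Build a word adjacency graph over all sentences.
    freq = defaultdict(int)
    edges = defaultdict(set)
    for s in sentences:
        tokens = ["<s>"] + s.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            freq[(a, b)] += 1
            edges[a].add(b)
    # Dijkstra from <s> to </s>; edge cost is 1 / frequency.
    dist = {"<s>": 0.0}
    prev = {}
    heap = [(0.0, "<s>")]
    while heap:
        d, u = heapq.heappop(heap)
        if u == "</s>":
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in edges[u]:
            nd = d + 1.0 / freq[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Recover the path back from </s>.
    path, node = [], "</s>"
    while node != "<s>":
        path.append(node)
        node = prev[node]
    return " ".join(reversed(path[1:]))  # drop </s>, restore word order

print(compress([
    "the committee approved the new budget",
    "the committee approved the budget on friday",
]))  # prints "the budget"
```

The degenerate two-word output is exactly the failure mode the original method guards against with a minimum path length, which is why the constraint matters in practice.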
1601.07140
2253806798
This paper describes the COCO-Text dataset. In recent years large-scale datasets like SUN and Imagenet drove the advancement of scene understanding and object recognition. The goal of COCO-Text is to advance state-of-the-art in text detection and recognition in natural images. The dataset is based on the MS COCO dataset, which contains images of complex everyday scenes. The images were not collected with text in mind and thus contain a broad variety of text instances. To reflect the diversity of text in natural scenes, we annotate text with (a) location in terms of a bounding box, (b) fine-grained classification into machine printed text and handwritten text, (c) classification into legible and illegible text, (d) script of the text and (e) transcriptions of legible text. The dataset contains over 173k text annotations in over 63k images. We provide a statistical analysis of the accuracy of our annotations. In addition, we present an analysis of three leading state-of-the-art photo Optical Character Recognition (OCR) approaches on our dataset. While scene text detection and recognition enjoys strong advances in recent years, we identify significant shortcomings motivating future work.
In recent years large-scale datasets like SUN @cite_5 , Imagenet @cite_4 and MS COCO @cite_13 drove the advancement of several fields in computer vision. The presented dataset is based upon MS COCO and its image captions extension @cite_11 . We utilize the rich annotations from these datasets to optimize annotators' task allocations.
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_4", "@cite_11" ], "mid": [ "", "2017814585", "2108598243", "1889081078" ], "abstract": [ "", "Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.", "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. 
Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided." ] }
1601.07140
2253806798
This paper describes the COCO-Text dataset. In recent years large-scale datasets like SUN and Imagenet drove the advancement of scene understanding and object recognition. The goal of COCO-Text is to advance state-of-the-art in text detection and recognition in natural images. The dataset is based on the MS COCO dataset, which contains images of complex everyday scenes. The images were not collected with text in mind and thus contain a broad variety of text instances. To reflect the diversity of text in natural scenes, we annotate text with (a) location in terms of a bounding box, (b) fine-grained classification into machine printed text and handwritten text, (c) classification into legible and illegible text, (d) script of the text and (e) transcriptions of legible text. The dataset contains over 173k text annotations in over 63k images. We provide a statistical analysis of the accuracy of our annotations. In addition, we present an analysis of three leading state-of-the-art photo Optical Character Recognition (OCR) approaches on our dataset. While scene text detection and recognition enjoys strong advances in recent years, we identify significant shortcomings motivating future work.
Scene text detection and recognition approaches generally comprise two parts: detecting proposal text regions in the image, and recognizing the words in those regions. Current work in the area includes the approach of @cite_6 , where three different detectors are first combined to identify text regions and characters are subsequently classified with a fully connected neural network taking HOG features as input, supported by an n-gram-based language model. Further, Neumann and Matas @cite_15 first identify Extremal Regions, group them into words and then select the most probable character segmentation. Furthermore, @cite_16 use Convolutional Neural Networks (CNNs) for both text region detection and character classification.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_6" ], "mid": [ "2061802763", "1922126009", "2122221966" ], "abstract": [ "An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.", "In this work we present an end-to-end system for text spotting--localising and recognising text in natural scene images--and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. 
For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.", "We describe Photo OCR, a system for text extraction from images. Our particular focus is reliable text extraction from smartphone imagery, with the goal of text recognition as a user input modality similar to speech recognition. Commercially available OCR performs poorly on this task. Recent progress in machine learning has substantially improved isolated character classification, we build on this progress by demonstrating a complete OCR system using these techniques. We also incorporate modern data center-scale distributed language modelling. Our approach is capable of recognizing text in a variety of challenging imaging conditions where traditional OCR systems fail, notably in the presence of substantial blur, low resolution, low contrast, high image noise and other distortions. It also operates with low latency, mean processing time is 600 ms per image. We evaluate our system on public benchmark datasets for text extraction and outperform all previously reported results, more than halving the error rate on multiple benchmarks. The system is currently in use in many applications at Google, and is available as a user input modality in Google Translate for Android." ] }
1601.07140
2253806798
This paper describes the COCO-Text dataset. In recent years large-scale datasets like SUN and Imagenet drove the advancement of scene understanding and object recognition. The goal of COCO-Text is to advance state-of-the-art in text detection and recognition in natural images. The dataset is based on the MS COCO dataset, which contains images of complex everyday scenes. The images were not collected with text in mind and thus contain a broad variety of text instances. To reflect the diversity of text in natural scenes, we annotate text with (a) location in terms of a bounding box, (b) fine-grained classification into machine printed text and handwritten text, (c) classification into legible and illegible text, (d) script of the text and (e) transcriptions of legible text. The dataset contains over 173k text annotations in over 63k images. We provide a statistical analysis of the accuracy of our annotations. In addition, we present an analysis of three leading state-of-the-art photo Optical Character Recognition (OCR) approaches on our dataset. While scene text detection and recognition enjoys strong advances in recent years, we identify significant shortcomings motivating future work.
Another related stream of research focuses on repeated labeling in the face of noisy labels, as well as on combining human workers with machine classifiers. In early work, @cite_20 looked into improving label quality by taking worker accuracy into account. Further, Wilber et al. @cite_17 investigate the use of grid questions, where workers select answers from a grid of images to take advantage of the parallelism of human perception. We also use similar grid interfaces, but our approach differs in that we do not require a specific number of responses, because we perform binary classification whereas they do relative comparisons. Closer to our work, @cite_14 propose a framework that combines object detectors with human annotators to annotate a dataset. While we also combine results from object detectors and human annotators, our work differs in that we do not have access to the detectors during the annotation process, but only to initial detections as input. In this area, the work closest to ours is the approach by @cite_9 , which proposes strategies to optimize the task allocation to human workers under constrained budgets. We adapt their approach in that we increase annotation redundancy only for the supposedly most difficult annotations.
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_20", "@cite_17" ], "mid": [ "2273307612", "1908985308", "2051105555", "1878518397" ], "abstract": [ "When crowdsourcing systems are used in combination with machine inference systems in the real world, they benefit the most when the machine system is deeply integrated with the crowd workers. However, if researchers wish to integrate the crowd with \"off-the-shelf\" machine classifiers, this deep integration is not always possible. This work explores two strategies to increase accuracy and decrease cost under this setting. First, we show that reordering tasks presented to the human can create a significant accuracy improvement. Further, we show that greedily choosing parameters to maximize machine accuracy is sub-optimal, and joint optimization of the combined system improves performance.", "The long-standing goal of localizing every object in an image remains elusive. Manually annotating objects is quite expensive despite crowd engineering innovations. Current state-of-the-art automatic object detectors can accurately detect at most a few objects per image. This paper brings together the latest advancements in object detection and in crowd engineering into a principled framework for accurately and efficiently localizing objects in images. The input to the system is an image to annotate and a set of annotation constraints: desired precision, utility and or human cost of the labeling. The output is a set of object annotations, informed by human feedback and computer vision. Our model seamlessly integrates multiple computer vision models with multiple sources of human input in a Markov Decision Process. We empirically validate the effectiveness of our human-in-the-loop labeling approach on the ILSVRC2014 object detection dataset.", "This paper presents and tests a formal mathematical model for the analysis of informant responses to systematic interview questions. 
We assume a situation in which the ethnographer does not know how much each informant knows about the cultural domain under consideration nor the answers to the questions. The model simultaneously provides an estimate of the cultural competence or knowledge of each informant and an estimate of the correct answer to each question asked of the informant. The model currently handles true-false, multiple-choice, and fill-in-the-blank type question formats. In familiar cultural domains the model produces good results from as few as four informants. The paper includes a table showing the number of informants needed to provide stated levels of confidence given the mean level of knowledge among the informants. Implications are discussed.", "Similarity comparisons of the form \"Is object a more similar to b than to c?\" are useful for computer vision and machine learning applications. Unfortunately, an embedding of @math points is specified by @math triplets, making collecting every triplet an expensive task. In noticing this difficulty, other researchers have investigated more intelligent triplet sampling techniques, but they do not study their effectiveness or their potential drawbacks. Although it is important to reduce the number of collected triplets, it is also important to understand how best to display a triplet collection task to a user. In this work we explore an alternative display for collecting triplets and analyze the monetary cost and speed of the display. We propose best practices for creating cost effective human intelligence tasks for collecting triplets. We show that rather than changing the sampling algorithm, simple changes to the crowdsourcing UI can lead to much higher quality embeddings. We also provide a dataset as well as the labels collected from crowd workers." ] }
1601.07267
2408080353
We show that evolutionarily stable states in general (nonlinear) population games (which can be viewed as continuous vector fields constrained on a polytope) are asymptotically stable under a multiplicative weights dynamic (under appropriate choices of a parameter called the learning rate or step size, which we demonstrate to be crucial for achieving convergence, as otherwise even chaotic behavior may manifest). Our result implies that evolutionary theories based on multiplicative weights are compatible with (and, in principle, more general than) those based on the notion of evolutionary stability. However, our result further establishes multiplicative weights as a nonlinear programming primitive (on par with standard nonlinear programming methods) since various nonlinear optimization problems, such as finding Nash Wardrop equilibria in nonatomic congestion games, which are well-known to be equipped with a convex potential function, and finding strict local maxima of quadratic programming problems, are special cases of the problem of computing evolutionarily stable states in nonlinear population games.
Discrete evolution rules related to multiplicative weights (and online learning primitives) have also been considered, for example, by @cite_3 ; however, such evolution rules are customized by design to achieve good performance in selfish routing. Our analysis demonstrates that multiplicative weights may well serve as a primitive of adaptation in a wide variety of environments (an argument that intuitively corresponds with the success that multiplicative weights-based boosting algorithms in machine learning theory, such as AdaBoost, have met in practical applications).
{ "cite_N": [ "@cite_3" ], "mid": [ "1978335532" ], "abstract": [ "We study the question of whether a large population of agents in a traffic network is able to converge to an equilibrium quickly. To that end, we consider a round-based variant of the Wardrop model. Every agent is allowed to reroute its traffic once in a while with the aim of finding a path with minimal latency. As a first result we find that using a replication policy which allows agents to imitate others gives rise to a bicriterial approximate equilibrium very quickly. In particular, the time bound depends logarithmically on the ratio between minimum and maximum latency but is otherwise independent of the network size. In the single-commodity case, this bicriteria approximate equilibrium has an intuitive interpretation as a state in which almost all agents are almost happy. This kind of approximate equilibrium, however, is transient. In order to reach a global approximation, we need to add an exploration component which enables the agents to explore the strategy space independently of the other agents. Although it can be shown that, when used exclusively, exploration policies imply an exponential lower bound, applying exploration carefully allows the population to approximate the global Wardrop equilibrium in polynomial time. Since the distributed and concurrent fashion of our policies bears the risk of oscillating behavior, we must take into account the steepness of the latency functions. We show that the relevant parameter is elasticity, a parameter closely related to the polynomial degree. This improves significantly over earlier results which depend on the absolute slope and therefore have a pseudopolynomial flavor." ] }
1601.07340
2259391824
Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input–multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights.
Hybrid precoding is a newly-emerged technique in mmWave MIMO systems @cite_36 @cite_18 @cite_2 @cite_38 @cite_22 . So far, the main efforts have been on the fully-connected structure @cite_12 @cite_20 @cite_8 @cite_29 @cite_24 @cite_32 @cite_44 @cite_9 @cite_50 . Orthogonal matching pursuit (OMP) is the most widely used algorithm, and it often offers reasonably good performance. This algorithm requires the columns of the analog precoding matrix to be picked from certain candidate vectors, such as the array response vectors of the channel @cite_12 @cite_20 @cite_8 , or discrete Fourier transform (DFT) beamformers @cite_29 @cite_24 . Hence, the OMP-based hybrid precoder design can be viewed as a sparsity-constrained matrix reconstruction problem. Although the design problem is greatly simplified in this way, restricting the space of feasible analog precoding solutions inevitably causes some performance loss. Additionally, extra overhead is incurred for acquiring the array response vectors in advance. More recent attention has mainly focused on reducing the computational complexity of the OMP algorithm @cite_20 @cite_32 , e.g., by reusing the matrix inversion result in each iteration.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_8", "@cite_36", "@cite_29", "@cite_9", "@cite_32", "@cite_24", "@cite_44", "@cite_50", "@cite_2", "@cite_20", "@cite_12" ], "mid": [ "2045658304", "2103851250", "2138020668", "1992463666", "2104074482", "2015959396", "2041287972", "1499056928", "2045840914", "1975981182", "2242330824", "2111953900", "2051388965", "2053521124" ], "abstract": [ "With the formidable growth of various booming wireless communication services that require ever increasing data throughputs, the conventional microwave band below 10 GHz, which is currently used by almost all mobile communication systems, is going to reach its saturation point within just a few years. Therefore, the attention of radio system designers has been pushed toward ever higher segments of the frequency spectrum in a quest for increased capacity. In this article we investigate the feasibility, advantages, and challenges of future wireless communications over the Eband frequencies. We start with a brief review of the history of the E-band spectrum and its light licensing policy as well as benefits challenges. Then we introduce the propagation characteristics of E-band signals, based on which some potential fixed and mobile applications at the E-band are investigated. In particular, we analyze the achievability of a nontrivial multiplexing gain in fixed point-to-point E-band links, and propose an E-band mobile broadband (EMB) system as a candidate for the next generation mobile communication networks. The channelization and frame structure of the EMB system are discussed in detail.", "The use of mmWave frequencies for wireless communications offers channel bandwidths far greater than previously available, while enabling dozens or even hundreds of antenna elements to be used at the user equipment, base stations, and access points. 
To date, MIMO techniques, such as spatial multiplexing, beamforming, and diversity, have been widely deployed in lower-frequency systems such as IEEE 802.11n ac (wireless local area networks) and 3GPP LTE 4G cellphone standards. Given the tiny wavelengths associated with mmWave, coupled with differences in the propagation and antennas used, it is unclear how well spatial multiplexing with multiple streams will be suited to future mmWave mobile communications. This tutorial explores the fundamental issues involved in selecting the best communications approaches for mmWave frequencies, and provides insights, challenges, and appropriate uses of each MIMO technique based on early knowledge of the mmWave propagation environment.", "Millimeter-wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multielement antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments. The conclusions are extremely encouraging; measurements in New York City at 28 and 73 GHz demonstrate that, even in an urban canyon environment, significant non-line-of-sight (NLOS) outdoor, street-level coverage is possible up to approximately 200 m from a potential low-power microcell or picocell base station. In addition, based on statistical channel models from these measurements, it is shown that mmW systems can offer more than an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks at current cell densities. Cellular systems, however, will need to be significantly redesigned to fully achieve these gains. 
Specifically, the requirement of highly directional and adaptive transmissions, directional isolation between links, and significant possibilities of outage have strong implications on multiple access, channel structure, synchronization, and receiver design. To address these challenges, the paper discusses how various technologies including adaptive beamforming, multihop relaying, heterogeneous network architectures, and carrier aggregation can be leveraged in the mmW context.", "Due to the high cost and power consumption of radio frequency (RF) chains, millimeter wave (mm-wave) communica- tion systems equipped with large antenna arrays typically employ less RF chains than the antenna elements. This leads to the use of a hybrid MIMO processor consisting of a RF beamformer and a baseband MIMO processor in mm-wave communications. In this paper, we consider amplify-and-forward (AF) relay-assisted mm- wave systems with the hybrid MIMO processors over frequency- selective channels. We develop an iterative algorithm for jointly designing the receive transmit (Rx Tx) RF baseband processors of the relay based on the orthogonal matching pursuit (OMP) algorithm for sparse approximation, while assuming orthogonal frequency division multiplexing (OFDM) signaling. Simulation results show that the proposed method outperforms the conven- tional method that designs the baseband processor after steering the RF beams.", "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. 
In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.", "We consider the design of a hybrid multiple-input multiple-output (MIMO) processor consisting of a radio frequency (RF) beamformer and a baseband MIMO processor for millimeter-wave communications over multiuser interference channels. Sparse approximation problems are formulated to design hybrid MIMO processors approximating the minimum-mean-square-error transmit receive processors in MIMO interference channels. They are solved by orthogonal-matching-pursuit-based algorithms that successively select RF beamforming vectors from a set of candidate vectors and optimize the corresponding baseband processor in the least squares sense. It is shown that various beamformers can be designed by considering different types of candidate vector sets. Simulation results demonstrate the advantage of the proposed design over the conventional method that designs the baseband processor after steering the RF beams.", "Massive multiple-input multiple-output (MIMO) is envisioned to offer considerable capacity improvement, but at the cost of high complexity of the hardware. 
In this paper, we propose a low-complexity hybrid precoding scheme to approach the performance of the traditional baseband zero-forcing (ZF) precoding (referred to as full-complexity ZF), which is considered a virtually optimal linear precoding scheme in massive MIMO systems. The proposed hybrid precoding scheme, named phased-ZF (PZF), essentially applies phase-only control at the RF domain and then performs a low-dimensional baseband ZF precoding based on the effective channel seen from baseband. Heavily quantized RF phase control up to 2 bits of precision is also considered and shown to incur very limited degradation. The proposed scheme is simulated in both ideal Rayleigh fading channels and sparsely scattered millimeter wave (mmWave) channels, both achieving highly desirable performance.", "Millimeter wave (mmWave) multiple-input multipleoutput (MIMO) communication with large antenna arrays has been proposed to enable gigabit per second communication for next generation cellular systems and local area networks. A key difference relative to lower frequency solutions is that in mmWave systems, precoding combining can not be performed entirely at digital baseband, due to the high cost and power consumption of some components of the radio frequency (RF) chain. In this paper we develop a low complexity algorithm for finding hybrid precoders that split the precoding combining process between the analog and digital domains. Our approach exploits sparsity in the received signal to formulate the design of the precoder combiners as a compressed sensing optimization problem. We use the properties of the matrix containing the array response vectors to find first an orthonormal analog precoder, since sparse approximation algorithms applied to orthonormal sensing matrices are based on simple computations of correlations. Then, we propose to perform a local search to refine the analog precoder and compute the baseband precoder. 
We present numerical results demonstrate substantial improvements in complexity while maintaining good spectral efficiency.", "Millimeter-wave wireless systems are emerging as a promising technology for meeting the exploding capacity requirements of wireless communication networks. Besides large bandwidths, small wavelengths at mm-wave lead to a high-dimensional spatial signal space, that can be exploited for significant capacity gains through high-dimensional multiple-input multiple-output (MIMO) techniques. In conventional MIMO approaches, optimal performance requires prohibitively high transceiver complexity. By combining the concept of beamspace MIMO communication with a hybrid analog-digital transceiver, continuous aperture phased (CAP) MIMO achieves near-optimal performance with dramatically lower complexity. This paper presents a framework for physically-accurate computational modeling and analysis of CAP-MIMO, and reports measurement results on a DLA-based prototype for multimode line-of-sight communication. The model, based on a critically sampled system representation, is used to demonstrate the performance gains of CAP-MIMO over state-of-the-art designs at mm-wave. For example, a CAP-MIMO system can achieve a spectral efficiency of 10-20 bits s Hz with a 17-31 dB power advantage over state-of-the-art, corresponding to a data rate of 10-200 Gbps with 1-10 GHz system bandwidth. The model is refined to analyze critical sources of power loss in an actual multimode system. The prototype-based measurement results closely follow the theoretical predictions, validating CAP-MIMO theory, and illustrating the utility of the model.", "", "In order to support high-data-rate wireless communication links, millimeter wave (mmWave) systems need to overcome considerable propagation attenuation. In mmWave systems, the small wavelength enables pre-processing exploiting large antenna arrays to provide the required gain. 
Generally, in traditional microwave systems, pre-processing is done at the digital baseband. However, the cost and power consumption of a radio frequency (RF) chain, which carries out translation between RF and digital baseband, is too high. It is impossible to afford one for each antenna element. This hardware limitation places additional constraints on pre-processing design. In this paper, we use hybrid spatial processing architecture with a lower number of RF chains than the antenna elements. We propose a joint design of digital baseband pre-processing and post-processing based on a weighted minimum mean-squared error (MMSE) criterion subject to the transmit power constraint. Then, the optimization in the RF domain is specified into three criteria according to various error weights. For ease of hardware implementation, we develop a lower complexity transceiver in which pre-processing in RF domain after upconversion is implemented merely using analog phase shifters. Finally, we evaluate our proposed scheme by means of simulation.", "Millimeter wave (mmWave) cellular systems will enable gigabit-per-second data rates thanks to the large bandwidth available at mmWave frequencies. To realize sufficient link margin, mmWave systems will employ directional beamforming with large antenna arrays at both the transmitter and receiver. Due to the high cost and power consumption of gigasample mixed-signal devices, mmWave precoding will likely be divided among the analog and digital domains. The large number of antennas and the presence of analog beamforming requires the development of mmWave-specific channel estimation and precoding algorithms. This paper develops an adaptive algorithm to estimate the mmWave channel parameters that exploits the poor scattering nature of the channel. To enable the efficient operation of this algorithm, a novel hierarchical multi-resolution codebook is designed to construct training beamforming vectors with different beamwidths. 
For single-path channels, an upper bound on the estimation error probability using the proposed algorithm is derived, and some insights into the efficient allocation of the training power among the adaptive stages of the algorithm are obtained. The adaptive channel estimation algorithm is then extended to the multi-path case relying on the sparse nature of the channel. Using the estimated channel, this paper proposes a new hybrid analog digital precoding algorithm that overcomes the hardware constraints on the analog-only beamforming, and approaches the performance of digital solutions. Simulation results show that the proposed low-complexity channel estimation algorithm achieves comparable precoding gains compared to exhaustive channel training algorithms. The results illustrate that the proposed channel estimation and precoding algorithms can approach the coverage probability achieved by perfect channel knowledge even in the presence of interference.", "A millimeter wave (mm-wave) communication system provides multi-Gb s data rates in short-distance transmission. Because millimeter waves have short wavelength, transceivers can be composed of large antenna arrays to alleviate severe signal attenuation. Furthermore, the link performance can be improved by adopting precoding technology in multiple data stream transmission. However, the complexity of radio frequency (RF) chains increases when large antenna arrays are used in mm-wave systems. To reduce the hardware cost, the precoding circuit can be jointly designed in both analog and digital domains to reduce the required number of RF chains. This paper proposes a new method of building the joint RF and baseband precoder that reduces the computation complexity of the original precoder reconstruction algorithm and enables highly parallel hardware architecture. Moreover, the proposed precoder reconstruction algorithm was designed and implemented using TSMC 90-nm UTM CMOS technology. 
The proposed precoder reconstruction processor supports the transmissions of one to four data streams for 8 × 8 mm-wave multiple-input multiple-output systems. The operating frequency of this chip was 167 MHz, and the power consumption was 243.2 mW when the supply voltage was 1 V. The core area of the postlayout result was about 3.94 mm 2 . The proposed processor achieved 4, 4.9, 6.7, and 6.7 M channel matrices per second in four-, three-, two-, and one-stream modes, respectively.", "Millimeter wave (mmWave) signals experience orders-of-magnitude more pathloss than the microwave signals currently used in most wireless applications and all cellular systems. MmWave systems must therefore leverage large antenna arrays, made possible by the decrease in wavelength, to combat pathloss with beamforming gain. Beamforming with multiple data streams, known as precoding, can be used to further improve mmWave spectral efficiency. Both beamforming and precoding are done digitally at baseband in traditional multi-antenna systems. The high cost and power consumption of mixed-signal devices in mmWave systems, however, make analog processing in the RF domain more attractive. This hardware limitation restricts the feasible set of precoders and combiners that can be applied by practical mmWave transceivers. In this paper, we consider transmit precoding and receiver combining in mmWave systems with large antenna arrays. We exploit the spatial structure of mmWave channels to formulate the precoding combining problem as a sparse reconstruction problem. Using the principle of basis pursuit, we develop algorithms that accurately approximate optimal unconstrained precoders and combiners such that they can be implemented in low-cost RF hardware. We present numerical results on the performance of the proposed algorithms and show that they allow mmWave systems to approach their unconstrained performance limits, even when transceiver hardware constraints are considered." ] }
1601.07340
2259391824
Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input–multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights.
On the other hand, much less attention has been paid to the partially-connected structure @cite_17 @cite_19 @cite_37 @cite_33 @cite_6 @cite_13 . In @cite_17 @cite_19 , codebook-based designs of hybrid precoders were presented for narrowband and orthogonal frequency division multiplexing (OFDM) systems, respectively. Although codebook-based design enjoys low complexity, it entails a certain performance loss, and it is unclear how much further performance gain is attainable. By utilizing the idea of successive interference cancellation (SIC), an iterative hybrid precoding algorithm for the partially-connected structure was proposed in @cite_37 . That algorithm assumes the digital precoding matrix is diagonal, i.e., the digital precoder only allocates power among the data streams, and the number of RF chains must equal the number of data streams. However, using only analog precoders to provide beamforming gains is clearly suboptimal @cite_37 @cite_33 , and it deviates from the motivation of hybrid precoding. So far, no study has directly optimized the hybrid precoders in the partially-connected structure without extra constraints; this is what we pursue in this paper.
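The structural constraints of the SIC-style scheme described above can be sketched in a few lines: the analog precoder is block diagonal (each RF chain drives one disjoint subarray through phase shifters only), and the digital precoder is a diagonal power allocation. The steering target and dimensions below are illustrative assumptions.

```python
import numpy as np

def partially_connected_precoder(v, n_rf, p=None):
    """Sketch of a partially-connected precoder: Nt antennas are split into
    n_rf disjoint subarrays, so F_RF is block diagonal with unit-modulus
    entries; F_BB is diagonal and only allocates power, as in the SIC-based
    scheme discussed above. `v` is a target steering vector (an assumption
    for illustration)."""
    nt = len(v)
    m = nt // n_rf                      # antennas per subarray
    F_rf = np.zeros((nt, n_rf), dtype=complex)
    for k in range(n_rf):
        blk = slice(k * m, (k + 1) * m)
        # phase-only matching of the target direction on this subarray
        F_rf[blk, k] = np.exp(1j * np.angle(v[blk])) / np.sqrt(m)
    p = np.ones(n_rf) / n_rf if p is None else np.asarray(p, float)
    F_bb = np.diag(np.sqrt(p))          # diagonal power allocation
    return F_rf, F_bb
```

Because each antenna connects to a single RF chain, the columns of the analog precoder have disjoint supports; this is the extra constraint, relative to the fully-connected structure, that the text above refers to.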
{ "cite_N": [ "@cite_37", "@cite_33", "@cite_6", "@cite_19", "@cite_13", "@cite_17" ], "mid": [ "1606470095", "2004294022", "2014075533", "2037682381", "2166774451", "2116193517" ], "abstract": [ "Millimeter wave (mmWave) massive MIMO can achieve orders of magnitude increase in spectral and energy efficiency, and it usually exploits the hybrid analog and digital precoding to overcome the serious signal attenuation induced by mmWave frequencies. However, most of hybrid precoding schemes focus on the full-array structure, which involves a high complexity. In this paper, we propose a near-optimal iterative hybrid precoding scheme based on the more realistic subarray structure with low complexity. We first decompose the complicated capacity optimization problem into a series of ones easier to be handled by considering each antenna array one by one. Then we optimize the achievable capacity of each antenna array from the first one to the last one by utilizing the idea of successive interference cancelation (SIC), which is realized in an iterative procedure that is easy to be parallelized. It is shown that the proposed hybrid precoding scheme can achieve better performance than other recently proposed hybrid precoding schemes, while it also enjoys an acceptable computational complexity.", "With the severe spectrum shortage in conventional cellular bands, large-scale antenna systems in the mmWave bands can potentially help to meet the anticipated demands of mobile traffic in the 5G era. There are many challenging issues, however, regarding the implementation of digital beamforming in large-scale antenna systems: complexity, energy consumption, and cost. In a practical large-scale antenna deployment, hybrid analog and digital beamforming structures can be important alternative choices. 
In this article, optimal designs of hybrid beamforming structures are investigated, with the focus on an N (the number of transceivers) by M (the number of active antennas per transceiver) hybrid beamforming structure. Optimal analog and digital beamforming designs in a multi-user beamforming scenario are discussed. Also, the energy efficiency and spectrum efficiency of the N × M beamforming structure are analyzed, including their relationship at the green point (i.e., the point with the highest energy efficiency) on the energy efficiency-spectrum efficiency curve, the impact of N on the energy efficiency performance at a given spectrum efficiency value, and the impact of N on the green point energy efficiency. These results can be conveniently utilized to guide practical LSAS design for optimal energy spectrum efficiency trade-off. Finally, a reference signal design for the hybrid beamform structure is presented, which achieves better channel estimation performance than the method solely based on analog beamforming. It is expected that large-scale antenna systems with hybrid beamforming structures in the mmWave band can play an important role in 5G.", "A massive hybrid array consists of multiple analog subarrays, with each subarray having its digital processing chain. It offers the potential advantage of balancing cost and performance for massive arrays and therefore serves as an attractive solution for future millimeter-wave (mm- Wave) cellular communications. On one hand, using beamforming analog subarrays such as phased arrays, the hybrid configuration can effectively collect or distribute signal energy in sparse mm-Wave channels. On the other hand, multiple digital chains in the configuration provide multiplexing capability and more beamforming flexibility to the system. 
In this article, we discuss several important issues and the state-of-the-art development for mm-Wave hybrid arrays, such as channel modeling, capacity characterization, applications of various smart antenna techniques for single-user and multiuser communications, and practical hardware design. We investigate how the hybrid array architecture and special mm-Wave channel property can be exploited to design suboptimal but practical massive antenna array schemes. We also compare two main types of hybrid arrays, interleaved and localized arrays, and recommend that the localized array is a better option in terms of overall performance and hardware feasibility.", "The ever-increasing traffic crunch for the wireless communication has drawn attention to the large spectrum available in the millimeter-wave bands as a potential means to achieve several fold mobile data traffic increase. While the channel characteristics at the millimeter-wave bands are known to be unfavorable for the mobile wireless communication purpose, the high gain available from massive array antenna facilitated by the short wavelength makes it possible to overcome the large path-loss. In this paper, we propose a hybrid beamforming architecture that combines an analog beamforming with array antennas and a digital precoding with multiple RF chains. Furthermore, we propose a multi-beam transmission diversity scheme for single stream transmission for single user MIMO operation. It is shown through various simulation results that the proposed hybrid beamforming scheme leads to considerable performance improvements even with limited feedback.", "Design and fabrication aspects of an affordable planar beam steerable antenna array with a simple architecture are considered in this paper. Grouping the elements of a phased array into a number of partially overlapped subarrays and using a single phase shifter for each subarray, generally results in a considerable reduction in array size and manufacturing costs. 
However, overlapped subarrays require complicated corporate feed networks and array architectures that cannot be easily implemented using planar technologies. In this paper a novel feed network and array architecture for implementing a planar phased array of microstrip antennas is presented that enables the fabrication of low-sidelobe, compact, beam-steerable millimeter-wave arrays and facilitates integration of the RF front-end electronics with the antenna structure. This design uses a combination of series and parallel feeding schemes to achieve the desired array coefficients. The proposed approach is used to design a three-state switched-beam phased array with a scanning width of spl plusmn 10 spl deg . This phased array which is composed of 80 microstrip elements, achieves a gain of >20 dB, a sidelobe level of 6.3 for all states of the beam. The antenna efficiency is measured at 33-36 in X band. It is shown that the proposed feeding scheme is insensitive to the mutual coupling among the elements.", "The use of the millimeter (mm) wave spectrum for next generation (5G) mobile communication has gained significant attention recently. The small carrier wavelengths at mmwave frequencies enable synthesis of compact antenna arrays, providing beamforming gains that compensate the increased propagation losses. In this work, we investigate the feasibility of employing multiple antenna arrays (at the transmitter and or receiver) to obtain diversity multiplexing gains in mmwave systems, where each of the arrays is capable of beamforming independently. Considering a codebook-based beamforming system (the set of possible beamforming directions is fixed a priori, e.g., to facilitate limited feedback), we observe that the complexity of jointly optimizing the beamforming directions across the multiple arrays is highly prohibitive, even for very reasonable system parameters. 
To overcome this bottleneck, we develop reduced complexity algorithms for optimizing the choice of beamforming directions, premised on the sparse multipath structure of the mmwave channel. Specifically, we reduce the cardinality of the joint beamforming search space, by restricting attention to a small set of dominant candidate directions. To obtain the set of dominant directions, we develop two complementary approaches: 1) based on computation of a novel spatial power metric; a detailed analysis of this metric shows that, in the limit of large antenna arrays, the selected candidate directions approach the channel's dominant angles of arrival and departure, and 2) precise estimation of the channel's (long-term) dominant angles of arrival, exploiting the correlations of the signals received across the different receiver subarrays. Our methods enable a drastic reduction of the optimization search space (a factor of 100 reduction), while delivering close to optimal performance, thereby indicating the potential feasibility of achieving diversity and multiplexing gains in mmwave systems." ] }
1601.06834
2264968112
Graphlet analysis is an approach to network analysis that is particularly popular in bioinformatics. We show how to set up a system of linear equations that relate the orbit counts and can be used in an algorithm that is significantly faster than the existing approaches based on direct enumeration of graphlets. The algorithm requires existence of a vertex with certain properties; we show that such vertex exists for graphlets of arbitrary size, except for complete graphs and @math , which are treated separately. Empirical analysis of running time agrees with the theoretical results.
Counting all non-induced subgraphs is as hard as counting all induced subgraphs, since the two sets of counts are connected through a system of linear equations. Despite this, it is sometimes beneficial to compute induced counts from non-induced ones. The Rapid Graphlet Enumerator (RAGE) @cite_7 takes this approach for counting four-node graphlets: instead of counting induced subgraphs directly, it reconstructs them from counts of non-induced subgraphs, which it obtains using methods crafted specifically for each of the six possible subgraphs ( @math , claw, @math , paw, diamond and @math ). The time complexity of counting non-induced cycles and complete graphs is @math , while counting the other subgraphs runs in @math . In practice, however, the run-time of counting cycles and cliques in real-world networks is usually much lower.
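The idea of recovering induced counts from non-induced ones can be illustrated at size three, where it fits in a few lines (RAGE itself works with four-node graphlets): non-induced paths have a closed-form count from the degree sequence, and each triangle contains exactly three non-induced paths, giving a linear relation between the counts.

```python
from itertools import combinations

def three_node_counts(edges):
    """Induced path count on 3 nodes from non-induced counts.
    `edges` is a set of frozenset pairs over vertices 0..n-1."""
    n = max(max(e) for e in edges) + 1
    deg = [0] * n
    for e in edges:
        for v in e:
            deg[v] += 1
    # non-induced paths on 3 nodes: choose 2 neighbours of a centre vertex
    paths = sum(d * (d - 1) // 2 for d in deg)
    triangles = sum(
        1 for a, b, c in combinations(range(n), 3)
        if {frozenset((a, b)), frozenset((a, c)), frozenset((b, c))} <= edges
    )
    # linear relation: every triangle contributes 3 non-induced paths
    induced_paths = paths - 3 * triangles
    return triangles, paths, induced_paths
```

On the complete graph on four nodes this correctly yields zero induced paths, even though twelve non-induced paths are present.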
{ "cite_N": [ "@cite_7" ], "mid": [ "2122909744" ], "abstract": [ "Counting network graphlets (and motifs) was shown to have an important role in studying a wide range of complex networks. However, when the network size is large, as in the case of the Internet topology and WWW graphs, counting the number of graphlets becomes prohibitive for graphlets of size 4 and above. Devising efficient graphlet counting algorithms thus becomes an important goal. In this paper, we present efficient counting algorithms for 4-node graphlets. We show how to efficiently count the total number of each type of graphlet, and the number of graphlets adjacent to a node. We further present a new algorithm for node position-aware graphlet counting, namely partitioning the graphlet count by the node position in the graphlet. Since our algorithms are based on non-induced graphlet count, we also show how to calculate the count of induced graphlets given the non-induced count. We implemented our algorithms on a set of both synthetic and real-world graphs. Our evaluation shows that the algorithms are scalable and perform up to 30 times faster than the state-of-the-art. We then apply the algorithms on the Internet Autonomous Systems (AS) graph, and show how fast graphlet counting can be leveraged for efficient and scalable classification of the ASes that comprise the Internet. Finally, we present RAGE, a tool for rapid graphlet enumeration available online." ] }
Some approaches exploit the relations between the numbers of occurrences of induced subgraphs in a graph. Kloks et al. @cite_2 showed how to construct a system of equations from which the numbers of occurrences of all six possible induced four-node subgraphs can be computed once the count of any one of them is known. Setting up the system takes the same time as multiplying two square matrices of size @math . Kowaluk et al. @cite_10 generalized Kloks et al.'s result to counting subgraph patterns of arbitrary size. Their solution depends on the size of an independent set in the pattern graph and relies on fast matrix multiplication techniques. They also analyzed their approach on sparse graphs, where they avoid matrix multiplications and derive the time bounds in terms of the number of edges in the graph.
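The flavour of such constructions can be shown at size three, where the quantities entering the system of equations are assembled from a single matrix product, mirroring why setting up the size-four system costs one (fast) matrix multiplication; the full four-node construction of Kloks et al. is more involved.

```python
import numpy as np

def three_node_counts_matmul(A):
    """Three-node counts assembled from matrix products.
    A is a 0/1 symmetric adjacency matrix with zero diagonal."""
    A2 = A @ A
    # trace(A^3) counts each triangle 6 times
    triangles = int(np.trace(A2 @ A)) // 6
    # off-diagonal entries of A^2 count length-2 walks between distinct
    # vertices, i.e. non-induced 3-node paths (each counted twice)
    noninduced_paths = int(A2.sum() - np.trace(A2)) // 2
    # linear relation between induced and non-induced counts
    induced_paths = noninduced_paths - 3 * triangles
    return triangles, noninduced_paths, induced_paths
```

The bottleneck is the matrix product, so fast matrix multiplication directly improves the running time, which is the mechanism the results above rely on.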
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2070188445", "2150875823" ], "abstract": [ "We present a general technique for detecting and counting small subgraphs. It consists of forming special linear combinations of the numbers of occurrences of different induced subgraphs of fixed size in a graph. These combinations can be efficiently computed by rectangular matrix multiplication. Our two main results utilizing the technique are as follows. Let @math be a fixed graph with @math vertices and an independent set of size @math 1. Detecting if an @math -vertex graph contains a (not necessarily induced) subgraph isomorphic to @math can be done in time @math , where @math is the exponent of fast arithmetic matrix multiplication of an @math matrix by an @math matrix. 2. When @math counting the number of (not necessarily induced) subgraphs isomorphic to @math can be done in the same time, i.e., in time @math It follows in particular that we can count the nu...", "We give two algorithms for listing all simplicial vertices of a graph. The first of these algorithms takes O(nα) time, where n is the number of vertices in the graph and O(nα) is the time needed to perform a fast matrix multiplication. The second algorithm can be implemented to run in (O(e^ 2 + 1 ) = O(e^ 1.41 ) ), where e is the number of edges in the graph." ] }
1601.06834
2264968112
Graphlet analysis is an approach to network analysis that is particularly popular in bioinformatics. We show how to set up a system of linear equations that relate the orbit counts and can be used in an algorithm that is significantly faster than the existing approaches based on direct enumeration of graphlets. The algorithm requires existence of a vertex with certain properties; we show that such vertex exists for graphlets of arbitrary size, except for complete graphs and @math , which are treated separately. Empirical analysis of running time agrees with the theoretical results.
Floderus et al. @cite_12 investigated whether some induced subgraphs are easier to count than others, as is the case for non-induced subgraphs: for example, non-induced stars on @math nodes, @math , can be counted in linear time. They conjectured that all induced subgraphs are equally hard to count, and showed that, in terms of the size of the host graph @math , counting any pattern graph @math on @math nodes is at least as hard as counting independent sets on @math nodes.
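The linear-time star count mentioned above can be sketched directly: a non-induced star on k nodes centred at v is any choice of k-1 leaves among v's neighbours. A minimal illustration, assuming a hypothetical adjacency-set input format:

```python
from math import comb

def count_stars(adj, k):
    """Count non-induced k-node stars in time linear in the graph size:
    the total is sum over vertices v of C(deg(v), k-1).
    `adj` maps each vertex to the set of its neighbours."""
    return sum(comb(len(nbrs), k - 1) for nbrs in adj.values())
```

For the star K_{1,3} with k = 4 this gives exactly 1 (only the centre has three neighbours); for the triangle with k = 3 it gives 3, one path through each vertex.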
{ "cite_N": [ "@cite_12" ], "mid": [ "1960457407" ], "abstract": [ "The complexity of the subgraph isomorphism problem where the pattern graph is of fixed size is well known to depend on the topology of the pattern graph. For instance, the larger the maximum independent set of the pattern graph is the more efficient algorithms are known. The situation seems to be substantially different in the case of induced subgraph isomorphism for pattern graphs of fixed size. We present two results which provide evidence that no topology of an induced subgraph of fixed size can be easier to detect or count than an independent set of related size. We show that: Any fixed pattern graph that has a maximum independent set of size k that is disjoint from other maximum independent sets is not easier to detect as an induced subgraph than an independent set of size k. It follows in particular that an induced path on k vertices is not easier to detect than an independent set on ⌈k 2 ⌉ vertices, and that an induced even cycle on k vertices is not easier to detect than an independent set on k 2 vertices. In view of linear time upper bounds on induced paths of length three and four, our lower bound is tight. Similar corollaries hold for the detection of induced complete bipartite graphs and induced complete split graphs. For an arbitrary pattern graph H on k vertices with no isolated vertices, there is a simple subdivision of H, resulting from splitting each edge into a path of length four and attaching a distinct path of length three at each vertex of degree one, that is not easier to detect or count than an independent set on k vertices, respectively. Finally, we show that the so called diamond, paw and C 4 are not easier to detect as induced subgraphs than an independent set on three vertices." ] }
1601.06834
2264968112
Graphlet analysis is an approach to network analysis that is particularly popular in bioinformatics. We show how to set up a system of linear equations that relate the orbit counts and can be used in an algorithm that is significantly faster than the existing approaches based on direct enumeration of graphlets. The algorithm requires existence of a vertex with certain properties; we show that such vertex exists for graphlets of arbitrary size, except for complete graphs and @math , which are treated separately. Empirical analysis of running time agrees with the theoretical results.
Vassilevska and Williams @cite_4 studied the problem of finding and counting individual non-induced subgraphs. Their results depend on the size @math of the independent set in the pattern graph and, unlike some other approaches, rely on efficient computation of matrix permanents rather than on fast matrix multiplication. If we restrict the problem to counting small patterns, treating @math and @math as small constants, their approach counts a non-induced pattern in @math time, an improvement over simple enumeration when @math . Kowaluk et al. @cite_10 further improved on the result of Vassilevska and Williams when @math . @cite_3 developed algorithms for counting non-induced cycles with 3 to 7 nodes in @math time, where @math is the exponent of matrix multiplication.
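As a small concrete instance of trace-based cycle counting (a simplified sketch, not the cited 3-to-7-node algorithms), non-induced 3- and 4-cycles can be read off powers of the adjacency matrix after subtracting the degenerate closed walks:

```python
import numpy as np

def count_c3_c4(A):
    """Count non-induced 3- and 4-cycles from adjacency-matrix powers.

    trace(A^3) = 6 * C3  (each triangle gives 6 closed 3-walks);
    trace(A^4) = 8 * C4 + 2 * sum(deg^2) - 2 * m, where the extra terms
    count closed 4-walks that backtrack over an edge."""
    A = np.asarray(A, dtype=np.int64)
    deg = A.sum(axis=1)
    m = int(deg.sum()) // 2
    c3 = int(np.trace(np.linalg.matrix_power(A, 3))) // 6
    t4 = int(np.trace(np.linalg.matrix_power(A, 4)))
    c4 = (t4 - 2 * int((deg ** 2).sum()) + 2 * m) // 8
    return c3, c4
```

On K4 this returns (4, 3); on a single 4-cycle it returns (0, 1). The cited algorithms extend this idea with more elaborate inclusion-exclusion terms for longer cycles.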
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_3" ], "mid": [ "2070188445", "2364489530", "1967066104" ], "abstract": [ "We present a general technique for detecting and counting small subgraphs. It consists of forming special linear combinations of the numbers of occurrences of different induced subgraphs of fixed size in a graph. These combinations can be efficiently computed by rectangular matrix multiplication. Our two main results utilizing the technique are as follows. Let @math be a fixed graph with @math vertices and an independent set of size @math 1. Detecting if an @math -vertex graph contains a (not necessarily induced) subgraph isomorphic to @math can be done in time @math , where @math is the exponent of fast arithmetic matrix multiplication of an @math matrix by an @math matrix. 2. When @math counting the number of (not necessarily induced) subgraphs isomorphic to @math can be done in the same time, i.e., in time @math It follows in particular that we can count the nu...", "The role of self-relevance has been somewhat neglected in static face processing paradigms but may be important in understanding how emotional faces impact on attention, cognition and affect. The aim of the current study was to investigate the effect of self-relevant primes on processing emotional composite faces. Sentence primes created an expectation of the emotion of the face before sad, happy, neutral or composite face photos were viewed. Eye movements were recorded and subsequent responses measured the cognitive and affective impact of the emotion expressed. Results indicated that primes did not guide attention, but impacted on judgments of valence intensity and self-esteem ratings. Negative self-relevant primes led to the most negative self-esteem ratings, although the effect of the prime was qualified by salient facial features. 
Self-relevant expectations about the emotion of a face and subsequent attention to a face that is congruent with these expectations strengthened the affective impact of viewing the face.", "" ] }
1601.06040
2950623994
In topology recognition, each node of an anonymous network has to deterministically produce an isomorphic copy of the underlying graph, with all ports correctly marked. This task is usually unfeasible without any a priori information. Such information can be provided to nodes as advice. An oracle knowing the network can give a (possibly different) string of bits to each node, and all nodes must reconstruct the network using this advice, after a given number of rounds of communication. During each round each node can exchange arbitrary messages with all its neighbors and perform arbitrary local computations. The time of completing topology recognition is the number of rounds it takes, and the size of advice is the maximum length of a string given to nodes. We investigate tradeoffs between the time in which topology recognition is accomplished and the minimum size of advice that has to be given to nodes. We provide upper and lower bounds on the minimum size of advice that is sufficient to perform topology recognition in a given time, in the class of all graphs of size @math and diameter @math , for any constant @math . In most cases, our bounds are asymptotically tight.
Many papers @cite_29 @cite_26 @cite_28 @cite_1 @cite_15 @cite_2 @cite_0 @cite_13 @cite_14 @cite_20 @cite_17 @cite_27 @cite_9 @cite_24 @cite_4 @cite_6 considered the problem of increasing the efficiency of network tasks by providing nodes with some information of arbitrary kind. This approach was referred to as algorithms using informative labeling schemes, or equivalently, algorithms with advice. Advice is given either to nodes of the network or to mobile agents performing some network task. Several authors studied the minimum size of advice required to solve the respective network problem efficiently. Thus the framework of advice permits quantifying the amount of information needed for an efficient solution of a given network problem, regardless of the type of information that is provided.
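A toy example of an informative labeling scheme of the kind surveyed here (hypothetical code, not from any cited paper): DFS interval labels on a rooted tree let any two nodes decide ancestry from their labels alone, with O(log n)-bit labels.

```python
def interval_labels(tree, root):
    """Label every node of a rooted tree with a DFS interval (start, end)
    so that u is a (weak) ancestor of v iff start(u) <= start(v) < end(u).
    `tree` maps a node to the list of its children."""
    labels, clock = {}, 0

    def dfs(u):
        nonlocal clock
        start = clock
        clock += 1
        for child in tree.get(u, []):
            dfs(child)
        labels[u] = (start, clock)

    dfs(root)
    return labels

def is_ancestor(lab_u, lab_v):
    # the query is answered from the two labels alone, with no access to the tree
    return lab_u[0] <= lab_v[0] < lab_u[1]
```

Note that a node counts as its own ancestor here; a strict variant would additionally require the labels to differ.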
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_4", "@cite_28", "@cite_29", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_24", "@cite_27", "@cite_2", "@cite_15", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "1519256469", "1975595616", "2174013141", "", "2025590344", "", "2046334554", "2045446569", "1971694274", "2056295140", "1983693678", "1975011672", "2109659895", "2038319432", "", "2034501275" ], "abstract": [ "We address the problem of labeling the nodes of a tree such that one can determine the identifier of the least common ancestor of any two nodes by looking only at their labels. This problem has application in routing and in distributed computing in peer-to-peer networks. A labeling scheme using i¾?(log2n)-bit labels has been previously presented by Peleg. By engineering this scheme, we obtain a variety of data structures with the same asymptotic performances. We conduct a thorough experimental evaluation of all these data structures. Our results clearly show which variants achieve the best performances in terms of space usage, construction time, and query time.", "We use the recently introduced advising scheme framework for measuring the difficulty of locally distributively computing a Minimum Spanning Tree (MST). An (m,t)-advising scheme for a distributed problem P is a way, for every possible input I of P, to provide an \"advice\" (i.e., a bit string) about I to each node so that: (1) the maximum size of the advices is at most m bits, and (2) the problem P can be solved distributively in at most t rounds using the advices as inputs. In case of MST, the output returned by each node of a weighted graph G is the edge leading to its parent in some rooted MST T of G. Clearly, there is a trivial (log n,0)-advising scheme for MST (each node is given the local port number of the edge leading to the root of some MST T), and it is known that any (0,t)-advising scheme satisfies t ≥ Ω (√n). 
Our main result is the construction of an (O(1),O(log n))-advising scheme for MST. That is, by only giving a constant number of bits of advice to each node, one can decrease exponentially the distributed computation time of MST in arbitrary graph, compared to algorithms dealing with the problem in absence of any a priori information. We also consider the average size of the advices. On the one hand, we show that any (m,0)-advising scheme for MST gives advices of average size Ω(log n). On the other hand we design an (m,1)-advising scheme for MST with advices of constant average size, that is one round is enough to decrease the average size of the advices from log(n) to constant.", "[L. Blin, P. Fraigniaud, N. Nisse, S. Vial, Distributing chasing of network intruders, in: 13th Colloquium on Structural Information and Communication Complexity, SIROCCO, in: LNCS, vol. 4056, Springer-Verlag, 2006, pp. 70-84] introduced a new measure of difficulty for a distributed task in a network. The smallest number of bits of advice of a distributed problem is the smallest number of bits of information that has to be available to nodes in order to accomplish the task efficiently. Our paper deals with the number of bits of advice required to perform efficiently the graph searching problem in a distributed setting. In this variant of the problem, all searchers are initially placed at a particular node of the network. The aim of the team of searchers is to clear a contaminated graph in a monotone connected way, i.e., the cleared part of the graph is permanently connected, and never decreases while the search strategy is executed. Moreover, the clearing of the graph must be performed using the optimal number of searchers, i.e. the minimum number of searchers sufficient to clear the graph in a monotone connected way in a centralized setting. 
We show that the minimum number of bits of advice permitting the monotone connected and optimal clearing of a network in a distributed setting is @Q(nlogn), where n is the number of nodes of the network. More precisely, we first provide a labelling of the vertices of any graph G, using a total of O(nlogn) bits, and a protocol using this labelling that enables the optimal number of searchers to clear G in a monotone connected distributed way. Then, we show that this number of bits of advice is optimal: any distributed protocol requires @W(nlogn) bits of advice to clear a network in a monotone connected way, using an optimal number of searchers.", "", "We consider the following problem. Given a rooted tree @math , label the nodes of @math in the most compact way such that, given the labels of two nodes @math and @math , one can determine in constant time, by looking only at the labels, whether @math is ancestor of @math . The best known labeling scheme is rather straightforward and uses labels of length at most @math bits each, where @math is the number of nodes in the tree. Our main result in this paper is a labeling scheme with maximum label length @math . Our motivation for studying this problem is enhancing the performance of web search engines. In the context of this application each indexed document is a tree, and the labels of all trees are maintained in main memory. Therefore even small improvements in the maximum label length are important.", "", "We study the problem of the amount of information required to draw a complete or a partial map of a graph with unlabeled nodes and arbitrarily labeled ports. A mobile agent, starting at any node of an unknown connected graph and walking in it, has to accomplish one of the following tasks: draw a complete map of the graph, i.e., find an isomorphic copy of it including port numbering, or draw a partial map, i.e., a spanning tree, again with port numbering. 
The agent executes a deterministic algorithm and cannot mark visited nodes in any way. None of these map drawing tasks is feasible without any additional information, unless the graph is a tree. Hence we investigate the minimum number of bits of information (minimum size of advice) that has to be given to the agent to complete these tasks. It turns out that this minimum size of advice depends on the number n of nodes or the number m of edges of the graph, and on a crucial parameter @m, called the multiplicity of the graph, which measures the number of nodes that have an identical view of the graph. We give bounds on the minimum size of advice for both above tasks. For @m=1 our bounds are asymptotically tight for both tasks and show that the minimum size of advice is very small. For @m>1 the minimum size of advice increases abruptly. In this case our bounds are asymptotically tight for topology recognition and asymptotically almost tight for spanning tree construction.", "Let G = (V,E) be an undirected weighted graph with vVv = n and vEv = m. Let k ≥ 1 be an integer. We show that G = (V,E) can be preprocessed in O(kmn1 k) expected time, constructing a data structure of size O(kn1p1 k), such that any subsequent distance query can be answered, approximately, in O(k) time. The approximate distance returned is of stretch at most 2k−1, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and 2k−1. A 1963 girth conjecture of Erdos, implies that Ω(n1p1 k) space is needed in the worst case for any real stretch strictly smaller than 2kp1. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name \"oracle\". Previously, data structures that used only O(n1p1 k) space had a query time of Ω(n1 k).Our algorithms are extremely simple and easy to implement efficiently. 
They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.", "We study the amount of knowledge about a communication network that must be given to its nodes in order to efficiently disseminate information. Our approach is quantitative: we investigate the minimum total number of bits of information (minimum size of advice) that has to be available to nodes, regardless of the type of information provided. We compare the size of advice needed to perform broadcast and wakeup (the latter is a broadcast in which nodes can transmit only after getting the source information), both using a linear number of messages (which is optimal). We show that the minimum size of advice permitting the wakeup with a linear number of messages in an n-node network, is @Q(nlogn), while the broadcast with a linear number of messages can be achieved with advice of size O(n). We also show that the latter size of advice is almost optimal: no advice of size o(n) can permit to broadcast with a linear number of messages. Thus an efficient wakeup requires strictly more information about the network than an efficient broadcast.", "This paper addresses the problem of locally verifying global properties. Several natural questions are studied, such as “how expensive is local verification?” and more specifically, “how expensive is local verification compared to computation?” A suitable model is introduced in which these questions are studied in terms of the number of bits a vertex needs to communicate. The model includes the definition of a proof labeling scheme (a pair of algorithms- one to assign the labels, and one to use them to verify that the global property holds). In addition, approaches are presented for the efficient construction of schemes, and upper and lower bounds are established on the bit complexity of schemes for multiple basic problems. 
The paper also studies the role and cost of unique identities in terms of impossibility and complexity, in the context of proof labeling schemes. Previous studies on related questions deal with distributed algorithms that simultaneously compute a configuration and verify that this configuration has a certain desired property. It turns out that this combined approach enables the verification to be less costly sometimes, since the configuration is typically generated so as to be easily verifiable. In contrast, our approach separates the configuration design from the verification. That is, it first generates the desired configuration without bothering with the need to verify it, and then handles the task of constructing a suitable verification scheme. Our approach thus allows for a more modular design of algorithms, and has the potential to aid in verifying properties even when the original design of the structures for maintaining them was done without verification in mind.", "We study deterministic broadcasting in radio networks in the recently introduced framework of network algorithms with advice. We concentrate on the problem of trade-offs between the number of bits of information (size of advice) available to nodes and the time in which broadcasting can be accomplished. In particular, we ask what is the minimum number of bits of information that must be available to nodes of the network, in order to broadcast very fast. For networks in which constant time broadcast is possible under a complete knowledge of the network we give a tight answer to the above question: O(n) bits of advice are sufficient but o(n) bits are not, in order to achieve constant broadcasting time in all these networks. This is in sharp contrast with geometric radio networks of constant broadcasting time: we show that in these networks a constant number of bits suffices to broadcast in constant time. 
For arbitrary radio networks we present a broadcasting algorithm whose time is inverse-proportional to the size of the advice.", "We study the problem of the amount of information (advice) about a graph that must be given to its nodes in order to achieve fast distributed computations. The required size of the advice enables to measure the information sensitivity of a network problem. A problem is information sensitive if little advice is enough to solve the problem rapidly (i.e., much faster than in the absence of any advice), whereas it is information insensitive if it requires giving a lot of information to the nodes in order to ensure fast computation of the solution. In this paper, we study the information sensitivity of distributed graph coloring.", "We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b=0 corresponds to the classical online model, and b=@?log|A|@?, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. 
For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1@?b@?@Q(logn), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio @W(log(n) b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n) b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^O^(^1^ ^b^) for any choice of @Q(1)@?b@?logk.", "We study the amount of knowledge about the network that is required in order to efficiently solve a task concerning this network. The impact of available information on the efficiency of solving network problems, such as communication or exploration, has been investigated before but assumptions concerned availability of particular items of information about the network, such as the size, the diameter, or a map of the network. In contrast, our approach is quantitative: we investigate the minimum number of bits of information (bits of advice) that has to be given to an algorithm in order to perform a task with given efficiency. We illustrate this quantitative approach to available knowledge by the task of tree exploration. A mobile entity (robot) has to traverse all edges of an unknown tree, using as few edge traversals as possible. The quality of an exploration algorithm A is measured by its competitive ratio, i.e., by comparing its cost (number of edge traversals) to the length of the shortest path containing all edges of the tree. Depth-First-Search has competitive ratio 2 and, in the absence of any information about the tree, no algorithm can beat this value. We determine the minimum number of bits of advice that has to be given to an exploration algorithm in order to achieve competitive ratio strictly smaller than 2. 
Our main result establishes an exact threshold number of bits of advice that turns out to be roughly loglogD, where D is the diameter of the tree. More precisely, for any constant c, we construct an exploration algorithm with competitive ratio smaller than 2, using at most loglogD-c bits of advice, and we show that every algorithm using loglogD-g(D) bits of advice, for any function g unbounded from above, has competitive ratio at least 2.", "", "We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log2 n). For planar graphs, we show an upper bound of O(√nlogn) and a lower bound of Ω(n1 3). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with a r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs." ] }
1601.05961
2252834271
Power consumption is a major obstacle for High Performance Computing (HPC) systems in their quest towards the holy grail of ExaFLOP performance. Significant advances in power efficiency have to be made before this goal can be attained and accurate modeling is an essential step towards power efficiency by optimizing system operating parameters to match dynamic energy needs. In this paper we present a study of power consumption by jobs in Eurora, a hybrid CPU-GPU-MIC system installed at the largest Italian data center. Using data from a dedicated monitoring framework, we build a data-driven model of power consumption for each user in the system and use it to predict the power requirements of future jobs. We are able to achieve good prediction results for over 80% of the users in the system. For the remaining users, we identify possible reasons why prediction performance is not as good. Possible applications for our predictive modeling results include scheduling optimization, power-aware billing and system-scale power modeling. All the scripts used for the study have been made available on GitHub.
On the road towards ExaFLOP performance, special attention has been given to system-level power consumption by clusters. Recent work at Google @cite_8 describes the use of Artificial Neural Networks to model Power Usage Effectiveness using a mixture of workload and cooling features. System-level prediction of power consumption is also one application of our predictive model. In terms of power-aware scheduling, another possible application of our models, the authors of @cite_1 @cite_9 introduce a method based on Constraint Programming to achieve power capping on Eurora, the same HPC system analyzed here. This approach could benefit greatly from the power prediction offered by our framework.
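A minimal sketch of the kind of per-user data-driven power model discussed above, fitted with ordinary least squares; the feature choices (e.g. cores, requested walltime) are hypothetical and the paper's actual model may differ:

```python
import numpy as np

def fit_user_power_model(X, y):
    """Fit a least-squares power model from one user's past jobs.

    X: per-job feature rows (hypothetical features such as cores,
       GPUs, requested walltime); y: measured average power per job.
    Returns a predictor callable for the features of a future job."""
    X = np.asarray(X, dtype=float)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])      # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda x: float(np.dot(np.append(np.asarray(x, dtype=float), 1.0), w))
```

Such a predictor could feed a power-capping scheduler: before dispatching a job, the scheduler queries the user's model and checks the sum of predicted powers against the cap.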
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_8" ], "mid": [ "1548185962", "2402437550", "" ], "abstract": [ "Supercomputers machines, HPC systems in general, embed sophisticated and advanced cooling circuits to remove heat and ensuring the required peak performance. Unfortunately removing heat, by means of cold water or air, costs additional power which decreases the overall supercomputer energy efficiency. Free-cooling uses ambient air instead than chiller to cool down warm air or liquid temperature. The amount of heat which can be removed for-free depends on ambient conditions such as temperature and humidity. Power capping can be used to reduce the supercomputer power dissipation to maximize the cooling efficiency. In this paper we present a power capping approach based on Constraint Programming which enables to estimate at every scheduling interval the power consumption of a given job schedule and to select among all possible job schedules the one which maximizes the supercomputer efficiency.", "Power consumption is a key factor in modern ICT infrastructure, especially in the expanding world of High Performance Computing, Cloud Computing and Big Data. Such consumption is bound to become an even greater issue as supercomputers are envisioned to enter the Exascale by 2020, granted that they obtain an order of magnitude energy efficiency gain. An important component in many strategies devised to decrease energy usage is \"power capping\", i.e., the possibility to constrain the system power consumption within certain power budget. In this paper we propose two novel approaches for power capped workload dispatching and we demonstrate them on a real-life high-performance machine: the Eurora supercomputer hosted at CINECA computing center in Bologna. Power capping is a feature not included in the commercial Portable Batch System PBS dispatcher currently in use on Eurora. 
The first method is based on a heuristic technique while the second one relies on a hybrid strategy which combines a CP and a heuristic approach. Both systems are evaluated and compared on simulated job traces.", "" ] }
1601.06081
613888575
Urban legends are viral deceptive texts, in between credible and incredible. To be credible they mimic news articles while being incredible like a fairy tale. High level features: "who where when" of news, "emotional readable" of fairy tales. Quantitative analysis and machine learning experiments for recognizing urban legends. Urban legends are a genre of modern folklore, consisting of stories about rare and exceptional events, just plausible enough to be believed, which tend to propagate inexorably across communities. In our view, while urban legends represent a form of "sticky" deceptive text, they are marked by a tension between the credible and incredible. They should be credible like a news article and incredible like a fairy tale to go viral. In particular we will focus on the idea that urban legends should mimic the details of news (who, where, when) to be credible, while they should be emotional and readable like a fairy tale to be catchy and memorable. Using NLP tools we will provide a quantitative analysis of these prototypical characteristics. We also lay out some machine learning experiments showing that it is possible to recognize an urban legend using just these simple features.
1) Recognizing the linguistic characteristics of deceptive content in the social web: for example, preventing deceptive consumer reviews on sites like TripAdvisor is fundamental both for consumers seeking genuine reviews and for the reputation of the site itself. Deceptive consumer reviews are fictitious opinions that have been deliberately written to sound authentic. Another example concerns online advertising: detecting fraudulent ads is in the interest of users, of service providers (e.g. the Google AdWords system), and of other advertisers. An interesting phenomenon at the crossroads of viral phenomena and deceptive customer reviews, where ironic reviews (such as the case of the Mountain Three Wolf Moon T-shirt) create phenomena of social contagion, is discussed in @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "2009578396" ], "abstract": [ "The research described in this work focuses on identifying key components for the task of irony detection. By means of analyzing a set of customer reviews, which are considered ironic both in social and mass media, we try to find hints about how to deal with this task from a computational point of view. Our objective is to gather a set of discriminating elements to represent irony, in particular, the kind of irony expressed in such reviews. To this end, we built a freely available data set with ironic reviews collected from Amazon. Such reviews were posted on the basis of an online viral effect; i.e. contents that trigger a chain reaction in people. The findings were assessed employing three classifiers. Initial results are largely positive, and provide valuable insights into the subjective issues of language facing tasks such as sentiment analysis, opinion mining and decision making." ] }
1601.06260
2949127674
Current person re-identification (ReID) methods typically rely on single-frame imagery features, whilst ignoring space-time information from image sequences often available in practical surveillance scenarios. Single-frame (single-shot) based visual appearance matching is inherently limited for person ReID in public spaces due to the challenging visual ambiguity and uncertainty arising from non-overlapping camera views where viewing condition changes can cause significant people appearance variations. In this work, we present a novel model to automatically select the most discriminative video fragments from noisy incomplete image sequences of people from which reliable space-time and appearance features can be computed, whilst simultaneously learning a video ranking function for person ReID. Using the PRID @math , iLIDS-VID, and HDA+ image sequence datasets, we extensively conducted comparative evaluations to demonstrate the advantages of the proposed model over contemporary gait recognition, holistic image sequence matching and state-of-the-art single-/multi-shot ReID methods.
Space-time feature representations have been extensively explored in action/activity recognition @cite_26 @cite_14 @cite_54 . One common representation is constructed from space-time interest points @cite_40 @cite_9 @cite_37 @cite_11 . These facilitate a compact description of image sequences based on sparse interest points, but are somewhat sensitive to shadows and highlights in appearance @cite_3 and may lose discriminative information @cite_12 . Therefore, they may not be suitable for person ReID scenarios where lighting variations and viewpoints are unknown and uncontrolled. In comparison, space-time volume patch based representations @cite_26 can be richer and more robust. Most of these representations are spatio-temporal extensions of corresponding image descriptors, e.g. HOG/HOF @cite_34 , 3D-SIFT @cite_2 and HOG3D @cite_35 . In this study, we adopt HOG3D @cite_35 as the space-time feature of video fragments because: (1) it can be computed efficiently; (2) it contains both spatial gradient and temporal dynamic information, and is therefore potentially more expressive @cite_14 @cite_35 ; (3) it is more robust against cluttered backgrounds and occlusions @cite_35 . The choice of space-time feature is independent of our model.
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_26", "@cite_54", "@cite_9", "@cite_3", "@cite_40", "@cite_2", "@cite_34", "@cite_12", "@cite_11" ], "mid": [ "2024868105", "1534763723", "1993229407", "2106996050", "607820920", "2533739470", "2029477555", "2020163092", "2108333036", "2142194269", "2537988662", "2136917337" ], "abstract": [ "In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.", "Over the years, several spatio-temporal interest point detectors have been proposed. While some detectors can only extract a sparse set of scale-invariant features, others allow for the detection of a larger amount of features at user-defined scales. This paper presents for the first time spatio-temporal interest points that are at the same time scale-invariant (both spatially and temporally) and densely cover the video content. Moreover, as opposed to earlier work, the features can be computed efficiently. Applying scale-space theory, we show that this can be achieved by using the determinant of the Hessian as the saliency measure. Computations are speeded-up further through the use of approximative box-filter operations on an integral video structure. 
A quantitative evaluation and experimental results on action recognition show the strengths of the proposed detector in terms of repeatability, accuracy and speed, in comparison with previously proposed detectors.", "Local space-time features have recently become a popular video representation for action recognition. Several methods for feature localization and description have been proposed in the literature and promising recognition results were demonstrated for a number of action classes. The comparison of existing methods, however, is often limited given the different experimental settings used. The purpose of this paper is to evaluate and compare previously proposed space-time features in a common experimental setup. In particular, we consider four different feature detectors and six local feature descriptors and use a standard bag-of-features SVM approach for action recognition. We investigate the performance of these methods on a total of 25 action classes distributed over three datasets with varying difficulty. Among interesting conclusions, we demonstrate that regular sampling of space-time features consistently outperforms all tested space-time interest point detectors for human actions in realistic settings. We also demonstrate a consistent ranking for the majority of methods over different datasets and discuss their advantages and limitations.", "Vision-based human action recognition is the process of labeling image sequences with action labels. Robust solutions to this problem have applications in domains such as visual surveillance, video retrieval and human-computer interaction. The task is challenging due to variations in motion performance, recording settings and inter-personal differences. In this survey, we explicitly address these challenges. We provide a detailed overview of current advances in the field. Image representations and the subsequent classification process are discussed separately to focus on the novelties of recent research. 
Moreover, we discuss limitations of the state of the art and outline promising directions of research.", "This book presents a comprehensive treatment of visual analysis of behaviour from computational-modelling and algorithm-design perspectives. Topics: covers learning-group activity models, unsupervised behaviour profiling, hierarchical behaviour discovery, learning behavioural context, modelling rare behaviours, and man-in-the-loop active learning; examines multi-camera behaviour correlation, person re-identification, and connecting-the-dots for abnormal behaviour detection; discusses Bayesian information criterion, Bayesian networks, bag-of-words representation, canonical correlation analysis, dynamic Bayesian networks, Gaussian mixtures, and Gibbs sampling; investigates hidden conditional random fields, hidden Markov models, human silhouette shapes, latent Dirichlet allocation, local binary patterns, locality preserving projection, and Markov processes; explores probabilistic graphical models, probabilistic topic models, space-time interest points, spectral clustering, and support vector machines.", "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.", "Real-world actions occur often in crowded, dynamic environments. 
This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. 
For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. 
We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "Within the field of action recognition, features and descriptors are often engineered to be sparse and invariant to transformation. While sparsity makes the problem tractable, it is not necessarily optimal in terms of class separability and classification. This paper proposes a novel approach that uses very dense corner features that are spatially and temporally grouped in a hierarchical process to produce an overcomplete compound feature set. Frequently reoccurring patterns of features are then found through data mining, designed for use with large data sets. The novel use of the hierarchical classifier allows real time operation while the approach is demonstrated to handle camera motion, scale, human appearance variations, occlusions and background clutter. The performance of classification, outperforms other state-of-the-art action recognition algorithms on the three datasets; KTH, multi-KTH, and Hollywood. Multiple action localisation is performed, though no groundtruth localisation data is required, using only weak supervision of class labels for each training sequence. The Hollywood dataset contain complex realistic actions from movies, the approach outperforms the published accuracy on this dataset and also achieves real time performance.", "Much of recent action recognition research is based on space-time interest points extracted from video using a Bag of Words (BOW) representation. It mainly relies on the discriminative power of individual local space-time descriptors, whilst ignoring potentially valuable information about the global spatio-temporal distribution of interest points. In this paper, we propose a novel action recognition approach which differs significantly from previous interest points based approaches in that only the global spatiotemporal distribution of the interest points are exploited. 
This is achieved through extracting holistic features from clouds of interest points accumulated over multiple temporal scales followed by automatic feature selection. Our approach avoids the non-trivial problems of selecting the optimal space-time descriptor, clustering algorithm for constructing a codebook, and selecting codebook size faced by previous interest points based methods. Our model is able to capture smooth motions, robust to view changes and occlusions at a low computation cost. Experiments using the KTH and WEIZMANN datasets demonstrate that our approach outperforms most existing methods." ] }
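The HOG3D descriptor adopted in the related work above builds histograms of oriented 3D (x, y, t) gradients over video volumes. A minimal NumPy sketch of the core idea on one small video cube; the crude six-bin signed-axis quantization is a simplification of the descriptor's regular-polyhedron quantization, and the function name is illustrative, not the authors' implementation.

```python
import numpy as np

def spatiotemporal_gradient_histogram(volume, n_bins=6):
    """Toy HOG3D-style step: an orientation histogram of 3D (t, y, x)
    gradients over one video cube. Illustrative only; the real HOG3D
    quantizes orientations over a regular polyhedron, not signed axes."""
    gt, gy, gx = np.gradient(volume.astype(float))   # gradients along t, y, x
    mag = np.sqrt(gt**2 + gy**2 + gx**2)
    grads = np.stack([gx, gy, gt], axis=-1)          # (..., 3)
    dom = np.argmax(np.abs(grads), axis=-1)          # dominant axis: 0=x, 1=y, 2=t
    sign = np.take_along_axis(grads, dom[..., None], axis=-1)[..., 0] >= 0
    bins = dom * 2 + sign.astype(int)                # 6 signed-axis bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    s = hist.sum()
    return hist / s if s > 0 else hist

# a cube whose intensity grows linearly along time: all gradient energy
# should land in the positive-t bin (index 5)
cube = np.arange(5, dtype=float)[:, None, None] * np.ones((5, 4, 4))
h = spatiotemporal_gradient_histogram(cube)
```

The full descriptor concatenates such histograms over a grid of cells within the video volume, which is what makes it sensitive to both spatial gradients and temporal dynamics.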
1601.06260
2949127674
Current person re-identification (ReID) methods typically rely on single-frame imagery features, whilst ignoring space-time information from image sequences often available in the practical surveillance scenarios. Single-frame (single-shot) based visual appearance matching is inherently limited for person ReID in public spaces due to the challenging visual ambiguity and uncertainty arising from non-overlapping camera views where viewing condition changes can cause significant people appearance variations. In this work, we present a novel model to automatically select the most discriminative video fragments from noisy incomplete image sequences of people from which reliable space-time and appearance features can be computed, whilst simultaneously learning a video ranking function for person ReID. Using the PRID @math , iLIDS-VID, and HDA+ image sequence datasets, we extensively conducted comparative evaluations to demonstrate the advantages of the proposed model over contemporary gait recognition, holistic image sequence matching and state-of-the-art single- multi-shot ReID methods.
Space-time information in image sequences has been extensively exploited by gait recognition @cite_23 @cite_32 @cite_53 @cite_33 . However, these methods often make stringent assumptions about the image sequences, e.g. uncluttered background, consistent silhouette extraction and alignment, accurate gait phase estimation and complete gait cycles, most of which are unrealistic in ordinary person ReID scenarios. It is challenging to extract a suitable gait representation from typical ReID data. In contrast, our approach significantly relaxes these assumptions by simultaneously selecting discriminative video fragments from noisy sequences, and learning and matching them without temporal alignment.
{ "cite_N": [ "@cite_53", "@cite_33", "@cite_32", "@cite_23" ], "mid": [ "2126680226", "116734009", "2151458682", "" ], "abstract": [ "In this paper, we propose a new spatio-temporal gait representation, called Gait Energy Image (GEI), to characterize human walking properties for individual recognition by gait. To address the problem of the lack of training templates, we also propose a novel approach for human recognition by combining statistical gait features from real and synthetic templates. We directly compute the real templates from training silhouette sequences, while we generate the synthetic templates from training sequences by simulating silhouette distortion. We use a statistical approach for learning effective features from real and synthetic templates. We compare the proposed GEI-based gait recognition approach with other gait recognition approaches on USF HumanID Database. Experimental results show that the proposed GEI is an effective and efficient gait representation for individual recognition, and the proposed approach achieves highly competitive performance with respect to the published gait recognition approaches", "The advantage of gait over other biometrics such as face or fingerprint is that it can operate from a distance and without subject cooperation. However, this also makes gait subject to changes in various covariate conditions including carrying, clothing, surface and view angle. Existing approaches attempt to address these condition changes by feature selection, feature transformation or discriminant subspace learning. However, they suffer from lack of training samples from each subject, can only cope with changes in a subset of conditions with limited success, and are based on the invalid assumption that the covariate conditions are known a priori. They are thus unable to perform gait recognition under a genuine uncooperative setting. 
We propose a novel approach which casts gait recognition as a bipartite ranking problem and leverages training samples from different classes of people and even from different datasets. This makes our approach suitable for recognition under a genuine uncooperative setting and robust against any covariate types, as demonstrated by our extensive experiments.", "Identification of people by analysis of gait patterns extracted from video has recently become a popular research problem. However, the conditions under which the problem is \"solvable\" are not understood or characterized. To provide a means for measuring progress and characterizing the properties of gait recognition, we introduce the humanID gait challenge problem. The challenge problem consists of a baseline algorithm, a set of 12 experiments, and a large data set. The baseline algorithm estimates silhouettes by background subtraction and performs recognition by temporal correlation of silhouettes. The 12 experiments are of increasing difficulty, as measured by the baseline algorithm, and examine the effects of five covariates on performance. The covariates are: change in viewing angle, change in shoe type, change in walking surface, carrying or not carrying a briefcase, and elapsed time between sequences being compared. Identification rates for the 12 experiments range from 78 percent on the easiest experiment to 3 percent on the hardest. All five covariates had statistically significant effects on performance, with walking surface and time difference having the greatest impact. The data set consists of 1,870 sequences from 122 subjects spanning five covariates (1.2 gigabytes of data). This infrastructure supports further development of gait recognition algorithms and additional experiments to understand the strengths and weaknesses of new algorithms. The more detailed the experimental results presented, the more detailed the possible meta-analysis and the greater the understanding.
It is this potential from the adoption of this challenge problem that represents a radical departure from traditional computer vision research methodology.", "" ] }
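The Gait Energy Image ( @cite_53 above) summarizes a walking sequence by averaging aligned binary silhouettes into one grayscale template. A minimal sketch, assuming the silhouettes are already segmented and aligned; these are precisely the preconditions the related work notes are hard to satisfy in typical ReID data.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a sequence of aligned binary silhouettes, shape (N, H, W),
    into one grayscale gait template with values in [0, 1]."""
    return np.asarray(silhouettes, dtype=float).mean(axis=0)

# two toy 2x2 'silhouettes': pixel (0,0) is foreground in both frames,
# pixel (1,1) in only one, so their energies are 1.0 and 0.5
frames = [np.array([[1, 0], [0, 1]]), np.array([[1, 0], [0, 0]])]
gei = gait_energy_image(frames)
```

Pixels that are foreground in every frame get energy 1, while pixels covered only during part of the gait cycle get fractional values, which is what encodes the walking dynamics.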
1601.06260
2949127674
Current person re-identification (ReID) methods typically rely on single-frame imagery features, whilst ignoring space-time information from image sequences often available in practical surveillance scenarios. Single-frame (single-shot) based visual appearance matching is inherently limited for person ReID in public spaces due to the challenging visual ambiguity and uncertainty arising from non-overlapping camera views where viewing condition changes can cause significant people appearance variations. In this work, we present a novel model to automatically select the most discriminative video fragments from noisy incomplete image sequences of people from which reliable space-time and appearance features can be computed, whilst simultaneously learning a video ranking function for person ReID. Using the PRID @math , iLIDS-VID, and HDA+ image sequence datasets, we extensively conducted comparative evaluations to demonstrate the advantages of the proposed model over contemporary gait recognition, holistic image sequence matching and state-of-the-art single-/multi-shot ReID methods.
Multiple images from a sequence of the same person have been exploited for person re-identification. For example, interest points were accumulated across images to capture appearance variability @cite_61 , manifold geometric structures in image sequences of people were utilised to construct more compact spatial descriptors of people @cite_44 , and the time index of image frames and the identity consistency of a sequence were used to constrain spatial feature similarity estimation @cite_7 . There were also attempts at training a person appearance model from image sets @cite_24 or by selecting the best pairs @cite_42 . Multiple images of a person sequence were often used either to enhance spatial feature descriptions of local image regions or patches @cite_50 @cite_49 @cite_5 @cite_21 , or to extract additional appearance information such as appearance change statistics @cite_52 . In contrast, the proposed model aims to simultaneously select and match discriminative video appearance and space-time features for maximising cross-view identity ranking. Our experiments show the advantages of the proposed model over existing multi-shot models for person ReID.
{ "cite_N": [ "@cite_61", "@cite_7", "@cite_42", "@cite_21", "@cite_52", "@cite_44", "@cite_24", "@cite_50", "@cite_49", "@cite_5" ], "mid": [ "2107475454", "", "2047632871", "", "", "199203893", "2100603339", "", "1979260620", "" ], "abstract": [ "We present and evaluate a person re-identification scheme for multi-camera surveillance system. Our approach uses matching of signatures based on interest-points descriptors collected on short video sequences. One of the originalities of our method is to accumulate interest points on several sufficiently time-spaced images during person tracking within each camera, in order to capture appearance variability. A first experimental evaluation conducted on a publicly available set of low-resolution videos in a commercial mall shows very promising inter-camera person re-identification performances (a precision of 82 for a recall of 78 ). It should also be noted that our matching method is very fast: 1 8s for re-identification of one target person among 10 previously seen persons, and a logarithmic dependence with the number of stored person models, making re- identification among hundreds of persons computationally feasible in less than 1 5 second.", "", "In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms and apply it to person re-identification. It jointly partitions the image spaces of two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by being projected to a common feature space and then matched with softly assigned metrics which are locally optimized. The features optimal for recognizing identities are different from those for clustering cross-view transforms. They are jointly learned by utilizing sparsity-inducing norm and information theoretical regularization. 
This approach can be generalized to the settings where test images are from new camera views, not the same as those in the training set. Extensive experiments are conducted on public datasets and our own dataset. Comparisons with the state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.", "", "", "This paper presents a solution of the appearance-based people re-identification problem in a surveillance system including multiple cameras with different fields of vision. We first utilize different color-based features, combined with several illuminant invariant normalizations in order to characterize the silhouettes in static frames. A graph-based approach which is capable of learning the global structure of the manifold and preserving the properties of the original data in a lower dimensional representation is then introduced to reduce the effective working space and to realize the comparison of the video sequences. The global system was tested on a real data set collected by two cameras installed on board a train. The experimental results show that the combination of color-based features, invariant normalization procedures and the graph-based approach leads to very satisfactory results.", "We describe a system that learns from examples to recognize persons in images taken indoors. Images of full-body persons are represented by color-based and shape-based features. Recognition is carried out through combinations of Support Vector Machine (SVM) classifiers. Different types of multi-class strategies based on SVMs are explored and compared to k-Nearest Neighbors classifiers. The experimental results show high recognition rates and indicate the strength of SVM-based classifiers to improve both generalization and run-time performance. The system works in real-time.", "", "In this paper, we present an appearance-based method for person re-identification. 
It consists in the extraction of features that model three complementary aspects of the human appearance: the overall chromatic content, the spatial arrangement of colors into stable regions, and the presence of recurrent local motifs with high entropy. All this information is derived from different body parts, and weighted opportunely by exploiting symmetry and asymmetry perceptual principles. In this way, robustness against very low resolution, occlusions and pose, viewpoint and illumination changes is achieved. The approach applies to situations where the number of candidates varies continuously, considering single images or bunch of frames for each individual. It has been tested on several public benchmark datasets (ViPER, iLIDS, ETHZ), gaining new state-of-the-art performances.", "" ] }
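Multi-shot matching as surveyed above often reduces to comparing sets of per-frame feature vectors across two sequences. A common simple baseline (not the ranking model proposed in the paper) takes the minimum pairwise distance between frames of the two sequences; a sketch with made-up 2-D frame features:

```python
import numpy as np

def multishot_distance(seq_a, seq_b):
    """Simple multi-shot matching baseline: the distance between two
    people is the minimum Euclidean distance over all cross-sequence
    frame pairs. A common baseline, not the paper's ranking model."""
    a = np.asarray(seq_a, dtype=float)   # (Na, D) per-frame features
    b = np.asarray(seq_b, dtype=float)   # (Nb, D)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min()

# made-up 2-D frame features: a close match vs. a distant one
same = multishot_distance([[0, 0], [1, 0]], [[1, 0.1], [5, 5]])
diff = multishot_distance([[0, 0], [1, 0]], [[8, 8], [9, 9]])
```

Taking the minimum over frame pairs makes the baseline tolerant of noisy frames in one sequence, but unlike the proposed model it neither selects discriminative fragments nor learns a ranking function.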
1601.06128
2252902088
Buses are the primary means of public transportation in the city of Rio de Janeiro, carrying around 100 million passengers every month. Recently, the real-time GPS coordinates of all operating public buses have been made publicly available - roughly 1 million GPS entries are captured each day. In an initial study, we observed that a substantial number of buses follow trajectories that deviate from the expected behavior. In this paper, we present RioBusData, a tool that helps users identify and explore, through different visualizations, the behavior of outlier trajectories. We describe how the system automatically detects these outliers using a Convolutional Neural Network (CNN) and we also discuss a series of case studies which show how RioBusData helps users better understand not only the flow and service of outlier buses but also the bus system as a whole.
Novotny and Hauser @cite_7 presented a method for focus+context visualization in parallel coordinates in which, through a binning and filtering algorithm, certain points are detected as outliers. This work differs from RioBusData in significant ways. First, in their method outlier detection is performed through the visualization itself; in contrast, RioBusData allows users to visually explore previously identified outliers. Second, their detected outliers are individual, uncorrelated points, whereas in RioBusData we deal with trajectories consisting of sequences of (related) GPS entries. Last, but not least, RioBusData was designed for the exploration of spatio-temporal data sets.
{ "cite_N": [ "@cite_7" ], "mid": [ "2129086861" ], "abstract": [ "Focus+context visualization integrates a visually accentuated representation of selected data items in focus (more details, more opacity, etc.) with a visually deemphasized representation of the rest of the data, i.e., the context. The role of context visualization is to provide an overview of the data for improved user orientation and improved navigation. A good overview comprises the representation of both outliers and trends. Up to now, however, context visualization not really treated outliers sufficiently. In this paper we present a new approach to focus+context visualization in parallel coordinates which is truthful to outliers in the sense that small-scale features are detected before visualization and then treated specially during context visualization. Generally, we present a solution which enables context visualization at several levels of abstraction, both for the representation of outliers and trends. We introduce outlier detection and context generation to parallel coordinates on the basis of a binned data representation. This leads to an output-oriented visualization approach which means that only those parts of the visualization process are executed which actually affect the final rendering. Accordingly, the performance of this solution is much more dependent on the visualization size than on the data size which makes it especially interesting for large datasets. Previous approaches are outperformed, the new solution was successfully applied to datasets with up to 3 million data records and up to 50 dimensions" ] }
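The binning step underlying Novotny and Hauser's output-oriented approach can be illustrated with a simple frequency rule: records hashed into sparsely populated bins are treated as small-scale features (outliers) and rendered separately from the context. This sketch is a loose 2-D illustration of that idea, not their actual parallel-coordinates algorithm.

```python
import numpy as np

def flag_bin_outliers(points, n_bins=4, min_count=2):
    """Flag 2-D points that fall into histogram bins holding fewer than
    `min_count` points. A loose illustration of bin-based outlier
    filtering, not the original parallel-coordinates algorithm."""
    pts = np.asarray(points, dtype=float)
    idx = []
    for d in range(2):
        edges = np.linspace(pts[:, d].min(), pts[:, d].max(), n_bins + 1)
        # digitize against the inner edges so max values stay in the last bin
        idx.append(np.clip(np.digitize(pts[:, d], edges[1:-1]), 0, n_bins - 1))
    flat = idx[0] * n_bins + idx[1]                  # joint 2-D bin index
    counts = np.bincount(flat, minlength=n_bins * n_bins)
    return counts[flat] < min_count                  # True -> outlier

# a dense cluster near the origin plus one isolated point
pts = [(0.0, 0.0), (0.1, 0.1), (0.05, 0.0), (0.1, 0.0), (3.0, 3.0)]
mask = flag_bin_outliers(pts)
```

Because only bin counts (not raw records) drive the decision, the cost of this filtering scales with the number of bins rather than the data size, which is the property that makes the cited approach attractive for large datasets.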
1601.06128
2252902088
Buses are the primary means of public transportation in the city of Rio de Janeiro, carrying around 100 million passengers every month. Recently, the real-time GPS coordinates of all operating public buses have been made publicly available - roughly 1 million GPS entries are captured each day. In an initial study, we observed that a substantial number of buses follow trajectories that deviate from the expected behavior. In this paper, we present RioBusData, a tool that helps users identify and explore, through different visualizations, the behavior of outlier trajectories. We describe how the system automatically detects these outliers using a Convolutional Neural Network (CNN) and we also discuss a series of case studies which show how RioBusData helps users better understand not only the flow and service of outlier buses but also the bus system as a whole.
More closely related to RioBusData is the work by @cite_0 . They described a web-based visualization package that summarizes spatial patterns and temporal trends, and proposed data mining algorithms for filtering data sets to identify spatial outlier patterns which, like RioBusData, were implemented and tested using a real-world traffic data set. RioBusData differs from this work in that the outliers are automatically detected beforehand with a machine learning model, and the visualizations are used to inspect, understand, and act upon the already processed information. Finally, a simulation-based method has been proposed that helps in the visual detection of outliers in spatio-temporal data by adjusting functional boxplots @cite_5 . This work also differs from RioBusData in that the detection of outliers is one of the goals of the visualizations themselves. Moreover, the proposed method is not suited to large, continuous streams of data, as is the case with the data set explored in our work.
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2099181838", "2159426136" ], "abstract": [ "Data mining is the process of extracting implicit, valuable, and interesting information from large sets of data. Visualization is the process of visually exploring data for pattern and trend analysis, and it is a common method of browsing spatial datasets to look for patterns. However the growing volume of spatial datasets make it difficult for humans to browse such datasets in their entirety, and data mining algorithms are needed to filter out large uninteresting parts of spatial datasets. We construct a web-based visualization software package for observing the summarization of spatial patterns and temporal trends. We also present data mining algorithms for filtering out vast parts of datasets for spatial outlier patterns. The algorithms were implemented and tested with a real-world set of Minneapolis-St. Paul (Twin Cities) traffic data.", "This article proposes a simulation-based method to adjust functional boxplots for correlations when visualizing functional and spatio-temporal data, as well as detecting outliers. We start by investigating the relationship between the spatiotemporal dependence and the 1.5 times the 50 central region empirical outlier detection rule. Then, we propose to simulate observations without outliers on the basis of a robust estimator of the covariance function of the data. We select the constant factor in the functional boxplot to control the probability of correctly detecting no outliers. Finally, we apply the selected factor to the functional boxplot of the original data. As applications, the factor selection procedure and the adjusted functional boxplots are demonstrated on sea surface temperatures, spatio-temporal precipitation and general circulation model (GCM) data. The outlier detection performance is also compared before and after the factor adjustment. Copyright © 2011 John Wiley & Sons, Ltd." ] }
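The spatial-outlier filtering discussed above can be sketched with a simple robust-distance rule. This is a hypothetical illustration, not the algorithm of the cited works: a trajectory point is flagged when its distance to a robust route center exceeds a MAD-based threshold, with the function name and constant `k` chosen for illustration only.

```python
import numpy as np

def spatial_outliers(points, k=3.0):
    """Flag trajectory points far from the bulk of a route.

    Hypothetical sketch of spatial-outlier filtering: a point is
    flagged when its distance to the coordinate-wise median exceeds
    the median distance plus k times the median absolute deviation
    (MAD) of all distances. `points` is an (n, 2) array of
    (lat, lon) pairs.
    """
    points = np.asarray(points, dtype=float)
    center = np.median(points, axis=0)           # robust route center
    dists = np.linalg.norm(points - center, axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med))
    if mad == 0:                                 # degenerate: all points identical
        mad = 1e-9
    return dists > med + k * mad
```

A MAD-based threshold is used instead of mean/standard deviation because a single extreme point would otherwise inflate the threshold and mask itself.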
1601.06128
2252902088
Buses are the primary means of public transportation in the city of Rio de Janeiro, carrying around 100 million passengers every month. Recently, real-time GPS coordinates of all operating public buses have been made publicly available - roughly 1 million GPS entries captured each day. In an initial study, we observed that a substantial number of buses follow trajectories that deviate from the expected behavior. In this paper, we present RioBusData, a tool that helps users identify and explore, through different visualizations, the behavior of outlier trajectories. We describe how the system automatically detects these outliers using a Convolutional Neural Network (CNN), and we also discuss a series of case studies which show how RioBusData helps users better understand not only the flow and service of outlier buses but also the bus system as a whole.
The detection of outliers in temporal series has not been widely explored in the context of Deep Learning algorithms. However, various works have already demonstrated the potential of neural network techniques for time series tasks. In @cite_2 , for example, deep neural networks were applied to ultra-short-term wind prediction, and the results showed that deep neural networks outperform shallow architectures.
{ "cite_N": [ "@cite_2" ], "mid": [ "1519238132" ], "abstract": [ "The aim of this paper is to present an input variable selection algorithm and a deep neural network application to ultra-short-term wind prediction. Shallow and deep neural networks coupled with the input variable selection algorithm are compared on the ultra-short-term wind prediction task for a set of different locations. Results show that carefully selected deep neural networks outperform shallow ones. The use of input variable selection reduces the neural network complexity and simplifies deep neural network training." ] }
1601.06128
2252902088
Buses are the primary means of public transportation in the city of Rio de Janeiro, carrying around 100 million passengers every month. Recently, real-time GPS coordinates of all operating public buses have been made publicly available - roughly 1 million GPS entries captured each day. In an initial study, we observed that a substantial number of buses follow trajectories that deviate from the expected behavior. In this paper, we present RioBusData, a tool that helps users identify and explore, through different visualizations, the behavior of outlier trajectories. We describe how the system automatically detects these outliers using a Convolutional Neural Network (CNN), and we also discuss a series of case studies which show how RioBusData helps users better understand not only the flow and service of outlier buses but also the bus system as a whole.
The deep models are compared to a classical Dynamic Time Warping (DTW) approach, and the results indicated that the deep model is not only more efficient (especially for large data sets) than the state of the art but also more accurate on two standard benchmarks. More closely related to our approach is the work by @cite_6 . They used Replicator Neural Networks (RNNs) to reconstruct the input sample and, once the model was trained, samples with a high reconstruction error were marked as outliers. Our approach to outlier detection is similar to this work in the sense that both models learn only frequent samples, while uncommon samples yield higher errors. On the other hand, RioBusData does not use a sample reconstruction approach: instead of applying RNNs over the data, it uses CNNs. To the best of our knowledge, RioBusData is the first application of CNNs to outlier detection, and, as outlined in , the results are promising.
{ "cite_N": [ "@cite_6" ], "mid": [ "1876967670" ], "abstract": [ "We consider the problem of finding outliers in large multivariate databases. Outlier detection can be applied during the data cleansing process of data mining to identify problems with the data itself, and to fraud detection where groups of outliers are often of particular interest. We use replicator neural networks (RNNs) to provide a measure of the outlyingness of data records. The performance of the RNNs is assessed using a ranked score measure. The effectiveness of the RNNs for outlier detection is demonstrated on two publicly available databases." ] }
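The reconstruction-error idea behind the replicator-network detector can be sketched without training a neural network: fit any low-capacity reconstruction of the data and score each sample by how badly it reconstructs. The sketch below substitutes a rank-k linear autoencoder (computed via truncated SVD, i.e. PCA) for a trained RNN; the function name and the choice of PCA are illustrative assumptions, not the cited method.

```python
import numpy as np

def reconstruction_error_scores(X, n_components=1):
    """Outlier scores via reconstruction error of a linear autoencoder.

    Sketch of the replicator-network idea: frequent patterns lie near
    the learned low-dimensional subspace and reconstruct well (low
    score); rare samples reconstruct badly (high score).
    """
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    Xc = X - mu
    # Truncated SVD gives the best rank-k linear reconstruction.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T            # top principal directions, shape (d, k)
    X_hat = Xc @ V @ V.T + mu          # project onto subspace and reconstruct
    return np.linalg.norm(X - X_hat, axis=1)
```

Samples would then be ranked by score and the top fraction (or those above a threshold) flagged as outliers, mirroring the high-reconstruction-error criterion of the RNN approach.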
1601.05569
2268081922
As research in the Internet of Things area progresses, and a multitude of proposals exist to solve a variety of problems, the need arises for a general, principled software engineering approach to the systematic development of IoT systems and applications. In this paper, by synthesizing from the state of the art in the area, we attempt to frame the key concepts and abstractions that revolve around the design and development of IoT systems and applications, and draft a software engineering methodology centered on these abstractions.
Some proposals for development frameworks for the IoT or for the WoT (whether middleware architectures @cite_28 or programming models @cite_19 @cite_23 ) are also accompanied by guidelines for the development of applications. However, such guidelines are not grounded in general abstractions and do not have general applicability beyond the specific framework in which they were conceived. Similar considerations apply to the area of smart cities and urban computing @cite_42 , where middleware and programming approaches are being proposed -- mostly of a special-purpose nature and focused on specific application scenarios such as participatory sensing @cite_8 @cite_18 or mobility management @cite_14 -- but without accounting for the issue of defining general design and development methodologies.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_28", "@cite_42", "@cite_19", "@cite_23" ], "mid": [ "", "2062962580", "1978867192", "2029420318", "2150857078", "", "1992572367" ], "abstract": [ "", "In this article, the authors introduce the vision of smart social mobility services, noting that widespread deployment will require the identification and implementation of a general-purpose coordination infrastructure to support the effective realization of such services.", "In this paper, we introduce MobIoT, a service-oriented middleware that enables large-scale mobile participatory sensing. Scalability is achieved by limiting the participation of redundant sensing devices. Precisely, MobIoT allows a new device to register its services only if it increases the sensing coverage of a physical attribute, along its expected path, for the set of registered devices. We present the design and implementation of MobIoT, which mobile devices use to determine their registration decision and become accessible for their services. Through experiments performed on real datasets, we show that our solution scales, while meeting sensing coverage requirements.", "The Web of Things extends the Internet of Things by leveraging Web-based languages and protocols to access and control each physical object. In this article, the authors summarize ongoing work promoting the concept of an avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behavior. Avatars rely on Web languages, protocols, and reason about semantic annotations to dynamically drive connected objects, exploit their capabilities, and expose user-understandable functionalities as Web services. 
Avatars are also able to collaborate together to achieve complex tasks.", "Sooner or later, we'll all become part of an urban superorganism, putting our ICT devices and unique human capabilities to use for the good of both ourselves and society.", "", "Swarmlets are applications and services that leverage networked sensors and actuators with cloud services and mobile devices. This article offers a way to construct swarmlets by composing \"accessors,\"' which are wrappers for sensors, actuators, and services, that export an actor interface. Actor semantics provides ways to compose accessors with disciplined and understandable concurrency models, while hiding from the swarmlet the details of the mechanisms by which the accessor provides sensor data, controls an actuator, or accesses a service. This architecture can leverage the enormous variety of mechanisms that have emerged for such interactions, including HTTP, Websockets, CoAP, and MQTT. Recognizing that these standards have emerged because of huge variability of requirements for bandwidth, latency, and security, accessors embrace heterogeneity instead of attempting to homogenize." ] }
1601.05495
2949762901
Rank aggregation systems collect ordinal preferences from individuals to produce a global ranking that represents the social preference. Rank-breaking is a common practice to reduce the computational complexity of learning the global ranking. The individual preferences are broken into pairwise comparisons and applied to efficient algorithms tailored for independent paired comparisons. However, due to the ignored dependencies in the data, naive rank-breaking approaches can result in inconsistent estimates. The key idea to produce accurate and consistent estimates is to treat the pairwise comparisons unequally, depending on the topology of the collected data. In this paper, we provide the optimal rank-breaking estimator, which not only achieves consistency but also achieves the best error bound. This allows us to characterize the fundamental tradeoff between accuracy and complexity. Further, the analysis identifies how the accuracy depends on the spectral gap of a corresponding comparison graph.
In an orthogonal direction, new discrete choice models with sparse structures have been proposed recently in @cite_18 , and optimization algorithms for revenue management have been proposed in @cite_6 . In a similar direction, new discrete choice models based on Markov chains have been introduced in @cite_2 , and corresponding revenue management algorithms have been studied in @cite_51 . However, these models are typically analyzed in the asymptotic regime with infinite samples, with the exception of @cite_53 . Non-parametric choice models for pairwise comparisons have also been studied in @cite_50 @cite_34 . This provides interesting opportunities for studying learning to rank under these new choice models.
{ "cite_N": [ "@cite_18", "@cite_53", "@cite_6", "@cite_50", "@cite_2", "@cite_34", "@cite_51" ], "mid": [ "2148039157", "2066389806", "2098763115", "2140560120", "2420698967", "1815702575", "2737040173" ], "abstract": [ "We visit the following fundamental problem: For a 'generic' model of consumer choice (namely, distributions over preference lists) and a limited amount of data on how consumers actually make decisions (such as marginal preference information), how may one predict revenues from offering a particular assortment of choices? This problem is central to areas within operations research, marketing and econometrics. We present a framework to answer such questions and design a number of tractable algorithms (from a data and computational standpoint) for the same.", "The need to rank items based on user input arises in many practical applications such as elections, group-decision making and recommendation systems. The primary challenge in such scenarios is to decide a global ranking based on partial preferences provides by users. The standard approach to address this challenge is to ask users to provide explicit numerical ratings (cardinal information) of a subset of items. The main appeal of such an approach is the ease of aggregation. However, the rating scale as well as the individual ratings are often arbitrary and may not be consistent from one user to another. A more natural alternative to numerical ratings requires users to compare pairs of items (ordinal information). In contrast to cardinal information, such comparisons provide an “absolute” indicator of the user's preference. However, it is often hard to combine or aggregate comparisons to obtain a consistent global ranking. In this work, we provide a tractable framework for utilizing comparison data as well as first-order marginal information for the purpose of ranking. We treat the available information as partial samples from an unknown distribution over permutations. 
Using the Principle of Maximum Entropy, we devise a concise parameterization of distribution consistent with observations using only O(n2) parameters, where n is the number of items in question. We propose a distributed, iterative algorithm for estimating the parameters of the distribution. We establish the correctness of the algorithm as well as identify the rate of convergence explicitly. Using the learnt distribution, we provide efficient approach to (a) learn the mode of the distribution using ‘maximum weight matching’, (b) identification of top k items, and (c) an aggregate ranking of all n items. Through evaluation of our approach on real-data, we verify effectiveness of our solutions as well as the scalability of the algorithm.", "Choice models today are ubiquitous across a range of applications in operations and marketing. Real-world implementations of many of these models face the formidable stumbling block of simply identifying the “right” model of choice to use. Because models of choice are inherently high-dimensional objects, the typical approach to dealing with this problem is positing, a priori, a parametric model that one believes adequately captures choice behavior. This approach can be substantially suboptimal in scenarios where one cares about using the choice model learned to make fine-grained predictions; one must contend with the risks of mis-specification and overfitting underfitting. Thus motivated, we visit the following problem: For a “generic” model of consumer choice namely, distributions over preference lists and a limited amount of data on how consumers actually make decisions such as marginal information about these distributions, how may one predict revenues from offering a particular assortment of choices? An outcome of our investigation is a nonparametric approach in which the data automatically select the right choice model for revenue predictions. The approach is practical. 
Using a data set consisting of automobile sales transaction data from a major U.S. automaker, our method demonstrates a 20 improvement in prediction accuracy over state-of-the-art benchmark models; this improvement can translate into a 10 increase in revenues from optimizing the offer set. We also address a number of theoretical issues, among them a qualitative examination of the choice models implicitly learned by the approach. We believe that this paper takes a step toward “automating” the crucial task of choice model selection. This paper was accepted by Yossi Aviv, operations management.", "There has been much interest recently in the problem of rank aggregation from pairwise data. A natural question that arises is: under what sorts of statistical assumptions do various rank aggregation algorithms converge to an 'optimal' ranking? In this paper, we consider this question in a natural setting where pairwise comparisons are drawn randomly and independently from some underlying probability distribution. We first show that, under a 'time-reversibility' or Bradley-Terry-Luce (BTL) condition on the distribution, the rank centrality (PageRank) and least squares (HodgeRank) algorithms both converge to an optimal ranking. Next, we show that a matrix version of the Borda count algorithm, and more surprisingly, an algorithm which performs maximum likelihood estimation under a BTL assumption, both converge to an optimal ranking under a 'low-noise' condition that is strictly more general than BTL. Finally, we propose a new SVM-based algorithm for rank aggregation from pairwise data, and show that this converges to an optimal ranking under an even more general condition that we term 'generalized low-noise'. In all cases, we provide explicit sample complexity bounds for exact recovery of an optimal ranking. 
Our experiments confirm our theoretical findings and help to shed light on the statistical behavior of various rank aggregation algorithms.", "Assortment planning is an important problem that arises in many industries such as retailing and airlines. One of the key challenges in an assortment planning problem is to identify the “right” model for the substitution behavior of customers from the data. Error in model selection can lead to highly suboptimal decisions. In this paper, we consider a Markov chain based choice model and show that it provides a simultaneous approximation for all random utility based discrete choice models including the multinomial logit (MNL), the probit, the nested logit and mixtures of multinomial logit models. In the Markov chain model, substitution from one product to another is modeled as a state transition in the Markov chain. We show that the choice probabilities computed by the Markov chain based model are a good approximation to the true choice probabilities for any random utility based choice model under mild conditions. Moreover, they are exact if the underlying model is a generalized attraction model (GAM) of which the MNL model is a special case. We also show that the assortment optimization problem for our choice model can be solved efficiently in polynomial time. In addition to the theoretical bounds, we also conduct numerical experiments and observe that the average maximum relative error of the choice probabilities of our model with respect to the true probabilities for any offer set is less than 3 where the average is taken over different offer sets. Therefore, our model provides a tractable approach to choice modeling and assortment optimization that is robust to model selection errors. 
Moreover, the state transition primitive for substitution provides interesting insights to model the substitution behavior in many real-world applications.", "There are various parametric models for analyzing pairwise comparison data, including the Bradley-Terry-Luce (BTL) and Thurstone models, but their reliance on strong parametric assumptions is limiting. In this work, we study a flexible model for pairwise comparisons, under which the probabilities of outcomes are required only to satisfy a natural form of stochastic transitivity. This class includes parametric models including the BTL and Thurstone models as special cases, but is considerably more general. We provide various examples of models in this broader stochastically transitive class for which classical parametric models provide poor fits. Despite this greater flexibility, we show that the matrix of probabilities can be estimated at the same rate as in standard parametric models. On the other hand, unlike in the BTL and Thurstone models, computing the minimax-optimal estimator in the stochastically transitive model is non-trivial, and we explore various computationally tractable alternatives. We show that a simple singular value thresholding algorithm is statistically consistent but does not achieve the minimax rate. We then propose and study algorithms that achieve the minimax rate over interesting sub-classes of the full stochastically transitive class. We complement our theoretical results with thorough numerical simulations.", "We consider revenue management problems when customers choose among the offered products according to the Markov chain choice model. In this choice model, a customer arrives into the system to purchase a particular product. If this product is available for purchase, then the customer purchases it. Otherwise, the customer transitions to another product or to the no purchase option, until she reaches an available product or the no purchase option. 
We consider three classes of problems. First, we study assortment problems, where the goal is to find a set of products to offer to maximize the expected revenue obtained from each customer. We give a linear program to obtain the optimal solution. Second, we study single resource revenue management problems, where the goal is to adjust the set of offered products over a selling horizon when the sale of each product consumes the resource. We show how the optimal set of products to offer changes with the remaining resource inventory. Third, we study network revenue ..." ] }
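Naive full rank-breaking, the practice whose consistency the rank-aggregation paper analyzes, can be sketched as follows: every full ranking is shredded into all of its implied pairwise comparisons, which are then fed to a standard estimator for independent paired comparisons — here the Bradley-Terry-Luce model fit by the classical MM (Zermelo) iteration. This is a minimal sketch under those assumptions; the function names are illustrative.

```python
import itertools
import numpy as np

def rank_break(rankings):
    """Break full rankings into (winner, loser) pairwise comparisons.

    Naive full breaking: every ordered pair implied by a ranking is
    emitted as if it were an independent comparison, ignoring the
    dependencies induced by the common ranking.
    """
    pairs = []
    for r in rankings:
        for i, j in itertools.combinations(range(len(r)), 2):
            pairs.append((r[i], r[j]))   # r[i] is ranked above r[j]
    return pairs

def btl_mm(pairs, n_items, n_iter=200):
    """Estimate Bradley-Terry-Luce scores via the MM (Zermelo) iteration."""
    wins = np.zeros(n_items)
    counts = np.zeros((n_items, n_items))   # total comparisons per pair
    for w, l in pairs:
        wins[w] += 1
        counts[w, l] += 1
        counts[l, w] += 1
    scores = np.ones(n_items)
    for _ in range(n_iter):
        denom = np.zeros(n_items)
        for i in range(n_items):
            for j in range(n_items):
                if counts[i, j]:
                    denom[i] += counts[i, j] / (scores[i] + scores[j])
        scores = wins / denom
        scores /= scores.sum()              # fix the scale
    return scores
```

Weighting all broken pairs equally, as above, is exactly the naive scheme that can be inconsistent; the paper's contribution is to reweight the pairs according to the topology of the collected data.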
1601.05495
2949762901
Rank aggregation systems collect ordinal preferences from individuals to produce a global ranking that represents the social preference. Rank-breaking is a common practice to reduce the computational complexity of learning the global ranking. The individual preferences are broken into pairwise comparisons and applied to efficient algorithms tailored for independent paired comparisons. However, due to the ignored dependencies in the data, naive rank-breaking approaches can result in inconsistent estimates. The key idea to produce accurate and consistent estimates is to treat the pairwise comparisons unequally, depending on the topology of the collected data. In this paper, we provide the optimal rank-breaking estimator, which not only achieves consistency but also achieves the best error bound. This allows us to characterize the fundamental tradeoff between accuracy and complexity. Further, the analysis identifies how the accuracy depends on the spectral gap of a corresponding comparison graph.
We consider a fixed design setting, where inference is separate from data collection. There is a parallel line of research which focuses on adaptive ranking, mainly based on pairwise comparisons. For sorting from noisy pairwise comparisons, efficient approaches with performance guarantees were proposed in @cite_54 . Following this work, there have been recent advances in adaptive ranking @cite_10 @cite_7 @cite_39 .
{ "cite_N": [ "@cite_54", "@cite_10", "@cite_7", "@cite_39" ], "mid": [ "1584796555", "2161073979", "2155817812", "" ], "abstract": [ "This paper studies problems of inferring order given noisy information. In these problems there is an unknown order (permutation) @math on @math elements denoted by @math . We assume that information is generated in a way correlated with @math . The goal is to find a maximum likelihood @math given the information observed. We will consider two different types of observations: noisy comparisons and noisy orders. The data in Noisy orders are permutations given from an exponential distribution correlated with (this is also called the Mallow's model). The data in Noisy Comparisons is a signal given for each pair of elements which is correlated with their true ordering. In this paper we present polynomial time algorithms for solving both problems with high probability. As part of our proof we show that for both models the maximum likelihood solution @math is close to the original permutation @math . Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.", "Given a set V of n elements we wish to linearly order them using pairwise preference labels which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: The number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most O(n poly(log n, e-1)) preference labels for a regret of e times the optimal loss. This is strictly better, and often significantly better than what non-adaptive sampling could achieve. 
Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?", "This paper examines the problem of ranking a collection of objects using pairwise comparisons (rankings of two objects). In general, the ranking of @math objects can be identified by standard sorting methods using @math pairwise comparisons. We are interested in natural situations in which relationships among the objects may allow for ranking using far fewer pairwise comparisons. Specifically, we assume that the objects can be embedded into a @math -dimensional Euclidean space and that the rankings reflect their relative distances from a common reference point in @math . We show that under this assumption the number of possible rankings grows like @math and demonstrate an algorithm that can identify a randomly selected ranking using just slightly more than @math adaptively selected pairwise comparisons, on average. If instead the comparisons are chosen at random, then almost all pairwise comparisons must be made in order to identify any ranking. In addition, we propose a robust, error-tolerant algorithm that only requires that the pairwise comparisons are probably correct. Experimental studies with synthetic and real datasets support the conclusions of our theoretical analysis.", "" ] }
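A toy version of sorting from noisy pairwise comparisons: each query to an unreliable comparator (error probability p < 1/2) is repeated and resolved by majority vote, so the effective per-query error decays exponentially in the number of repetitions, and binary insertion sort then needs O(n log n) boosted queries. This is an illustrative sketch of the general idea, not the algorithm of the cited works.

```python
def noisy_sort(items, noisy_less, reps=25):
    """Binary insertion sort driven by a noisy comparison oracle.

    `noisy_less(a, b)` returns a possibly wrong answer to "a < b?";
    repeating each query `reps` times and taking a majority vote
    turns a comparator with error p < 1/2 into a reliable one.
    """
    def less(a, b):
        votes = sum(noisy_less(a, b) for _ in range(reps))
        return votes * 2 > reps

    out = []
    for x in items:
        lo, hi = 0, len(out)
        while lo < hi:                  # binary search for insertion point
            mid = (lo + hi) // 2
            if less(out[mid], x):
                lo = mid + 1
            else:
                hi = mid
        out.insert(lo, x)
    return out
```

Adaptive schemes in the literature are smarter than this fixed-repetition vote — they spend more comparisons only on the pairs that remain ambiguous — but the boosting intuition is the same.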
1601.05595
2263213887
A code symbol of a linear code is said to have locality r if this symbol can be recovered from at most r other code symbols. An (n,k,r) locally repairable code (LRC) with all symbol locality is a linear code with length n, dimension k, and locality r for all symbols. Recently, there have been many studies on the bounds and constructions of LRCs, most of which are essentially based on the generator matrix of the linear code. Up to now, the most important bounds of minimum distance for LRCs might be the well-known Singleton-like bound and the Cadambe-Mazumdar bound concerning the field size. In this paper, we study the bounds and constructions of LRCs from the perspective of parity-check matrices. Firstly, we set up a new characterization of the parity-check matrix for an LRC. Then, the proposed parity-check matrix is employed to analyze the minimum distance. We give an alternative simple proof of the well-known Singleton-like bound for LRCs with all symbol locality, and then easily generalize it to a more general bound, which essentially coincides with the Cadambe-Mazumdar bound and includes the Singleton-like bound as a specific case. Based on the proposed characterization of parity-check matrices, necessary conditions of meeting the Singleton-like bound are obtained, which naturally lead to a construction framework of good LRCs. Finally, two classes of optimal LRCs based on linearized polynomial theories and Vandermonde matrices are obtained under the construction framework.
These codes were first discussed in @cite_18 @cite_32 . If a code symbol of an LRC has @math disjoint repair sets, each of which has size at most @math , then the code symbol is said to have @math @cite_18 . Ankit @cite_18 and Wang @cite_32 derived upper bounds on the minimum distance for LRCs with locality @math and availability @math for information symbols. @cite_27 @cite_26 also derived some bounds for LRCs with availability @math . @cite_0 @cite_18 @cite_32 @cite_23 @cite_16 @cite_36 constructed LRCs with availability.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_36", "@cite_32", "@cite_0", "@cite_27", "@cite_23", "@cite_16" ], "mid": [ "2011081940", "2025216096", "1639538057", "2018102393", "1997044393", "1969158823", "2080339104", "1678309266" ], "abstract": [ "This paper studies the problem of code symbol availability: a code symbol is said to have @math -availability if it can be reconstructed from @math disjoint groups of other symbols, each of size at most @math . For example, @math -replication supports @math -availability as each symbol can be read from its @math other (disjoint) replicas, i.e., @math . However, the rate of replication must vanish like @math as the availability increases. This paper shows that it is possible to construct codes that can support a scaling number of parallel reads while keeping the rate to be an arbitrarily high constant. It further shows that this is possible with the minimum distance arbitrarily close to the Singleton bound. This paper also presents a bound demonstrating a trade-off between minimum distance, availability and locality. Our codes match the aforementioned bound and their construction relies on combinatorial objects called resolvable designs. From a practical standpoint, our codes seem useful for distributed storage applications involving hot data, i.e., the information which is frequently accessed by multiple processes in parallel.", "A locally recoverable code (LRC code) is a code over a finite alphabet such that every symbol in the encoding is a function of a small number of other symbols that form a recovering set. Bounds on the rate and distance of such codes have been extensively studied in the literature. In this paper we derive upper bounds on the rate and distance of codes in which every symbol has @math disjoint recovering sets.", "In this work, we present a new upper bound on the minimum distance d of linear locally repairable codes (LRCs) with information locality and availability. 
The bound takes into account the code length n, dimension k, locality r, availability t, and field size q. We use tensor product codes to construct several families of LRCs with information locality, and then we extend the construction to design LRCs with information locality and availability. Some of these codes are shown to be optimal with respect to their minimum distance, achieving the new bound. Finally, we study the all-symbol locality and availability properties of several classes of one-step majority-logic decodable codes, including cyclic simplex codes, cyclic difference-set codes, and 4-cycle free regular low-density parity-check (LDPC) codes. We also investigate their optimality using the new bound.", "In distributed storage systems, erasure codes with locality (r ) are preferred because a coordinate can be locally repaired by accessing at most (r ) other coordinates which in turn greatly reduces the disk I O complexity for small (r ) . However, the local repair may not be performed when some of the (r ) coordinates are also erased. To overcome this problem, we propose the ((r, )_ c ) -locality providing ( -1 ) nonoverlapping local repair groups of size no more than (r ) for a coordinate. Consequently, the repair locality (r ) can tolerate ( -1 ) erasures in total. We derive an upper bound on the minimum distance for any linear ([n,k] ) code with information ((r, )_ c ) -locality. Then, we prove existence of the codes that attain this bound when (n k(r( -1)+1) ) . Although the locality ((r, ) ) defined by provides the same level of locality and local repair tolerance as our definition, codes with ((r, )_ c ) -locality attaining the bound are proved to have more advantage in the minimum distance. 
In particular, we construct a class of codes with all symbol ((r, )_ c ) -locality where the gain in minimum distance is ( ( r ) ) and the information rate is close to 1.", "A code over a finite alphabet is called locally recoverable (LRC) if every symbol in the encoding is a function of a small number (at most r ) other symbols. We present a family of LRC codes that attain the maximum possible value of the distance for a given locality parameter and code cardinality. The codewords are obtained as evaluations of specially constructed polynomials over a finite field, and reduce to a Reed-Solomon code if the locality parameter r is set to be equal to the code dimension. The size of the code alphabet for most parameters is only slightly greater than the code length. The recovery procedure is performed by polynomial interpolation over r points. We also construct codes with several disjoint recovering sets for every symbol. This construction enables the system to conduct several independent and simultaneous recovery processes of a specific symbol by accessing different parts of the codeword. This property enables high availability of frequently accessed data (“hot data”).", "Repair locality is a desirable property for erasure codes in distributed storage systems. Recently, different structures of local repair groups have been proposed in the definitions of repair locality. In this paper, the concept of regenerating set is introduced to characterize the local repair groups. A definition of locality @math (i.e., locality @math with repair tolerance @math ) under the most general structure of regenerating sets is given. All previously studied locality turns out to be special cases of this definition. Furthermore, three representative concepts of locality proposed before are reinvestigated under the framework of regenerating sets, and their respective upper bounds on the minimum distance are reproved in a uniform and brief form. 
Additionally, a more precise distance bound is derived for the square code which is a class of linear codes with locality @math and high information rate, and an explicit code construction attaining the optimal distance bound is obtained.", "Distributed storage systems need to store data redundantly in order to provide some fault-tolerance and guarantee system reliability. Different coding techniques have been proposed to provide the required redundancy more efficiently than traditional replication schemes. However, compared to replication, coding techniques are less efficient for repairing lost redundancy, as they require retrieval of larger amounts of data from larger subsets of storage nodes. To mitigate these problems, several recent works have presented locally repairable codes designed to minimize the repair traffic and the number of nodes involved per repair. Unfortunately, existing methods often lead to codes where there is only one subset of nodes able to repair a piece of lost data, limiting the local repairability to the availability of the nodes in this subset. In this paper, we present a new family of locally repairable codes that allows different trade-offs between the number of contacted nodes per repair, and the number of different subsets of nodes that enable this repair. We show that slightly increasing the number of contacted nodes per repair allows to have repair alternatives, which in turn increases the probability of being able to perform efficient repairs. Finally, we present pg-BLRC, an explicit construction of locally repairable codes with multiple repair alternatives, constructed from partial geometries, in particular from Generalized Quadrangles. 
We show how these codes can achieve practical lengths and high rates, while requiring a small number of nodes per repair, and providing multiple repair alternatives.", "The @math th coordinate of an @math code is said to have locality @math and availability @math if there exist @math disjoint groups, each containing at most @math other coordinates that can together recover the value of the @math th coordinate. This property is particularly useful for codes for distributed storage systems because it permits local repair and parallel accesses of hot data. In this paper, for any positive integers @math and @math , we construct a binary linear code of length @math which has locality @math and availability @math for all coordinates. The information rate of this code attains @math , which is always higher than that of the direct product code, the only known construction that can achieve arbitrary locality and availability." ] }
1601.05595
2263213887
A code symbol of a linear code is said to have locality r if this symbol can be recovered by at most r other code symbols. An (n,k,r) locally repairable code (LRC) with all symbol locality is a linear code with length n, dimension k, and locality r for all symbols. Recently, there have been many studies of the bounds and constructions of LRCs, most of which are essentially based on the generator matrix of the linear code. Up to now, the most important bounds on the minimum distance of LRCs might be the well-known Singleton-like bound and the Cadambe-Mazumdar bound concerning the field size. In this paper, we study the bounds and constructions of LRCs from the viewpoint of parity-check matrices. Firstly, we set up a new characterization of the parity-check matrix for an LRC. Then, the proposed parity-check matrix is employed to analyze the minimum distance. We give an alternative simple proof of the well-known Singleton-like bound for LRCs with all symbol locality, and then easily generalize it to a more general bound, which essentially coincides with the Cadambe-Mazumdar bound and includes the Singleton-like bound as a specific case. Based on the proposed characterization of parity-check matrices, necessary conditions for meeting the Singleton-like bound are obtained, which naturally lead to a construction framework of good LRCs. Finally, two classes of optimal LRCs based on linearized polynomial theories and Vandermonde matrices are obtained under the construction framework.
These codes @cite_2 are a class of MDS codes with subpacketization. When a single node failure occurs, the repair process involves more than @math nodes and each node transfers a linear combination of the packets it stores, which reduces repair bandwidth compared to classical MDS codes. See @cite_22 @cite_17 for the construction of regenerating codes and @cite_29 for a survey.
{ "cite_N": [ "@cite_29", "@cite_17", "@cite_22", "@cite_2" ], "mid": [ "2058863419", "2126295689", "2150777202", "2105185344" ], "abstract": [ "Distributed storage systems often introduce redundancy to increase reliability. When coding is used, the repair problem arises: if a node storing encoded information fails, in order to maintain the same level of reliability we need to create encoded information at a new node. This amounts to a partial recovery of the code, whereas conventional erasure coding focuses on the complete recovery of the information from a subset of encoded packets. The consideration of the repair network traffic gives rise to new design challenges. Recently, network coding techniques have been instrumental in addressing these challenges, establishing that maintenance bandwidth can be reduced by orders of magnitude compared to standard erasure codes. This paper provides an overview of the research results on this topic.", "The high repair cost of (n, k) Maximum Distance Separable (MDS) erasure codes has recently motivated a new class of MDS codes, called Repair MDS codes, that can significantly reduce repair bandwidth over conventional MDS codes. In this paper, we describe (n, k, d) Exact-Repair MDS codes, which allow for any failed node to be repaired exactly with access to d survivor nodes, where k ≤ d ≤ n-1. We construct Exact-Repair MDS codes that are optimal in repair bandwidth for the cases of: (α) k n ≤ 1 2 and d ≥ 2k - 11; (b) k ≤ 3. Our codes are deterministic and require a finite-field size of at most 2(n - k). Our constructive codes are based on interference alignment techniques.", "Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. 
The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n=d+1 . In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d ≥ 2k-2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network, to be chosen independent of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n=d+1, k, d ≥ 2k-1].", "Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. 
Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff." ] }
1601.05187
2952437517
The paper studies dynamic information flow security policies in an automaton-based model. Two semantic interpretations of such policies are developed, both of which generalize the notion of TA-security [van der Meyden ESORICS 2007] for static intransitive noninterference policies. One of the interpretations focuses on information flows permitted by policy edges, the other focuses on prohibitions implied by absence of policy edges. In general, the two interpretations differ, but necessary and sufficient conditions are identified for the two interpretations to be equivalent. Sound and complete proof techniques are developed for both interpretations. Two applications of the theory are presented. The first is a general result showing that access control mechanisms are able to enforce a dynamic information flow policy. The second is a simple capability system motivated by the Flume operating system.
However, there are also significant points of difference. Generally these works concern a programming language framework in which the secrets to be concealed are already encoded into the initial state, rather than our "interactive" model in which secrets are generated on the fly as the result of nondeterministic choices of action made by the agents. Frequently, the programming languages studied are deterministic, so a direct relationship to our setting is not immediately clear. There exists an approach by Clark and Hunt @cite_25 to handling interactivity in a programming language setting by including stream variables in the initial state to represent the sequence of future inputs that will be selected by an agent over the course of the run. However, this, in effect, assumes that the scheduler is also deterministic. Adding a variable for the schedule and allowing other agents to learn the complete schedule would be more permissive than a definition such as , since this restricts the information that agents may learn about the schedule. It therefore seems that establishing an exact correspondence would require some detailed work.
{ "cite_N": [ "@cite_25" ], "mid": [ "1920159904" ], "abstract": [ "We consider the problem of defining an appropriate notion of non-interference (NI) for deterministic interactive programs. Previous work on the security of interactive programs by O'Neill, Clarkson and Chong (CSFW 2006) builds on earlier ideas due to Wittbold and Johnson (Symposium on Security and Privacy 1990), and argues for a notion of NI defined in terms of strategies modelling the behaviour of users. We show that, for deterministic interactive programs, it is not necessary to consider strategies and that a simple stream model of the users' behaviour is sufficient. The key technical result is that, for deterministic programs, stream-based NI implies the apparently more general strategy-based NI (in fact we consider a wider class of strategies than those of O'). We give our results in terms of a simple notion of Input-Output Labelled Transition System, thus allowing application of the results to a large class of deterministic interactive programming languages." ] }
1601.05187
2952437517
The paper studies dynamic information flow security policies in an automaton-based model. Two semantic interpretations of such policies are developed, both of which generalize the notion of TA-security [van der Meyden ESORICS 2007] for static intransitive noninterference policies. One of the interpretations focuses on information flows permitted by policy edges, the other focuses on prohibitions implied by absence of policy edges. In general, the two interpretations differ, but necessary and sufficient conditions are identified for the two interpretations to be equivalent. Sound and complete proof techniques are developed for both interpretations. Two applications of the theory are presented. The first is a general result showing that access control mechanisms are able to enforce a dynamic information flow policy. The second is a simple capability system motivated by the Flume operating system.
Consider a situation with domains @math . Let @math be the policy @math . Intuitively, @math here is the policy authority, and @math states that all domains are permitted to know the state of the policy. In the initial state of the system, take the policy to be @math . Any actions performed by @math are, intuitively, permitted by this policy to be recorded in the state of @math , but should remain unknown to @math . Suppose that after some actions by @math , the policy agent changes the policy to @math . Now, @math is permitted to communicate with @math . According to our intuitions and formal definitions, @math may now send to @math the information it has about the actions that @math performed before the policy change. Such flows from @math to @math via @math are typical of the flows that intransitive policies are intended to permit @cite_4 .
{ "cite_N": [ "@cite_4" ], "mid": [ "2048500751" ], "abstract": [ "A noninterference formulation of MLS applicable to the Secure Ada® Target (SAT) Abstract Model is developed. An analogous formulation is developed to handle the SAT type enforcement policy. Unwinding theorems are presented for both MLS and Multidomain Security (MDS) and the SAT Abstract Model is shown to satisfy both MLS and MDS. Generalizations and extensions are also considered." ] }
1601.05187
2952437517
The paper studies dynamic information flow security policies in an automaton-based model. Two semantic interpretations of such policies are developed, both of which generalize the notion of TA-security [van der Meyden ESORICS 2007] for static intransitive noninterference policies. One of the interpretations focuses on information flows permitted by policy edges, the other focuses on prohibitions implied by absence of policy edges. In general, the two interpretations differ, but necessary and sufficient conditions are identified for the two interpretations to be equivalent. Sound and complete proof techniques are developed for both interpretations. Two applications of the theory are presented. The first is a general result showing that access control mechanisms are able to enforce a dynamic information flow policy. The second is a simple capability system motivated by the Flume operating system.
However, according to Askarov and Chong's definitions (both for the perfect recall attacker and weaker attackers), if an action now copies this information from @math 's state and it is observed by @math , this is a violation of security, because in a transition where it makes the observation, @math learns new information about @math , whereas the current state of the policy prohibits it from doing so. Thus, although Askarov and Chong permit intransitive policies, we would argue that their definitions do not handle dynamic intransitivity in a way that fits our intuitions: their semantics can best be characterized as a dynamic version of the classical purge-based definition of security, rather than a dynamic version of a semantics for intransitive noninterference. The details of Askarov and Chong's semantics are refined in @cite_24 , but similar remarks apply to this work.
{ "cite_N": [ "@cite_24" ], "mid": [ "2950519199" ], "abstract": [ "Security policies are naturally dynamic. Reflecting this, there has been a growing interest in studying information-flow properties which change during program execution, including concepts such as declassification, revocation, and role-change. A static verification of a dynamic information flow policy, from a semantic perspective, should only need to concern itself with two things: 1) the dependencies between data in a program, and 2) whether those dependencies are consistent with the intended flow policies as they change over time. In this paper we provide a formal ground for this intuition. We present a straightforward extension to the principal flow-sensitive type system introduced by Hunt and Sands (POPL '06, ESOP '11) to infer both end-to-end dependencies and dependencies at intermediate points in a program. This allows typings to be applied to verification of both static and dynamic policies. Our extension preserves the principal type system's distinguishing feature, that type inference is independent of the policy to be enforced: a single, generic dependency analysis (typing) can be used to verify many different dynamic policies of a given program, thus achieving a clean separation between (1) and (2). We also make contributions to the foundations of dynamic information flow. Arguably, the most compelling semantic definitions for dynamic security conditions in the literature are phrased in the so-called knowledge-based style. We contribute a new definition of knowledge-based termination insensitive security for dynamic policies. We show that the new definition avoids anomalies of previous definitions and enjoys a simple and useful characterisation as a two-run style property." ] }
1601.05447
2950836656
Object proposals for detecting moving or static video objects need to address issues such as speed, memory complexity and temporal consistency. We propose an efficient Video Object Proposal (VOP) generation method and show its efficacy in learning a better video object detector. A deep-learning based video object detector learned using the proposed VOP achieves state-of-the-art detection performance on the Youtube-Objects dataset. We further propose a clustering of VOPs which can efficiently be used for detecting objects in video in a streaming fashion. As opposed to applying per-frame convolutional neural network (CNN) based object detection, our proposed method called Objects in Video Enabler thRough LAbel Propagation (OVERLAP) needs to classify only a small fraction of all candidate proposals in every video frame through streaming clustering of object proposals and class-label propagation. Source code will be made available soon.
Unsupervised, category-independent detection proposals have been shown to be effective for object detection in images. Representative methods include Objectness @cite_26 , category-independent object proposals @cite_8 , SelectiveSearch @cite_9 , MCG @cite_25 , GOP @cite_33 , BING @cite_18 , and EdgeBoxes @cite_19 . Comparative surveys of object proposal methods and their evaluations can be found in @cite_38 @cite_15 . Although there is no single "best" detection proposal method, EdgeBoxes, which scores windows based on edge content, achieves a better balance between recall and repeatability.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_26", "@cite_33", "@cite_8", "@cite_9", "@cite_19", "@cite_15", "@cite_25" ], "mid": [ "2131625065", "2010181071", "", "", "1555385401", "2088049833", "7746136", "1958328135", "1991367009" ], "abstract": [ "In this paper, we extend a recently proposed method for generic object detection in images, category-independent object proposals, to the case of video. Given a video, the output of our algorithm is a set of video segments that are likely to contain an object. This can be useful, e.g., as a first step in a video object detection system. Given the sheer amount of pixels in a video, a straightforward extension of the 2D methods to a 3D (spatiotemporal) volume is not feasible. Instead, we start by extracting object proposals in each frame separately. These are linked across frames into object hypotheses, which are then used as higher-order potentials in a graph-based video segmentation framework. Running multiple segmentations and ranking the segments based on the likelihood that they correspond to an object, yields our final set of video object proposals.", "Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows in to a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). 
Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR.", "", "", "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on BSDS and PASCAL VOC 2008 demonstrate our ability to find most objects within a small bag of proposed regions.", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. 
In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/uijlings/SelectiveSearch.html ).", "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96% object recall at overlap threshold of 0.5 and over 75% recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. 
We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.", "We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates." ] }
1601.05447
2950836656
Object proposals for detecting moving or static video objects need to address issues such as speed, memory complexity and temporal consistency. We propose an efficient Video Object Proposal (VOP) generation method and show its efficacy in learning a better video object detector. A deep-learning based video object detector learned using the proposed VOP achieves state-of-the-art detection performance on the Youtube-Objects dataset. We further propose a clustering of VOPs which can efficiently be used for detecting objects in video in a streaming fashion. As opposed to applying per-frame convolutional neural network (CNN) based object detection, our proposed method called Objects in Video Enabler thRough LAbel Propagation (OVERLAP) needs to classify only a small fraction of all candidate proposals in every video frame through streaming clustering of object proposals and class-label propagation. Source code will be made available soon.
Applying image object proposals directly to each frame of a video may be problematic due to time complexity and the lack of temporal consistency. In addition, issues like motion blur and compression artifacts can pose significant obstacles to identifying spatial contours, which degrades the quality of the object proposals. Recent advances like SPPnet @cite_35 , Fast R-CNN @cite_7 , and Faster R-CNN @cite_14 have dramatically reduced the running time by computing deep features for all image locations at the same time and pooling them onto appropriate proposal boxes. Per-frame object detection still requires classifying proposal windows, however, and temporal consistency remains a challenge. The proposed framework dispenses with the need to classify every candidate window of every video frame through spatio-temporal clustering, thus addressing temporal consistency.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_7" ], "mid": [ "2179352600", "2953106684", "" ], "abstract": [ "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. 
For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "" ] }
1601.05150
2278597217
Finetuning from a pretrained deep model is found to yield state-of-the-art performance for many vision tasks. This paper investigates many factors that influence the performance in finetuning for object detection. There is a long-tailed distribution of sample numbers for classes in object detection. Our analysis and empirical results show that classes with more samples have higher impact on the feature learning. And it is better to make the sample number more uniform across classes. Generic object detection can be considered as multiple equally important tasks. Detection of each class is a task. These classes tasks have their individuality in discriminative visual appearance representation. Taking this individuality into account, we cluster objects into visually similar class groups and learn deep representations for these groups separately. A hierarchical feature learning scheme is proposed. In this scheme, the knowledge from the group with large number of classes is transferred for learning features in its sub-groups. Finetuned on the GoogLeNet model, experimental results show 4.7% absolute mAP improvement of our approach on the ImageNet object detection dataset without increasing much computational cost at the testing stage.
The long-tail property has been noticed by researchers working on scene parsing @cite_20 and zero-shot learning @cite_11 . Yang @cite_20 expand the samples of rare classes and achieve more balanced superpixel classification results. Norouzi @cite_11 use semantically similar object classes to predict the unseen classes of images. Deep learning is considered a good representation-sharing approach in the battle against the long tail @cite_35 . To our knowledge, however, the influence of the long tail on deep learning has not been investigated. We provide analysis and an experimental investigation of the influence of the long tail on feature learning, which offers guidance for training-data preparation in deep learning.
{ "cite_N": [ "@cite_35", "@cite_20", "@cite_11" ], "mid": [ "", "2051458493", "2950700180" ], "abstract": [ "", "This paper presents a scalable scene parsing algorithm based on image retrieval and superpixel matching. We focus on rare object classes, which play an important role in achieving richer semantic understanding of visual scenes, compared to common background classes. Towards this end, we make two novel contributions: rare class expansion and semantic context description. First, considering the long-tailed nature of the label distribution, we expand the retrieval set by rare class exemplars and thus achieve more balanced superpixel classification results. Second, we incorporate both global and local semantic context information through a feedback based mechanism to refine image retrieval and superpixel matching. Results on the SIFTflow and LMSun datasets show the superior performance of our algorithm, especially on the rare classes, without sacrificing overall labeling accuracy.", "Several recent publications have proposed methods for mapping images into continuous semantic embedding spaces. In some cases the embedding space is trained jointly with the image transformation. In other cases the semantic embedding space is established by an independent natural language processing task, and then the image transformation into that space is learned in a second stage. Proponents of these image embedding systems have stressed their advantages over the traditional classification framing of image understanding, particularly in terms of the promise for zero-shot learning -- the ability to correctly annotate images of previously unseen object categories. In this paper, we propose a simple method for constructing an image embedding system from any existing image classifier and a semantic word embedding model, which contains the @math class labels in its vocabulary. 
Our method maps images into the semantic embedding space via convex combination of the class label embedding vectors, and requires no additional training. We show that this simple and direct method confers many of the advantages associated with more complex image embedding schemes, and indeed outperforms state of the art methods on the ImageNet zero-shot learning task." ] }
1601.05281
2287503382
Millimeter wave (mmWave) links will offer high capacity but are poor at penetrating into or diffracting around solid objects. Thus, we consider a hybrid cellular network with traditional sub-6 GHz macrocells coexisting with denser mmWave small cells, where a mobile user can connect to either opportunistically. We develop a general analytical model to characterize and derive the uplink and downlink cell association in the view of the signal-to-interference-and-noise-ratio and rate coverage probabilities in such a mixed deployment. We offer extensive validation of these analytical results (which rely on several simplifying assumptions) with simulation results. Using the analytical results, different decoupled uplink and downlink cell association strategies are investigated and their superiority is shown compared with the traditional coupled approach. Finally, small cell biasing in mmWave is studied, and we show that unprecedented biasing values are desirable due to the wide bandwidth.
Meanwhile, starting with @cite_18 , modeling and analyzing cellular networks using stochastic geometry has become a popular and accepted approach to understanding their performance trends. Most relevant to this study, mmWave networks were analyzed assuming a Poisson point process (PPP) for the base station (BS) distribution in @cite_15 @cite_19 @cite_22 . In @cite_15 , a line-of-sight (LOS) ball model was used for blockage: BSs inside the LOS ball are in LOS, whereas any BS outside the ball is treated as NLOS. In @cite_19 , this blockage model was refined by adding a LOS probability within the LOS ball, and the refined model was shown to reflect several realistic blockage scenarios; we therefore adopt the same approach in this paper. Decoupled association in a mixed sub-6GHz and mmWave deployment was very recently considered in @cite_30 from a resource-allocation perspective. To our knowledge, however, there is no complete analytical study of downlink-uplink decoupling for mmWave networks or for the mmWave-sub-6GHz hybrid network considered in this paper.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_19", "@cite_15" ], "mid": [ "2245353433", "2150166076", "", "1953553238", "2031858701" ], "abstract": [ "The forthcoming 5G cellular network is expected to overlay millimeter-wave (mmW) transmissions with the incumbent micro-wave ( @math ) architecture. The overall mm- @math resource management should, therefore, harmonize with each other. This paper aims at maximizing the overall downlink (DL) rate with a minimum uplink (UL) rate constraint, and concludes: mmW tends to focus more on DL transmissions while @math has high priority for complementing UL, under time-division duplex (TDD) mmW operations. Such UL dedication of @math results from the limited use of mmW UL bandwidth due to excessive power consumption and or high peak-to-average power ratio (PAPR) at mobile users. To further relieve this UL bottleneck, we propose mmW UL decoupling that allows each legacy @math base station (BS) to receive mmW signals. Its impact on mm- @math resource management is provided in a tractable way by virtue of a novel closed-form mm- @math spectral efficiency (SE) derivation. In an ultra-dense cellular network (UDN), our derivation verifies mmW (or @math ) SE is a logarithmic function of BS-to-user density ratio. This strikingly simple yet practically valid analysis is enabled by exploiting stochastic geometry in conjunction with real three-dimensional (3-D) building blockage statistics in Seoul, South Korea.", "Cellular networks are usually modeled by placing the base stations on a grid, with mobile users either randomly scattered or placed deterministically. These models have been used extensively but suffer from being both highly idealized and not very tractable, so complex system-level simulations are used to evaluate coverage outage probability and rate. More tractable models have long been desirable. 
We develop new general models for the multi-cell signal-to-interference-plus-noise ratio (SINR) using stochastic geometry. Under very general assumptions, the resulting expressions for the downlink SINR CCDF (equivalent to the coverage probability) involve quickly computable integrals, and in some practical special cases can be simplified to common integrals (e.g., the Q-function) or even to simple closed-form expressions. We also derive the mean rate, and then the coverage gain (and mean rate loss) from static frequency reuse. We compare our coverage predictions to the grid model and an actual base station deployment, and observe that the proposed model is pessimistic (a lower bound on coverage) whereas the grid model is optimistic, and that both are about equally accurate. In addition to being more tractable, the proposed model may better capture the increasingly opportunistic and dense placement of base stations in future networks.", "", "Millimeter wave (mmWave) cellular systems will require high-gain directional antennas and dense base station (BS) deployments to overcome a high near-field path loss and poor diffraction. As a desirable side effect, high-gain antennas offer interference isolation, providing an opportunity to incorporate self-backhauling , i.e., BSs backhauling among themselves in a mesh architecture without significant loss in the throughput, to enable the requisite large BS densities. The use of directional antennas and resource sharing between access and backhaul links leads to coverage and rate trends that significantly differ from conventional UHF cellular systems. In this paper, we propose a general and tractable mmWave cellular model capturing these key trends and characterize the associated rate distribution. The developed model and analysis are validated using actual building locations from dense urban settings and empirically derived path loss models. 
The analysis shows that, in sharp contrast to the interference-limited nature of UHF cellular networks, the spectral efficiency of mmWave networks (besides the total rate) also increases with the BS density, particularly at the cell edge. Increasing the system bandwidth does not significantly influence the cell edge rate, although it boosts the median and peak rates. With self-backhauling, different combinations of the wired backhaul fraction (i.e., the fraction of BSs with a wired connection) and the BS density are shown to guarantee the same median rate (QoS).", "Millimeter wave (mmWave) holds promise as a carrier frequency for fifth generation cellular networks. Because mmWave signals are sensitive to blockage, prior models for cellular networks operated in the ultra high frequency (UHF) band do not apply to analyze mmWave cellular networks directly. Leveraging concepts from stochastic geometry, this paper proposes a general framework to evaluate the coverage and rate performance in mmWave cellular networks. Using a distance-dependent line-of-site (LOS) probability function, the locations of the LOS and non-LOS base stations are modeled as two independent non-homogeneous Poisson point processes, to which different path loss laws are applied. Based on the proposed framework, expressions for the signal-to-noise-and-interference ratio (SINR) and rate coverage probability are derived. The mmWave coverage and rate performance are examined as a function of the antenna geometry and base station density. The case of dense networks is further analyzed by applying a simplified system model, in which the LOS region of a user is approximated as a fixed LOS ball. The results show that dense mmWave networks can achieve comparable coverage and much higher data rates than conventional UHF cellular systems, despite the presence of blockages. The results suggest that the cell size to achieve the optimal SINR scales with the average size of the area that is LOS to a user." ] }
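The SINR coverage probability that these stochastic-geometry analyses derive in closed form can also be estimated numerically. The sketch below is a minimal stdlib-only Monte Carlo, not the analytical model of the cited works: the intensity, path-loss exponent, and Rayleigh-fading assumptions are illustrative, and no LOS-ball or blockage modeling is included.

```python
import math
import random


def poisson(rng, mean):
    """Knuth's Poisson sampler; adequate for moderate means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1


def coverage_probability(lam=1e-5, alpha=4.0, theta_db=0.0,
                         radius=1000.0, trials=3000, seed=1):
    """Monte Carlo estimate of P[SIR > theta] for a typical user at the
    origin, served by its nearest BS in a PPP of intensity lam (per m^2)
    on a disc, with Rayleigh fading and path loss r**-alpha."""
    rng = random.Random(seed)
    theta = 10.0 ** (theta_db / 10.0)
    covered = 0
    for _ in range(trials):
        n = poisson(rng, lam * math.pi * radius ** 2)
        if n == 0:
            continue  # no BS in the disc: count as not covered
        # uniform points on a disc have radial distance R * sqrt(U)
        dists = sorted(radius * math.sqrt(rng.random()) for _ in range(n))
        # exponential fading power times power-law path loss
        power = [rng.expovariate(1.0) * max(d, 1e-9) ** -alpha
                 for d in dists]
        signal, interference = power[0], sum(power[1:])
        if signal > theta * interference:
            covered += 1
    return covered / trials
```

For alpha = 4 and a 0 dB threshold, the known interference-limited coverage for nearest-BS association under Rayleigh fading is roughly 0.56, which this estimate should approach (the finite disc slightly truncates interference, so it may land a little above).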
1601.05439
2294876259
Replica Exchange (RE) simulations have emerged as an important algorithmic tool for the molecular sciences. RE simulations involve the concurrent execution of independent simulations which infrequently interact and exchange information. The next set of simulation parameters is based upon the outcome of the exchanges. Typically RE functionality is integrated into the molecular simulation software package. A primary motivation for the tight integration of RE functionality with simulation codes has been performance. This is limiting at multiple levels. First, advances in the RE methodology are tied to the molecular simulation code. Consequently these advances remain confined to the molecular simulation code for which they were developed. Second, it is difficult to extend or experiment with novel RE algorithms, since expertise in the molecular simulation code is typically required. In this paper, we propose the RepEx framework, which addresses these shortcomings of existing approaches while striking a balance between flexibility (any RE scheme) and scalability (tens of thousands of replicas) over a diverse range of platforms. RepEx is designed to use a pilot-job based runtime system and to support diverse RE Patterns and Execution Modes. RE Patterns are concerned with synchronization mechanisms in RE simulations, and Execution Modes with the spatial and temporal mapping of workload to CPU cores. We discuss how the design and implementation yield the following primary contributions of the RepEx framework: (i) the ability to support different RE schemes independent of molecular simulation codes, (ii) the ability to execute different exchange schemes and replica counts independent of the specific availability of resources, (iii) a runtime system with first-class support for task-level parallelism, and (iv) the required scalability along multiple dimensions.
REPDSTR module of CHARMM: Ref. @cite_16 presents an implementation of a 2D US H-REMD method in the REPDSTR module of CHARMM @cite_25 . REPDSTR uses an MPI-level parallel/parallel mode in which each replica is assigned multiple MPI processes and dedicated I/O routines. To improve sampling efficiency, exchange attempts are performed alternately along the two dimensions.
{ "cite_N": [ "@cite_16", "@cite_25" ], "mid": [ "2328082688", "2132262459" ], "abstract": [ "An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternatively along the two axes corresponding to the two order parameters. The US H-REMD strategy is implemented on the basis of parallel parallel multiple copy protocol at the MPI level, and therefore can fully exploit computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the ...", "CHARMM (Chemistry at HARvard Molecular Mechanics) is a highly versatile and widely used molecu- lar simulation program. It has been developed over the last three decades with a primary focus on molecules of bio- logical interest, including proteins, peptides, lipids, nucleic acids, carbohydrates, and small molecule ligands, as they occur in solution, crystals, and membrane environments. For the study of such systems, the program provides a large suite of computational tools that include numerous conformational and path sampling methods, free energy estima- tors, molecular minimization, dynamics, and analysis techniques, and model-building capabilities. The CHARMM program is applicable to problems involving a much broader class of many-particle systems. 
Calculations with CHARMM can be performed using a number of different energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potential energy functions with explicit solvent and various boundary conditions, to implicit solvent and membrane models. The program has been ported to numer- ous platforms in both serial and parallel architectures. This article provides an overview of the program as it exists today with an emphasis on developments since the publication of the original CHARMM article in 1983." ] }
1601.05439
2294876259
Replica Exchange (RE) simulations have emerged as an important algorithmic tool for the molecular sciences. RE simulations involve the concurrent execution of independent simulations which infrequently interact and exchange information. The next set of simulation parameters is based upon the outcome of the exchanges. Typically RE functionality is integrated into the molecular simulation software package. A primary motivation for the tight integration of RE functionality with simulation codes has been performance. This is limiting at multiple levels. First, advances in the RE methodology are tied to the molecular simulation code. Consequently these advances remain confined to the molecular simulation code for which they were developed. Second, it is difficult to extend or experiment with novel RE algorithms, since expertise in the molecular simulation code is typically required. In this paper, we propose the RepEx framework, which addresses these shortcomings of existing approaches while striking a balance between flexibility (any RE scheme) and scalability (tens of thousands of replicas) over a diverse range of platforms. RepEx is designed to use a pilot-job based runtime system and to support diverse RE Patterns and Execution Modes. RE Patterns are concerned with synchronization mechanisms in RE simulations, and Execution Modes with the spatial and temporal mapping of workload to CPU cores. We discuss how the design and implementation yield the following primary contributions of the RepEx framework: (i) the ability to support different RE schemes independent of molecular simulation codes, (ii) the ability to execute different exchange schemes and replica counts independent of the specific availability of resources, (iii) a runtime system with first-class support for task-level parallelism, and (iv) the required scalability along multiple dimensions.
Asynchronous approaches: Refs. @cite_14 @cite_20 presented the ASyncRE package, developed to perform large-scale asynchronous REMD simulations on HPC systems, with an emphasis on asynchronous RE. The package supports the Amber @cite_15 and IMPACT @cite_17 MD engines. It implements two REMD algorithms, namely multi-dimensional RE umbrella sampling with Amber and BEDAM @math RE alchemical binding free energy calculations with IMPACT. ASyncRE uses a runtime system similar to that of RepEx, is capable of launching more replicas than there are allocated CPU cores, and is fault tolerant: failure of a single replica (or of multiple replicas) does not cause the whole simulation to fail. If needed, new replicas can be launched to compensate for failed ones.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_20", "@cite_17" ], "mid": [ "2103325328", "2172043048", "2321157445", "" ], "abstract": [ "Molecular dynamics (MD) allows the study of biological and chemical systems at the atomistic level on timescales from femtoseconds to milliseconds. It complements experiment while also offering a way to follow processes difficult to discern with experimental techniques. Numerous software packages exist for conducting MD simulations of which one of the widest used is termed Amber. Here, we outline the most recent developments, since version 9 was released in April 2006, of the Amber and AmberTools MD software packages, referred to here as simply the Amber package. The latest release represents six years of continued development, since version 9, by multiple research groups and the culmination of over 33 years of work beginning with the first version in 1979. The latest release of the Amber package, version 12 released in April 2012, includes a substantial number of important developments in both the scientific and computer science arenas. We present here a condensed vision of what Amber currently supports and where things are likely to head over the coming years. Figure 1 shows the performance in ns day of the Amber package version 12 on a single-core AMD FX-8120 8-Core 3.6GHz CPU, the Cray XT5 system, and a single GPU GTX680. © 2012 John Wiley & Sons, Ltd.", "Replica exchange represents a powerful class of algorithms used for enhanced configurational and energetic sampling in a range of physical systems. Computationally it represents a type of application with multiple scales of communication. At a fine-grained level there is often communication with a replica, typically an MPI process. At a coarse-grained level, the replicas communicate with other replicas -- both temporally as well as in amount of data exchanged. This paper outlines a novel framework developed to support the flexible execution of large-scale replica exchange. 
The framework is flexible in the sense that it supports different coupling schemes between replicas and is agnostic to the specific underlying simulation -- classical or quantum, serial or parallel simulation. The scalability of the framework is assessed using standard simulation benchmarks. In spite of the increasing communication and coordination requirements as a function of the number of replicas, our framework supports the execution of hundreds of replicas without significant overhead. Although there are several specific aspects that will benefit from further optimization, a first working prototype has the ability to fundamentally change the scale of replica exchange simulations possible on production distributed cyberinfrastructure such as XSEDE, as well as support novel usage modes. This paper also represents the release of the framework to the broader biophysical simulation community and provides details on its usage.", "Replica exchange molecular dynamics has emerged as a powerful tool for efficiently sampling free energy landscapes for conformational and chemical transitions. However, daunting challenges remain in efficiently getting such simulations to scale to the very large number of replicas required to address problems in state spaces beyond two dimensions. The development of enabling technology to carry out such simulations is in its infancy, and thus it remains an open question as to which applications demand extension into higher dimensions. In the present work, we explore this problem space by applying asynchronous Hamiltonian replica exchange molecular dynamics with a combined quantum mechanical/molecular mechanical potential to explore the conformational space for a simple ribonucleoside. This is done using a newly developed software framework capable of executing >3,000 replicas with only enough resources to run 2,000 simultaneously. This may not be possible with traditional synchronous replica exchange appr...", "" ] }
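The exchange step that these packages implement can be sketched independently of any MD engine. The snippet below is a hedged, stdlib-only illustration of one synchronous exchange phase for temperature RE; the function name `attempt_exchanges` and the sweep over neighbouring pairs are illustrative assumptions, not the actual API of RepEx or ASyncRE:

```python
import math
import random


def attempt_exchanges(energies, betas, rng):
    """One synchronous exchange phase over neighbouring temperature pairs.

    A swap between slots i and i+1 is accepted with the standard
    Metropolis probability min(1, exp((beta_i - beta_{i+1}) *
    (E_i - E_{i+1}))), which preserves detailed balance across the
    extended ensemble. Returns the permutation mapping temperature
    slot -> configuration index.
    """
    perm = list(range(len(betas)))
    for i in range(len(betas) - 1):
        a, b = perm[i], perm[i + 1]
        delta = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
        if delta >= 0 or rng.random() < math.exp(delta):
            perm[i], perm[i + 1] = b, a  # exchange accepted
    return perm
```

For example, a high-energy configuration sitting at the colder (larger beta) slot is always swapped upward, while the reverse move is accepted only with the exponentially small Metropolis probability.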
1601.05439
2294876259
Replica Exchange (RE) simulations have emerged as an important algorithmic tool for the molecular sciences. RE simulations involve the concurrent execution of independent simulations which infrequently interact and exchange information. The next set of simulation parameters is based upon the outcome of the exchanges. Typically RE functionality is integrated into the molecular simulation software package. A primary motivation for the tight integration of RE functionality with simulation codes has been performance. This is limiting at multiple levels. First, advances in the RE methodology are tied to the molecular simulation code. Consequently these advances remain confined to the molecular simulation code for which they were developed. Second, it is difficult to extend or experiment with novel RE algorithms, since expertise in the molecular simulation code is typically required. In this paper, we propose the RepEx framework, which addresses these shortcomings of existing approaches while striking a balance between flexibility (any RE scheme) and scalability (tens of thousands of replicas) over a diverse range of platforms. RepEx is designed to use a pilot-job based runtime system and to support diverse RE Patterns and Execution Modes. RE Patterns are concerned with synchronization mechanisms in RE simulations, and Execution Modes with the spatial and temporal mapping of workload to CPU cores. We discuss how the design and implementation yield the following primary contributions of the RepEx framework: (i) the ability to support different RE schemes independent of molecular simulation codes, (ii) the ability to execute different exchange schemes and replica counts independent of the specific availability of resources, (iii) a runtime system with first-class support for task-level parallelism, and (iv) the required scalability along multiple dimensions.
Ref. @cite_3 introduced another REMD package targeted at asynchronous RE and optimized for volunteer computing resources; the package can also be used on HPC clusters. It is customized for IMPACT as the MD simulation engine and supports both 1D and 2D REMD simulations. Distinctive features include fault tolerance, the ability to use a dynamic pool of resources, and the ability to use fewer CPU cores than replicas. The exchange phase is performed on a coordination server, meaning that output data must be moved from the target resource to the coordination server.
{ "cite_N": [ "@cite_3" ], "mid": [ "2153923848" ], "abstract": [ "We describe methods to perform replica exchange molecular dynamics (REMD) simulations asynchronously (ASyncRE). The methods are designed to facilitate large scale REMD simulations on grid computing networks consisting of heterogeneous and distributed computing environments as well as on homogeneous high-performance clusters. We have implemented these methods on NSF (National Science Foundation) XSEDE (Extreme Science and Engineering Discovery Environment) clusters and BOINC (Berkeley Open Infrastructure for Network Computing) distributed computing networks at Temple University and Brooklyn College at CUNY (the City University of New York). They are also being implemented on the IBM World Community Grid. To illustrate the methods, we have performed extensive (more than 60 ms in aggregate) simulations for the beta-cyclodextrin-heptanoate host-guest system in the context of one- and two-dimensional ASyncRE, and we used the results to estimate absolute binding free energies using the binding energy distribution analysis method. We propose ways to improve the efficiency of REMD simulations: these include increasing the number of exchanges attempted after a specified molecular dynamics (MD) period up to the fast exchange limit and or adjusting the MD period to allow sufficient internal relaxation within each thermodynamic state. Although ASyncRE simulations generally require long MD periods (>picoseconds) per replica exchange cycle to minimize the overhead imposed by heterogeneous computing networks, we found that it is possible to reach an efficiency similar to conventional synchronous REMD, by optimizing the combination of the MD period and the number of exchanges attempted per cycle. © 2015 Wiley Periodicals, Inc." ] }
1601.05347
2264380076
Cross-modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problems. In this paper, we present an approach that bridges this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from the visible to the thermal spectrum while preserving the identity information. We show substantial performance improvement on three difficult thermal---visible face datasets. The presented approach improves the state of the art by more than 10% on the UND-X1 dataset and by more than 15---30% on the NVESD dataset in terms of Rank-1 identification. Our method closes the drop in performance due to the modality gap by more than 40%.
One of the very first comparative studies on visible and thermal face recognition was performed by @cite_19 . They concluded that "LWIR thermal imagery of human faces is not only a valid biometric, but almost surely a superior one to comparable visible imagery." A good survey on single-modal and cross-modal face recognition methods can be found in @cite_16 .
{ "cite_N": [ "@cite_19", "@cite_16" ], "mid": [ "2171982656", "2087932678" ], "abstract": [ "We present a comprehensive performance analysis of multiple appearance-based face recognition methodologies, on visible and thermal infrared imagery. We compare algorithms within and between modalities in terms of recognition performance, false alarm rates and requirements to achieve specified performance levels. The effect of illumination conditions on recognition performance is emphasized, as it underlines the relative advantage of radiometrically calibrated thermal imagery for face recognition.", "High performance for face recognition systems occurs in controlled environments and degrades with variations in illumination, facial expression, and pose. Efforts have been made to explore alternate face modalities such as infrared (IR) and 3-D for face recognition. Studies also demonstrate that fusion of multiple face modalities improve performance as compared with singlemodal face recognition. This paper categorizes these algorithms into singlemodal and multimodal face recognition and evaluates methods within each category via detailed descriptions of representative work and summarizations in tables. Advantages and disadvantages of each modality for face recognition are analyzed. In addition, face databases and system evaluations are also covered." ] }
1601.05347
2264380076
Cross-modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problems. In this paper, we present an approach that bridges this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from the visible to the thermal spectrum while preserving the identity information. We show substantial performance improvement on three difficult thermal---visible face datasets. The presented approach improves the state of the art by more than 10% on the UND-X1 dataset and by more than 15---30% on the NVESD dataset in terms of Rank-1 identification. Our method closes the drop in performance due to the modality gap by more than 40%.
In the cross-modal (infrared-visible) face recognition scenario, most earlier efforts focus only on NIR-to-visible matching. One of the first investigations, by @cite_24 , uses linear discriminant analysis (LDA) and canonical correspondence analysis to perform linear regression between NIR and visible images. A number of approaches build on local feature descriptors to represent the face. @cite_2 first used this approach for NIR-to-visible face recognition by processing face images with a difference-of-Gaussians (DoG) filter and encoding them using multiblock local binary patterns. Gentle AdaBoost feature selection was used in conjunction with LDA to improve the recognition accuracy. @cite_25 followed up on NIR-to-visible face recognition by also incorporating SIFT feature descriptors and an LDA scheme. @cite_10 applied coupled spectral regression to NIR-to-visible recognition. A few methods have also focused on SWIR-to-visible face recognition @cite_13 , @cite_29 . NIR- or SWIR-to-visible face matching produces comparatively better results because the modalities are very similar owing to the small spectral gap; however, these bands are of limited use in night-time surveillance applications, so research focus is much needed in the thermal-to-visible matching domain.
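The local-binary-pattern encoding that several of these descriptor-based approaches rely on is easy to illustrate. The sketch below computes a basic 8-neighbour LBP code; it is a generic textbook illustration, not the multiblock/DoG pipeline of the cited works:

```python
def lbp_code(img, r, c):
    """8-neighbour local binary pattern code at interior pixel (r, c).

    Each neighbour contributes one bit: 1 if its intensity is >= the
    centre pixel. The resulting code in [0, 255] describes local texture
    and is invariant to monotonic illumination changes, which is why LBP
    variants are popular for cross-modality face matching.
    """
    centre = img[r][c]
    # clockwise neighbour offsets starting at the top-left pixel
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(neighbours):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code
```

In the multiblock variants mentioned above, such codes are histogrammed per image block and the block histograms are concatenated into the face descriptor before LDA or feature selection.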
{ "cite_N": [ "@cite_13", "@cite_29", "@cite_24", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2123052420", "2043167551", "1500607227", "38226588", "2154671648", "1971359334" ], "abstract": [ "The problem of face verification across the short wave infrared spectrum (SWIR) is studied in order to illustrate the advantages and limitations of SWIR face verification. The contributions of this work are two-fold. First, a database of 50 subjects is assembled and used to illustrate the challenges associated with the problem. Second, a set of experiments is performed in order to demonstrate the possibility of SWIR cross-spectral matching. Experiments also show that images captured under different SWIR wavelengths can be matched to visible images with promising results. The role of multispectral fusion in improving recognition performance in SWIR images is finally illustrated. To the best of our knowledge, this is the first time cross-spectral SWIR face recognition is being investigated in the open literature.", "Short wave infrared (SWIR) is an emerging imaging modality in surveillance applications. It is able to capture clear long range images of a subject in harsh atmospheric conditions and at night time. However, matching SWIR images against a gallery of color images is a very challenging task. The photometric properties of images in these two spectral bands are highly distinct. This work presents a novel cross-spectral face recognition scheme that encodes images filtered with a bank of Gabor filters followed by three local operators: Simplified Weber Local Descriptor, Local Binary Pattern, and Generalized Local Binary Pattern. Both magnitude and phase of filtered images are encoded. Matching encoded face images is performed by using a symmetric I-divergence. We quantify the verification and identification performance of the cross-spectral matcher on two multispectral face datasets. 
In the first dataset (PRE-TINDERS), both SWIR and visible gallery images are captured at a close distance (about 2 meters). In the second dataset (TINDERS), the probe SWIR images are collected at longer ranges (50 and 106 meters). The results on PRE-TINDERS dataset form a baseline for matching long range data. We also demonstrate the capability of the proposed approach by comparing its performance with the performance of Faceit G8, a commercial face recognition engine distributed by L1. The results show that the designed method outperforms Faceit G8 in terms of verification and identification rates on both datasets.", "In many applications, such as E-Passport and driver's license, the enrollment of face templates is done using visible light (VIS) face images. Such images are normally acquired in controlled environment where the lighting is approximately frontal. However, Authentication is done in variable lighting conditions. Matching of faces in VIS images taken in different lighting conditions is still a big challenge. A recent development in near infrared (NIR) image based face recognition [1] has well overcome the difficulty arising from lighting changes. However, it requires that enrollment face images be acquired using NIR as well. In this paper, we present a new problem, that of matching a face in an NIR image against one in a VIS images, and propose a solution to it. The work is aimed to develop a new solution for meeting the accuracy requirement of face-based biometric recognition, by taking advantages of the recent NIR face technology while allowing the use of existing VIS face photos as gallery templates. Face recognition is done by matching an NIR probe face against a VIS gallery face. Based on an analysis of properties of NIR and VIS face images, we propose a learning-based approach for the different modality matching. 
A mechanism of correlation between NIR and VIS faces is learned from NIR → VIS face pairs, and the learned correlation is used to evaluate similarity between an NIR face and a VIS face. We provide preliminary results of NIR → VIS face matching for recognition under different illumination conditions. The results demonstrate advantages of NIR → VIS matching over VIS → VIS matching.", "Heterogeneous face images come from different lighting conditions or different imaging devices, such as visible light (VIS) and near infrared (NIR) based. Because heterogeneous face images can have different skin spectra-optical properties, direct appearance based matching is no longer appropriate for solving the problem. Hence we need to find facial features common in heterogeneous images. For this, first we use Difference-of-Gaussian filtering to obtain a normalized appearance for all heterogeneous faces. We then apply MB-LBP, an extension of LBP operator, to encode the local image structures in the transformed domain, and further learn the most discriminant local features for recognition. Experiments show that the proposed method significantly outperforms existing ones in matching between VIS and NIR face images.", "Face recognition algorithms need to deal with variable lighting conditions. Near infrared (NIR) image based face recognition technology has been proposed to effectively overcome this difficulty. However, it requires that the enrolled face images be captured using NIR images whereas many applications require visual (VIS) images for enrollment templates. To take advantage of NIR face images for illumination-invariant face recognition and allow the use of VIS face images for enrollment, we encounter a new face image pattern recognition problem, that is, heterogeneous face matching between NIR versus VIS faces. 
In this paper, we present a subspace learning framework named Coupled Spectral Regression (CSR) to solve this challenge problem of coupling the two types of face images and matching between them. CSR first models the properties of different types of data separately and then learns two associated projections to project heterogeneous data (e.g. VIS and NIR) respectively into a discriminative common subspace in which classification is finally performed. Compared to other existing methods, CSR is computational efficient, benefiting from the efficiency of spectral regression and has better generalization performance. Experimental results on VIS-NIR face database show that the proposed CSR method significantly outperforms the existing methods.", "Matching near-infrared (NIR) face images to visible light (VIS) face images offers a robust approach to face recognition with unconstrained illumination. In this paper we propose a novel method of heterogeneous face recognition that uses a common feature-based representation for both NIR images as well as VIS images. Linear discriminant analysis is performed on a collection of random subspaces to learn discriminative projections. NIR and VIS images are matched (i) directly using the random subspace projections, and (ii) using sparse representation classification. Experimental results demonstrate the effectiveness of the proposed approach for matching NIR and VIS face images." ] }
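The DoG-plus-local-binary-pattern pipeline that @cite_2 applies to cross-spectral matching can be sketched roughly as below. This is an illustrative reconstruction, not the paper's implementation: box blurs stand in for the Gaussian filters, and the 4×4 block grid, blur radii, and plain 8-neighbour LBP (rather than multi-block LBP with AdaBoost-selected features) are assumed parameters.

```python
import numpy as np

def box_blur(img, r):
    """Mean filter with radius r (edge-padded); a cheap stand-in for
    the Gaussian smoothing used in a DoG pipeline."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dog_normalize(img, r1=1, r2=2):
    """Difference of two smoothings: suppresses the slowly varying
    illumination component that differs most between spectral bands."""
    img = img.astype(np.float64)
    return box_blur(img, r1) - box_blur(img, r2)

def lbp_codes(img):
    """Plain 8-neighbour local binary pattern code per interior pixel."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def face_descriptor(img, grid=4):
    """Concatenate per-block LBP histograms of the DoG-filtered face."""
    codes = lbp_codes(dog_normalize(img))
    h, w = codes.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = codes[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)
```

Descriptors extracted this way from the two modalities are then compared with a histogram distance, with LDA or boosted feature selection layered on top in the actual systems.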
1601.05266
2950117358
Mobile users are envisioned to exploit direct communication opportunities between their portable devices, in order to enrich the set of services they can access through cellular or WiFi networks. Sharing contents of common interest or providing access to resources or services between peers can enhance a mobile node's capabilities, offload the cellular network, and disseminate information to nodes without Internet access. Interest patterns, i.e. how many nodes are interested in each content or service (popularity), as well as how many users can provide a content or service (availability) impact the performance and feasibility of envisioned applications. In this paper, we establish an analytical framework to study the effects of these factors on the delay and success probability of a content/service access request through opportunistic communication. We also apply our framework to the mobile data offloading problem and provide insights for the optimization of its performance. We validate our model and results through realistic simulations, using datasets of real opportunistic networks.
Under a different setting, @cite_24 @cite_8 study content sharing mechanisms with limited resources (e.g. buffer sizes, number of holders). In @cite_24 , the authors analytically investigate the cost-effectiveness tradeoffs of data dissemination, and propose techniques based on contact patterns (i.e. @math ) and nodes' interests. Similarly, CEDO @cite_8 aims at maximizing the total content delivery rate: by maintaining a utility per content, nodes make appropriate drop and scheduling decisions.
{ "cite_N": [ "@cite_24", "@cite_8" ], "mid": [ "2171053358", "2161604644" ], "abstract": [ "Data dissemination is useful for many applications of Disruption Tolerant Networks (DTNs). Current data dissemination schemes are generally network-centric ignoring user interests. In this paper, we propose a novel approach for user-centric data dissemination in DTNs, which considers satisfying user interests and maximizes the cost-effectiveness of data dissemination. Our approach is based on a social centrality metric, which considers the social contact patterns and interests of mobile users simultaneously, and thus ensures effective relay selection. The performance of our approach is evaluated from both theoretical and experimental perspectives. By formal analysis, we show the lower bound on the cost-effectiveness of data dissemination, and analytically investigate the tradeoff between the effectiveness of relay selection and the overhead of maintaining network information. By trace-driven simulations, we show that our approach achieves better cost-effectiveness than existing data dissemination schemes.", "Emerging challenged networks require new protocols and strategies to cope with a high degree of mobility, high delays and unknown, possibly non-existing routes within the network. Researchers have proposed different store-carry-and-forward protocols for data delivery in challenged networks. These have been complemented with appropriate drop and scheduling policies that deal with the limitations of the nodes' buffers and the limited duration of opportunistic encounters in these networks. Nevertheless, the vast majority of these protocols and strategies are designed for end-to-end transmissions. Yet, a paradigm shift from the traditional way of addressing the endpoints in the network has been occurring towards content-centric networking. To this end, we present CEDO, a content-centric dissemination algorithm for challenged networks. 
CEDO aims at maximizing the total delivery-rate of distributed content in a setting where a range of contents of different popularity may be requested and stored, but nodes have limited resources. It achieves this by maintaining a delivery-rate utility per content that is proportional to the content miss rate and that is used by the nodes to make appropriate drop and scheduling decisions. This delivery-rate utility can be estimated locally by each node using unbiased estimators fed by sampled information on the mobile network obtained by gossiping. Both simulations and theory suggest that CEDO achieves its set goal, and outperforms a baseline LRU-based policy by 72%, even in relatively small scenarios. The framework followed by CEDO is general enough to be applied to other global performance objectives as well." ] }
1601.05266
2950117358
Mobile users are envisioned to exploit direct communication opportunities between their portable devices, in order to enrich the set of services they can access through cellular or WiFi networks. Sharing contents of common interest or providing access to resources or services between peers can enhance a mobile node's capabilities, offload the cellular network, and disseminate information to nodes without Internet access. Interest patterns, i.e. how many nodes are interested in each content or service (popularity), as well as how many users can provide a content or service (availability) impact the performance and feasibility of envisioned applications. In this paper, we establish an analytical framework to study the effects of these factors on the delay and success probability of a content/service access request through opportunistic communication. We also apply our framework to the mobile data offloading problem and provide insights for the optimization of its performance. We validate our model and results through realistic simulations, using datasets of real opportunistic networks.
Recently, further novel content-centric applications have been proposed, such as location-based applications @cite_9 @cite_1 and mobile data offloading @cite_27 @cite_34 @cite_32 . The latter category, due to the rapid increase of mobile data demand, has attracted a lot of attention. In the setting of @cite_27 , content copies are initially distributed (through the infrastructure) to a subset of mobile nodes, which then start propagating the contents epidemically. Differently, in @cite_34 the authors consider a limited number of holders, and study how to select the best holder target set for each message. In @cite_21 , the same problem is considered, and (centralized) optimization algorithms are proposed that take into account more information about the network: namely, the sizes and lifetimes of different contents, and the interests, privacy policies, and buffer sizes of each node.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_1", "@cite_32", "@cite_27", "@cite_34" ], "mid": [ "2037806787", "", "2042928587", "", "1498081891", "2167718801" ], "abstract": [ "In the inaugural issue of MC2R in April 1997 [24], I highlighted the seminal influence of mobility in computing. At that time, the goal of \"information at your fingertips anywhere, anytime\" was only a dream. Today, through relentless pursuit of innovations in wireless technology, energy-efficient portable hardware and adaptive software, we have largely attained this goal. Ubiquitous email and Web access is a reality that is experienced by millions of users worldwide through their Blackberries, iPhones, iPads, Windows Phone devices, and Android-based devices. Mobile Web-based services and location-aware advertising opportunities have emerged, triggering large commercial investments. Mobile computing has arrived as a lucrative business proposition. Looking ahead, what are the dreams that will inspire our future efforts in mobile computing? We begin this paper by considering some imaginary mobile computing scenarios from the future. We then extract the deep assumptions implicit in these scenarios, and use them to speculate on the future trajectory of mobile computing.", "", "Opportunistic communication between mobile nodes allows for asynchronous content sharing within groups. Limiting the spread of information to a geographic area creates an infrastructure-less variant of digital graffiti, a social network with coupling in space and limited decoupling in time. Due to its nature, this kind of a communication network lends itself readily to name-oriented abstractions. In this paper, we extend our previous work on floating content, extract its fundamental characteristics, and define a system model and a simple API with a set of basic programming elements to support applications in leveraging opportunistic content sharing as a generic communication facility. 
We validate our API through application examples and show how their communication needs are mapped to our model. In addition, we also implement our API in our simulator and demonstrate the feasibility of these kinds of applications.", "", "Major wireless operators are nowadays facing network capacity issues in striving to meet the growing demands of mobile users. At the same time, 3G-enabled devices increasingly benefit from ad hoc radio connectivity (e.g., Wi-Fi). In this context of hybrid connectivity, we propose Push-and-track, a content dissemination framework that harnesses ad hoc communication opportunities to minimize the load on the wireless infrastructure while guaranteeing tight delivery delays. It achieves this through a control loop that collects user-sent acknowledgements to determine if new copies need to be reinjected into the network through the 3G interface. Push-and-Track includes multiple strategies to determine how many copies of the content should be injected, when, and to whom. The short delay-tolerance of common content, such as news or road traffic updates, make them suitable for such a system. Based on a realistic large-scale vehicular dataset from the city of Bologna composed of more than 10,000 vehicles, we demonstrate that Push-and-Track consistently meets its delivery objectives while reducing the use of the 3G network by over 90%.", "3G networks are currently overloaded, due to the increasing popularity of various applications for smartphones. Offloading mobile data traffic through opportunistic communications is a promising solution to partially solve this problem, because there is almost no monetary cost for it. We propose to exploit opportunistic communications to facilitate information dissemination in the emerging Mobile Social Networks (MoSoNets) and thus reduce the amount of mobile data traffic. As a case study, we investigate the target-set selection problem for information delivery.
In particular, we study how to select the target set with only k users, such that we can minimize the mobile data traffic over cellular networks. We propose three algorithms, called Greedy, Heuristic, and Random, for this problem and evaluate their performance through an extensive trace-driven simulation study. Our simulation results verify the efficiency of these algorithms for both synthetic and real-world mobility traces. For example, the Heuristic algorithm can offload mobile data traffic by up to 73.66 percent for a real-world mobility trace. Moreover, to investigate the feasibility of opportunistic communications for mobile phones, we implement a proof-of-concept prototype, called Opp-off, on Nokia N900 smartphones, which utilizes their Bluetooth interface for device service discovery and content transfer." ] }
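The Greedy algorithm of @cite_34 for target-set selection can be sketched as a standard marginal-coverage loop. The `reach` sets below are an illustrative stand-in for whatever contact-trace-based estimate of a seed's epidemic reach a real deployment would compute.

```python
def greedy_target_set(reach, k):
    """Greedy seed selection for offloading: at each step pick the
    node whose estimated opportunistic reach covers the most
    not-yet-covered users. `reach` maps node -> set of users the
    node's copy is expected to reach."""
    seeds, covered = [], set()
    candidates = set(reach)
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda n: len(reach[n] - covered))
        if not reach[best] - covered:
            break  # no marginal gain left
        seeds.append(best)
        covered |= reach[best]
        candidates.discard(best)
    return seeds, covered
```

By submodularity of coverage, this greedy loop gives the usual (1 - 1/e) approximation guarantee for the offloaded-traffic objective.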
1601.04743
2295394089
We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit @math of size @math and degree @math over a field @math , and any inputs @math , @math the Prover sends the Verifier the values @math and a proof of @math length, and @math the Verifier tosses @math coins and can check the proof in about @math time, with probability of error less than @math . For small degree @math , this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time, and has many applications. For example, we obtain MA-proof systems that run in @math time (for various @math ) for the Permanent, @math Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of @math - @math linear programs. In general, the value of any polynomial in Valiant's class @math can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others. We also give a three-round (AMA) proof system for quantified Boolean formulas running in @math time, nearly-linear time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and a MA-proof system running in @math -time for counting @math -cliques in graphs. We point to some potential future directions for refuting the Nondeterministic Strong ETH.
Goldwasser, Kalai, and Rothblum @cite_14 study what they call interactive proofs for muggles, proving (for example) that for all logspace-uniform NC circuits @math , one can prove that @math on an input @math of length @math with @math verification time, @math space, and @math communication complexity between the prover and verifier. Despite the amazingly low running time and space usage, the protocols of this work are highly interactive: they need @math rounds of communication between the prover and verifier as well.
{ "cite_N": [ "@cite_14" ], "mid": [ "2146099890" ], "abstract": [ "In this work we study interactive proofs for tractable languages. The (honest) prover should be efficient and run in polynomial time, or in other words a \"muggle\". The verifier should be super-efficient and run in nearly-linear time. These proof systems can be used for delegating computation: a server can run a computation for a client and interactively prove the correctness of the result. The client can verify the result's correctness in nearly-linear time (instead of running the entire computation itself). Previously, related questions were considered in the Holographic Proof setting by Babai, Fortnow, Levin and Szegedy, in the argument setting under computational assumptions by Kilian, and in the random oracle model by Micali. Our focus, however, is on the original interactive proof model where no assumptions are made on the computational power or adaptiveness of dishonest provers. Our main technical theorem gives a public coin interactive proof for any language computable by a log-space uniform boolean circuit with depth d and input length n. The verifier runs in time (n+d) • polylog(n) and space O(log(n)), the communication complexity is d • polylog(n), and the prover runs in time poly(n). In particular, for languages computable by log-space uniform NC (circuits of polylog(n) depth), the prover is efficient, the verifier runs in time n • polylog(n) and space O(log(n)), and the communication complexity is polylog(n). Using this theorem we make progress on several questions: We show how to construct short (polylog size) computationally sound non-interactive certificates of correctness for any log-space uniform NC computation, in the public-key model. The certificates can be verified in quasi-linear time and are for a designated verifier: each certificate is tailored to the verifier's public key. 
This result uses a recent transformation of Kalai and Raz from public-coin interactive proofs to one-round arguments. The soundness of the certificates is based on the existence of a PIR scheme with polylog communication. Interactive proofs with public-coin, log-space, poly-time verifiers for all of P. This settles an open question regarding the expressive power of proof systems with such verifiers. Zero-knowledge interactive proofs with communication complexity that is quasi-linear in the witness, length for any NP language verifiable in NC, based on the existence of one-way functions. Probabilistically checkable arguments (a model due to Kalai and Raz) of size polynomial in the witness length (rather than the instance length) for any NP language verifiable in NC, under computational assumptions." ] }
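The curve-interpolation trick underlying Merlin-Arthur multipoint evaluation can be illustrated on a univariate toy case: Merlin sends the composition q = f∘g along a low-degree curve g through the query points, and Arthur spot-checks q at one random field element (corresponding to the verifier's single circuit evaluation) before reading off all K answers. The field size and degrees below are illustrative assumptions, and the "circuit" is just a coefficient-form polynomial.

```python
P = 2_000_003  # prime modulus; size is an illustrative assumption

def poly_eval(coeffs, x, p=P):
    """Horner evaluation; coeffs are low-degree-first."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def lagrange_interpolate(points, p=P):
    """Coefficients of the unique polynomial through the points, mod p."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)  # multiply basis by (x - xj)
            for k, b in enumerate(basis):
                new[k] = (new[k] - xj * b) % p
                new[k + 1] = (new[k + 1] + b) % p
            basis = new
            denom = (denom * (xi - xj)) % p
        scale = yi * pow(denom, p - 2, p) % p  # divide via Fermat inverse
        for k, b in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * b) % p
    return coeffs

def curve_through(inputs, p=P):
    """Low-degree curve g with g(i) = inputs[i]; both parties build it."""
    return lagrange_interpolate(list(enumerate(inputs)), p)

def prover(f, inputs, p=P):
    """Merlin: send q(t) = f(g(t)) as an explicit polynomial (the proof)."""
    g = curve_through(inputs, p)
    deg_q = (len(f) - 1) * (len(inputs) - 1)
    samples = [(t, poly_eval(f, poly_eval(g, t, p), p))
               for t in range(deg_q + 1)]
    return lagrange_interpolate(samples, p)

def verifier(f, inputs, q, rng, p=P):
    """Arthur: one random consistency check q(t) == f(g(t)), then read
    the K claimed values off q at t = 0..K-1."""
    g = curve_through(inputs, p)
    t = rng.randrange(p)
    if poly_eval(q, t, p) != poly_eval(f, poly_eval(g, t, p), p):
        return None  # reject: Schwartz-Zippel catches a false q w.h.p.
    return [poly_eval(q, i, p) for i in range(len(inputs))]
```

Soundness follows because a false q of degree d(K-1) can agree with f∘g on at most a d(K-1)/p fraction of the field, matching the error bound quoted in the abstract.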
1601.04276
2762400634
We analyze the exact exponential decay rate of the expected amount of information leaked to the wiretapper in Wyner’s wiretap channel setting using wiretap channel codes constructed from both i.i.d. and constant-composition random codes. Our analysis for those sampled from i.i.d. random coding ensemble shows that the previously known achievable secrecy exponent using this ensemble is indeed the exact exponent for an average code in the ensemble. Furthermore, our analysis on wiretap channel codes constructed from the ensemble of constant-composition random codes leads to an exponent which, in addition to being the exact exponent for an average code, is larger than the achievable secrecy exponent that has been established so far in the literature for this ensemble (which in turn was known to be smaller than that achievable by wiretap channel codes sampled from i.i.d. random coding ensemble). We show examples where the exact secrecy exponent for the wiretap channel codes constructed from random constant-composition codes is larger than that of those constructed from i.i.d. random codes and examples where the exact secrecy exponent for the wiretap channel codes constructed from i.i.d. random codes is larger than that of those constructed from constant-composition random codes. We, hence, conclude that, unlike the error correction problem, there is no general ordering between the two random coding ensembles in terms of their secrecy exponent.
In addition to those cited above, @cite_25 also presents a simple achievability proof for channel resolvability. Based on this proof, the authors, in their subsequent work @cite_21 , establish strong secrecy for the wiretap channel using resolvability-based constructions of wiretap channel codes. The performance of a code for the wiretap channel is measured via two figures of merit, namely, the error probability and the information leakage, both of which decay exponentially in the block-length when a wiretap channel code sampled from the ensemble of random codes is employed on stationary memoryless channels (as we will also discuss in Theorem ). The trade-off between secrecy and error exponents (as well as other generalizations of the model) is studied in @cite_7 .
{ "cite_N": [ "@cite_21", "@cite_25", "@cite_7" ], "mid": [ "2963645489", "2010669591", "2006333615" ], "abstract": [ "", "The minimum rate needed to accurately approximate a product distribution based on an unnormalized informational divergence is shown to be a mutual information. This result subsumes results of Wyner on common information and Han-Verdu on resolvability. The result also extends to cases where the source distribution is unknown but the entropy is known.", "We consider the secret key generation problem when sources are randomly excited by the sender and there is a noiseless public discussion channel. Our setting is thus similar to recent works on channels with action-dependent states where the channel state may be influenced by some of the parties involved. We derive single-letter expressions for the secret key capacity through a type of source emulation analysis. We also derive lower bounds on the achievable reliability and secrecy exponents, i.e., the exponential rates of decay of the probability of decoding error and of the information leakage. These exponents allow us to determine a set of strongly-achievable secret key rates. For degraded eavesdroppers the maximum strongly-achievable rate equals the secret key capacity; our exponents can also be specialized to previously known results. In deriving our strong achievability results we introduce a coding scheme that combines wiretap coding (to excite the channel) and key extraction (to distill keys from residual randomness). The secret key capacity is naturally seen to be a combination of both source- and channel-type randomness. Through examples we illustrate a fundamental interplay between the portion of the secret key rate due to each type of randomness. We also illustrate inherent tradeoffs between the achievable reliability and secrecy exponents. Our new scheme also naturally accommodates rate limits on the public discussion. 
We show that under rate constraints we are able to achieve larger rates than those that can be attained through a pure source emulation strategy." ] }
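The quantity these exponent results bound, the information leaked to the wiretapper by a random-binning code, can be computed exactly for toy block-lengths. The sketch below (binary inputs, BSC wiretapper, uniform in-bin randomization, all parameters illustrative) simply enumerates I(M; Z^n).

```python
import itertools
import math

def bsc_likelihood(x, z, p):
    """Memoryless BSC(p): probability of observing z given input x."""
    flips = sum(a != b for a, b in zip(x, z))
    return (p ** flips) * ((1 - p) ** (len(x) - flips))

def wiretap_leakage(n, num_msgs, bin_size, p, rng):
    """Exact leakage I(M; Z^n), in bits, of a random-binning code:
    each message owns bin_size random codewords, the transmitted word
    is uniform within the bin, and Eve observes through a BSC(p)."""
    codebook = [[tuple(rng.randrange(2) for _ in range(n))
                 for _ in range(bin_size)] for _ in range(num_msgs)]
    outputs = list(itertools.product((0, 1), repeat=n))
    # P(z | m): average channel likelihood over the message's bin.
    pz_given_m = [{z: sum(bsc_likelihood(x, z, p) for x in bin_) / bin_size
                   for z in outputs} for bin_ in codebook]
    pz = {z: sum(d[z] for d in pz_given_m) / num_msgs for z in outputs}
    # I(M; Z^n) = E_m[ D(P(.|m) || P(.)) ] for uniform messages.
    leak = 0.0
    for d in pz_given_m:
        for z, q in d.items():
            if q > 0:
                leak += (q / num_msgs) * math.log2(q / pz[z])
    return leak
```

Sweeping n while growing bin_size at a fixed rate above the wiretapper's capacity exhibits the exponential decay of the expected leakage whose exact rate the paper characterizes.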
1601.04276
2762400634
We analyze the exact exponential decay rate of the expected amount of information leaked to the wiretapper in Wyner’s wiretap channel setting using wiretap channel codes constructed from both i.i.d. and constant-composition random codes. Our analysis for those sampled from i.i.d. random coding ensemble shows that the previously known achievable secrecy exponent using this ensemble is indeed the exact exponent for an average code in the ensemble. Furthermore, our analysis on wiretap channel codes constructed from the ensemble of constant-composition random codes leads to an exponent which, in addition to being the exact exponent for an average code, is larger than the achievable secrecy exponent that has been established so far in the literature for this ensemble (which in turn was known to be smaller than that achievable by wiretap channel codes sampled from i.i.d. random coding ensemble). We show examples where the exact secrecy exponent for the wiretap channel codes constructed from random constant-composition codes is larger than that of those constructed from i.i.d. random codes and examples where the exact secrecy exponent for the wiretap channel codes constructed from i.i.d. random codes is larger than that of those constructed from constant-composition random codes. We, hence, conclude that, unlike the error correction problem, there is no general ordering between the two random coding ensembles in terms of their secrecy exponent.
Another important problem in the realm of information-theoretic secrecy is secret key generation @cite_5 @cite_3 . The secrecy exponents related to this model are studied in @cite_27 @cite_12 @cite_7 @cite_9 and, in particular, are shown in @cite_12 @cite_9 to be exact.
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_3", "@cite_27", "@cite_5", "@cite_12" ], "mid": [ "2006333615", "1652585999", "2108777864", "2148587342", "2106833918", "2027642452" ], "abstract": [ "We consider the secret key generation problem when sources are randomly excited by the sender and there is a noiseless public discussion channel. Our setting is thus similar to recent works on channels with action-dependent states where the channel state may be influenced by some of the parties involved. We derive single-letter expressions for the secret key capacity through a type of source emulation analysis. We also derive lower bounds on the achievable reliability and secrecy exponents, i.e., the exponential rates of decay of the probability of decoding error and of the information leakage. These exponents allow us to determine a set of strongly-achievable secret key rates. For degraded eavesdroppers the maximum strongly-achievable rate equals the secret key capacity; our exponents can also be specialized to previously known results. In deriving our strong achievability results we introduce a coding scheme that combines wiretap coding (to excite the channel) and key extraction (to distill keys from residual randomness). The secret key capacity is naturally seen to be a combination of both source- and channel-type randomness. Through examples we illustrate a fundamental interplay between the portion of the secret key rate due to each type of randomness. We also illustrate inherent tradeoffs between the achievable reliability and secrecy exponents. Our new scheme also naturally accommodates rate limits on the public discussion. We show that under rate constraints we are able to achieve larger rates than those that can be attained through a pure source emulation strategy.", "In this paper, we evaluate the asymptotics of equivocations and their exponents. 
Specifically, we consider the effect of applying a hash function on a source and we quantify the level of non-uniformity and dependence of the compressed source from another correlated source. Unlike previous works that use the Shannon information measures to quantify randomness or information, in this paper, we consider a more general class of information measures, i.e., the Renyi information measures and their Gallager forms. We prove tight asymptotic results for the equivocation and its exponential decay rates by establishing new non-asymptotic bounds on the equivocation and evaluating these bounds asymptotically.", "As the first part of a study of problems involving common randomness at distance locations, information-theoretic models of secret sharing (generating a common random key at two terminals, without letting an eavesdropper obtain information about this key) are considered. The concept of key-capacity is defined. Single-letter formulas of key-capacity are obtained for several models, and bounds to key-capacity are derived for other models. >", "We derive a new upper bound for Eve's information in secret key generation from a common random number without communication. This bound improves on Bennett 's bound based on the Renyi entropy of order 2 because the bound obtained here uses the Renyi entropy of order 1+s for s ∈ [0,1]. This bound is applied to a wire-tap channel. Then, we derive an exponential upper bound for Eve's information. Our exponent is compared with Hayashi 's exponent. For the additive case, the bound obtained here is better. The result is applied to secret key agreement by public discussion.", "The problem of generating a shared secret key S by two parties knowing dependent random variables X and Y, respectively, but not sharing a secret key initially, is considered. 
An enemy who knows the random variable Z, jointly distributed with X and Y according to some probability distribution P_XYZ, can also receive all messages exchanged by the two parties over a public channel. The goal of a protocol is that the enemy obtains at most a negligible amount of information about S. Upper bounds on H(S) as a function of P_XYZ are presented. Lower bounds on the rate H(S)/N (as N → ∞) are derived for the case in which X = (X_1, ..., X_N), Y = (Y_1, ..., Y_N) and Z = (Z_1, ..., Z_N) result from N independent executions of a random experiment generating X_i, Y_i and Z_i for i = 1, ..., N. It is shown that such a secret key agreement is possible for a scenario in which all three parties receive the output of a binary symmetric source over independent binary symmetric channels, even when the enemy's channel is superior to the other two channels. >", "Motivated by the desirability of universal composability, we analyze in terms of L1 distinguishability the task of secret key generation from a joint random variable. Under this secrecy criterion, using the Renyi entropy of order 1+s for s ∈ [0,1], we derive a new upper bound of Eve's distinguishability under the application of the universal2 hash functions. It is also shown that this bound gives the tight exponential rate of decrease in the case of independent and identical distributions. The result is applied to the wiretap channel model and to secret key generation (distillation) by public discussion." ] }
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14 day vacation that spans many famous tourist destinations and also provide results from a user-study to assess our results.
Research in video summarization identifies key frames in video shots using optical flow to summarize a single complex shot @cite_12 . Other techniques used low level image analysis and parsing to segment and abstract a video source @cite_28 and used a "well-distributed" hierarchy of key frame sequences for summarization @cite_19 . These methods are aimed at the summarization of specific videos from a stable viewpoint and are not directly applicable to long-term egocentric video.
{ "cite_N": [ "@cite_28", "@cite_19", "@cite_12" ], "mid": [ "2085425470", "1598774275", "1840179877" ], "abstract": [ "This paper presents an integrated system solution for computer assisted video parsing and content-based video retrieval and browsing. The effectiveness of this solution lies in its use of video content information derived from a parsing process, being driven by visual feature analysis. That is, parsing will temporally segment and abstract a video source, based on low-level image analyses; then retrieval and browsing of video will be based on key-frame, temporal and motion features of shots. These processes and a set of tools to facilitate content-based video retrieval and browsing using the feature data set are presented in detail as functions of an integrated system.", "This paper presents a novel optimization-based approach for video key frame selection. We define key frames to be a temporally ordered subsequence of the original video sequence, and the optimal k key frames are the subsequence of length k that optimizes an energy function we define on all subsequences. These optimal key subsequences form a hierarchy, with one such subsequence for every k less than the length of the video n, and this hierarchy can be retrieved all at once using a dynamic programming process with polynomial (O(n3)) computation time. To further reduce computation, an approximate solution based on a greedy algorithm can compute the key frame hierarchy in O(n?log(n)). We also present a hybrid method, which flexibly captures the virtues of both approaches. Our empirical comparisons between the optimal and greedy solutions indicate their results are very close. We show that the greedy algorithm is more appropriate for video streaming and network applications where compression ratios may change dynamically, and provide a method to compute the appropriate times to advance through key frames during video playback of the compressed stream. 
Additionally, we exploit the results of the greedy algorithm to devise an interactive video content browser. To quantify our algorithms' effectiveness, we propose a new evaluation measure, called \"well-distributed\" key frames. Our experimental results on several videos show that both the optimal and the greedy algorithms outperform several popular existing algorithms in terms of summarization quality, computational time, and guaranteed convergence.", "This paper describes a new algorithm for identifying key frames in shots from video programs. We use optical flow computations to identify local minima of motion in a shot-stillness emphasizes the image for the viewer. This technique allows us to identify both gestures which are emphasized by momentary pauses and camera motion which links together several distinct images in a single shot. Results show that our algorithm can successfully select several key frames from a single complex shot which effectively summarize the shot." ] }
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14 day vacation that spans many famous tourist destinations and also provide results from a user-study to assess our results.
In recent years, summarization efforts have started focusing on leveraging objects and activities within the scene. Features such as "informative poses" @cite_33 and "object of interest", based on labels provided by the user for a small number of frames @cite_44 , have helped in activity visualization, video summarization, and generating video synopsis from web-cam videos @cite_24 .
{ "cite_N": [ "@cite_44", "@cite_33", "@cite_24" ], "mid": [ "2116946038", "1979185460", "2126802797" ], "abstract": [ "We propose a novel method for removing irrelevant frames from a video given user-provided frame-level labeling for a very small number of frames. We first hypothesize a number of windows which possibly contain the object of interest, and then determine which window(s) truly contain the object of interest. Our method enjoys several favorable properties. First, compared to approaches where a single descriptor is used to describe a whole frame, each window's feature descriptor has the chance of genuinely describing the object of interest; hence it is less affected by background clutter. Second, by considering the temporal continuity of a video instead of treating frames as independent, we can hypothesize the location of the windows more accurately. Third, by infusing prior knowledge into the patch-level model, we can precisely follow the trajectory of the object of interest. This allows us to largely reduce the number of windows and hence reduce the chance of overfitting the data during learning. We demonstrate the effectiveness of the method by comparing it to several other semi-supervised learning approaches on challenging video clips.", "We propose a method for generating visual summaries of video. It reduces browsing time, minimizes screen-space utilization, while preserving the crux of the video content and the sensation of motion. The outputs are images or short clips, denoted as dynamic stills or clip trailers, respectively. The method selects informative poses out of extracted video objects. Optimal rotations and transparency supports visualization of an increased number of poses, leading to concise activity visualization. Our method addresses previously avoided scenarios, e.g., activities occurring in one place, or scenes with non-static background. 
We demonstrate and evaluate the method for various types of videos.", "The world is covered with millions of Webcams, many transmit everything in their field of view over the Internet 24 hours a day. A Web search finds public webcams in airports, intersections, classrooms, parks, shops, ski resorts, and more. Even more private surveillance cameras cover many private and public facilities. Webcams are an endless resource, but most of the video broadcast will be of little interest due to lack of activity. We propose to generate a short video that will be a synopsis of an endless video streams, generated by webcams or surveillance cameras. We would like to address queries like \"I would like to watch in one minute the highlights of this camera broadcast during the past day\". The process includes two major phases: (i) An online conversion of the video stream into a database of objects and activities (rather than frames), (ii) A response phase, generating the video synopsis as a response to the user's query. To include maximum information in a short synopsis we simultaneously show activities that may have happened at different times. The synopsis video can also be used as an index into the original video stream." ] }
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14 day vacation that spans many famous tourist destinations and also provide results from a user-study to assess our results.
Other summarization techniques include visualizing short clips in a single image using a schematic storyboard format @cite_6 and visualizing tour videos on a map-based storyboard that allows users to navigate through the video @cite_10 . Non-chronological synopsis has also been explored, where several actions that originally occurred at different times are shown simultaneously @cite_18 and all the essential activities of the original video are showcased together @cite_21 . While practical, these methods do not scale to the problem we are addressing: extended videos spanning days of activities.
{ "cite_N": [ "@cite_18", "@cite_21", "@cite_10", "@cite_6" ], "mid": [ "2163527813", "2115060048", "", "2030371206" ], "abstract": [ "The power of video over still images is the ability to represent dynamic activities. But video browsing and retrieval are inconvenient due to inherent spatio-temporal redundancies, where some time intervals may have no activity, or have activities that occur in a small image region. Video synopsis aims to provide a compact video representation, while preserving the essential activities of the original video. We present dynamic video synopsis, where most of the activity in the video is condensed by simultaneously showing several actions, even when they originally occurred at different times. For example, we can create a \"stroboscopic movie\", where multiple dynamic instances of a moving object are played simultaneously. This is an extension of the still stroboscopic picture. Previous approaches for video abstraction addressed mostly the temporal redundancy by selecting representative key-frames or time intervals. In dynamic video synopsis the activity is shifted into a significantly shorter period, in which the activity is much denser. Video examples can be found online in http: www.vision.huji.ac.il synopsis", "The amount of captured video is growing with the increased numbers of video cameras, especially the increase of millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing of such a video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. 
The synopsis video is also an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of an endless video streams, as generated by Webcams and by surveillance cameras. It can address queries like \"show in one minute the synopsis of this camera broadcast during the past day''. This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames). (ii) A response phase, generating the video synopsis as a response to the user's query.", "", "We present a method for visualizing short video clips in a single static image, using the visual language of storyboards. These schematic storyboards are composed from multiple input frames and annotated using outlines, arrows, and text describing the motion in the scene. The principal advantage of this storyboard representation over standard representations of video -- generally either a static thumbnail image or a playback of the video clip in its entirety -- is that it requires only a moment to observe and comprehend but at the same time retains much of the detail of the source video. Our system renders a schematic storyboard layout based on a small amount of user interaction. We also demonstrate an interaction technique to scrub through time using the natural spatial dimensions of the storyboard. Potential applications include video editing, surveillance summarization, assembly instructions, composition of graphic novels, and illustration of camera technique for film studies." ] }
1601.04406
2950637306
We present an approach for identifying picturesque highlights from large amounts of egocentric video data. Given a set of egocentric videos captured over the course of a vacation, our method analyzes the videos and looks for images that have good picturesque and artistic properties. We introduce novel techniques to automatically determine aesthetic features such as composition, symmetry and color vibrancy in egocentric videos and rank the video frames based on their photographic qualities to generate highlights. Our approach also uses contextual information such as GPS, when available, to assess the relative importance of each geographic location where the vacation videos were shot. Furthermore, we specifically leverage the properties of egocentric videos to improve our highlight detection. We demonstrate results on a new egocentric vacation dataset which includes 26.5 hours of videos taken over a 14 day vacation that spans many famous tourist destinations and also provide results from a user-study to assess our results.
Research on egocentric video analysis has mostly focused on activity recognition and activities of daily living. Activities and objects have been thoroughly leveraged to develop egocentric systems that can understand daily-living activities. Activities, actions, and objects are jointly modeled and object-hand interactions are assessed @cite_13 @cite_0 , while people and objects are discovered using region cues such as nearness to hands, gaze, and frequency of occurrence @cite_25 . Other approaches include learning object models from egocentric videos of household objects @cite_7 , and identifying objects being manipulated by hands @cite_26 @cite_5 . The use of objects has also been extended to develop a story-driven summarization approach: sub-events are detected in the video and linked based on the relationships between objects and how objects contribute to the progression of the events @cite_15 .
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_0", "@cite_5", "@cite_15", "@cite_13", "@cite_25" ], "mid": [ "2033639255", "2031688197", "", "", "2120645068", "2149276562", "2106229755" ], "abstract": [ "Identifying handled objects, i.e. objects being manipulated by a user, is essential for recognizing the person's activities. An egocentric camera as worn on the body enjoys many advantages such as having a natural first-person view and not needing to instrument the environment. It is also a challenging setting, where background clutter is known to be a major source of problems and is difficult to handle with the camera constantly and arbitrarily moving. In this work we develop a bottom-up motion-based approach to robustly segment out foreground objects in egocentric video and show that it greatly improves object recognition accuracy. Our key insight is that egocentric video of object manipulation is a special domain and many domain-specific cues can readily help. We compute dense optical flow and fit it into multiple affine layers. We then use a max-margin classifier to combine motion with empirical knowledge of object location and background movement as well as temporal cues of support region and color appearance. We evaluate our segmentation algorithm on the large Intel Egocentric Object Recognition dataset with 42 objects and 100K frames. We show that, when combined with temporal integration, figure-ground segmentation improves the accuracy of a SIFT-based recognition system from 33 to 60 , and that of a latent-HOG system from 64 to 86 .", "This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects which are present within it, and have no other knowledge regarding the appearance or location of objects. 
The key to our approach is a robust, unsupervised bottom up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly-supervised learning.", "", "", "We present a video summarization approach that discovers the story of an egocentric video. Given a long input video, our method selects a short chain of video sub shots depicting the essential events. Inspired by work in text analysis that links news articles over time, we define a random-walk based metric of influence between sub shots that reflects how visual objects contribute to the progression of events. Using this influence metric, we define an objective for the optimal k-subs hot summary. Whereas traditional methods optimize a summary's diversity or representative ness, ours explicitly accounts for how one sub-event \"leads to\" another-which, critically, captures event connectivity beyond simple object co-occurrence. As a result, our summaries provide a better sense of story. We apply our approach to over 12 hours of daily activity video taken from 23 unique camera wearers, and systematically evaluate its quality compared to multiple baselines with 34 human subjects.", "We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. 
Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.", "We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization." ] }