Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict)
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
@cite_13 @cite_18 invented FIAT, an early validation system based on fault injection. Their fault model simulates hardware faults (bit changes in memory). @cite_19 described FINE, a fault injection system for Unix kernels that simulates both hardware and operating-system software faults. In comparison, we inject high-level software faults (exceptions) in a modern platform (Java). @cite_9 added assertions to software that can be exercised with an "assertion violation" injector. The test driver enumerates different state changes that violate the assertion. By doing so, they are able to improve branch coverage, especially on error recovery code. This differs from our work in that we do not manually add any information to the system under study (tests or application). @cite_12 described a fault injector for exceptions, similar to ours, in order to improve catch coverage. In comparison to both @cite_9 and @cite_12 , we do not aim at improving coverage but at identifying the try-catch blocks that satisfy exception contracts.
{ "cite_N": [ "@cite_18", "@cite_9", "@cite_19", "@cite_13", "@cite_12" ], "mid": [ "2079267582", "1572403161", "2138458852", "2133029931", "2132338288" ], "abstract": [ "The results of several experiments conducted using the fault-injection-based automated testing (FIAT) system are presented. FIAT is capable of emulating a variety of distributed system architectures, and it provides the capabilities to monitor system behavior and inject faults for the purpose of experimental characterization and validation of a system's dependability. The experiments consist of exhaustively injecting three separate fault types into various locations, encompassing both the code and data portions of memory images, of two distinct applications executed with several different data values and sizes. Fault types are variations of memory bit faults. The results show that there are a limited number of system-level fault manifestations. These manifestations follow a normal distribution for each fault type. Error detection latencies are found to be normally distributed. The methodology can be used to predict the system-level fault responses during the system design stage.", "During testing, it is nearly impossible to run all statements or branches of a program. It is especially difficult to test the code used to respond to exceptional conditions. This untested code, often the error recovery code, will tend to be an error-prone part of a system. We show that test coverage can be increased through an \"assertion violation\" technique for injecting software faults during execution. Using our prototype tool, Visual C-Patrol (VCP), we were able to substantially increase test branch coverage in four software systems studied.", "The authors present a fault injection and monitoring environment (FINE) as a tool to study fault propagation in the UNIX kernel. FINE injects hardware-induced software errors and software faults into the UNIX kernel and traces the execution flow and key variables of the kernel. FINE consists of a fault injector, a software monitor, a workload generator, a controller, and several analysis utilities. Experiments on SunOS 4.1.2 are conducted by applying FINE to investigate fault propagation and to evaluate the impact of various types of faults. Fault propagation models are built for both hardware and software faults. Transient Markov reward analysis is performed to evaluate the loss of performance due to an injected fault. Experimental results show that memory and software faults usually have a very long latency, while bus and CPU faults tend to crash the system immediately. About half of the detected errors are data faults, which are detected when the system tries to access an unauthorized memory location. Only about 8% of faults propagate to other UNIX subsystems. Markov reward analysis shows that the performance loss incurred by bus faults and CPU faults is much higher than that incurred by software and memory faults. Among software faults, the impact of pointer faults is higher than that of nonpointer faults.", "An automated real-time distributed accelerated fault injection environment (FIAT) is presented as an attempt to provide suitable tools for the validation process. The authors present the concepts and design, as well as the implementation and evaluation, of the FIAT environment. As this system has been built, evaluated, and is currently in use, examples of fault-tolerant mechanisms such as checkpointing and duplicate-and-match are used to show its usefulness.", "We present a new approach that uses compiler-directed fault injection for coverage testing of recovery code in Internet services to evaluate their robustness to operating system and I/O hardware faults. We define a set of program-fault coverage metrics that enable quantification of Java catch blocks exercised during fault-injection experiments. We use compiler analyses to instrument application code in two ways: to direct fault injection to occur at appropriate points during execution, and to measure the resulting coverage. As a proof of concept for these ideas, we have applied our techniques manually to Muffin, a proxy server; we obtained a high degree of coverage of catch blocks, with, on average, 85% of the expected faults per catch being experienced as caught exceptions." ] }
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
Sinha @cite_4 analyzed the effect of exception handling constructs (throw and catch) on different static analyses. In contrast, we use dynamic information to reason about exception handling code. The same authors described a complete tool chain to help programmers work with exceptions @cite_8 . The information we provide (the list of source-independent, purely-resilient try-catch blocks, and so forth) is different and complementary, and could be integrated into such a tool. @cite_20 used exception injection to capture the error-related dependencies between artifacts of an application. They inject checked exceptions as well as 6 runtime, unchecked exceptions. We also use exception injection, but for a different goal: verifying try-catch contracts.
{ "cite_N": [ "@cite_4", "@cite_20", "@cite_8" ], "mid": [ "2106837287", "2108155806", "1947276475" ], "abstract": [ "Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and, thus, limit the usefulness of the applications that use them. This paper discusses the effects of exception handling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences-exceptions that are raised explicitly through throw statements-and exception handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several software engineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception handling constructs in Java programs and their effect on some analysis tasks.", "Automatic failure-path inference (AFPI) is an application-generic, automatic technique for dynamically discovering the failure dependency graphs of componentized Internet applications. AFPI's first phase is invasive, and relies on controlled fault injection to determine failure propagation; this phase requires no a priori knowledge of the application and takes on the order of hours to run. Once the system is deployed in production, the second, noninvasive phase of AFPI passively monitors the system, and updates the dependency graph as new failures are observed. 
This process is a good match for the perpetually-evolving software found in Internet systems; since no performance overhead is introduced, AFPI is feasible for live systems. We applied AFPI to J2EE and tested it by injecting Java exceptions into an e-commerce application and an online auction service. The resulting graphs of exception propagation are more detailed and accurate than what could be derived by time-consuming manual inspection or analysis of readily-available static application descriptions.", "Although object-oriented languages can improve programming practices, their characteristics may introduce new problems for software engineers. One important problem is the presence of implicit control flow caused by exception handling and polymorphism. Implicit control flow causes complex interactions, and can thus complicate software-engineering tasks. To address this problem, we present a systematic and structured approach, for supporting these tasks, based on the static and dynamic analyses of constructs that cause implicit control flow. Our approach provides software engineers with information for supporting and guiding development and maintenance tasks. We also present empirical results to illustrate the potential usefulness of our approach. Our studies show that, for the subjects considered, complex implicit control flow is always present and is generally not adequately exercised." ] }
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
@cite_2 described an exception monitoring system that resembles ours. Beyond the monitoring system we also provide a strategy and a set of analyses to verify two exception contracts.
{ "cite_N": [ "@cite_2" ], "mid": [ "2477564355" ], "abstract": [ "The exception mechanism is important for the development of robust programs, to make sure that exceptions are handled appropriately at run-time. In this paper, we develop a dynamic exception monitoring system which can trace the handling and propagation of thrown exceptions in real time. With this tool, programmers can examine the exception handling process in more detail and handle exceptions more effectively. Programmers can also trace only interesting exceptions by selecting options before execution. It also provides profile information after execution, which summarizes the exception handling in each method during execution. To reduce performance overhead, we implement the system based on code inlining, and present some experimental results." ] }
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
Ghosh and Kelly @cite_16 performed a special kind of mutation testing for improving test suites. Their fault model comprises "abend" faults: abnormal endings of catch blocks. This is similar to short-circuiting. We use the term "short-circuit" since it is a precise metaphor for what happens. In comparison, the term "abend" encompasses many more kinds of faults. In our paper, we claim that the new behavior observed as a result of short-circuit testing should not be considered as mutants to be killed. Actually, we claim the opposite: short-circuiting should remain undetected for the sake of source independence and pure resilience.
{ "cite_N": [ "@cite_16" ], "mid": [ "2031466154" ], "abstract": [ "Developers using third party software components need to test them to satisfy quality requirements. In the past, researchers have proposed fault injection testing approaches in which the component state is perturbed and the resulting effects on the rest of the system are observed. Non-availability of source code in third-party components makes it harder to perform source code level fault injection. Even if Java decompilers are used, they do not work well with obfuscated bytecode. We propose a technique that injects faults in Java software by manipulating the bytecode. Existing test suites are assessed according to their ability to detect the injected faults and improved accordingly. We present a case study using an open source Java component that demonstrates the feasibility and effectiveness of our approach. We also evaluate the usability of our approach on obfuscated bytecode." ] }
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
Fu and Ryder @cite_0 presented a static analysis for revealing exception chains (exceptions encapsulated in one another). In contrast, our approach is a dynamic analysis. We do not focus on exception chains; instead, we propose an analysis of source independence and pure resilience. Mercadal @cite_5 presented an approach to manage error handling in a specific domain (pervasive computing). This is forward engineering. By contrast, we reason about arbitrary legacy Java code, identifying resilient locations and modifying others.
{ "cite_N": [ "@cite_0", "@cite_5" ], "mid": [ "2164317885", "2000180481" ], "abstract": [ "Although it is common in large Java programs to rethrow exceptions, existing exception-flow analyses find only single exception-flow links, thus are unable to identify multiple-link exception propagation paths. This paper presents a new static analysis that, when combined with previous exception-flow analyses, computes chains of semantically-related exception-flow links, and thus reports entire exception propagation paths, instead of just discrete segments of them. These chains can be used 1) to show the error handling architecture of a system, 2) to assess the vulnerability of a single component and the whole system, 3) to support better testing of error recovery code, and 4) to facilitate the tracing of the root cause of a logged problem. Empirical findings and a case history for Tomcat show that a significant portion of the chains found in our benchmarks span multiple components, and thus are hard to find manually.", "The challenging nature of error handling constantly escalates as a growing number of environments consists of networked devices and software components. In these environments, errors cover a uniquely large spectrum of situations related to each layer ranging from hardware to distributed platforms, to software components. Handling errors becomes a daunting task for programmers, whose outcome is unpredictable. Scaling up error handling requires to raise the level of abstraction beyond the code level and the try-catch construct, approaching error handling at the software architecture level. We propose a novel approach that relies on an Architecture Description Language (ADL), which is extended with error-handling declarations. To further raise the level of abstraction, our approach revolves around a domain-specific architectural pattern commonly used in pervasive computing. 
Error handling is decomposed into components dedicated to platform-wide, error-recovery strategies. At the application level, descriptions of functional components include declarations dedicated to error handling. We have implemented a compiler for an ADL extended with error-handling declarations. It produces customized programming frameworks that drive and support the programming of error handling. Our approach has been validated with a variety of applications for building automation." ] }
1401.0191
1644911229
In software, there are the errors anticipated at specification and design time, those encountered at development and testing time, and those that happen in production mode yet were never anticipated. In this paper, we aim at reasoning about the ability of software to correctly handle unanticipated exceptions. We propose an algorithm, called short-circuit testing, which injects exceptions during test suite execution so as to simulate unanticipated errors. This algorithm collects data that is used as input for verifying two formal exception contracts that capture two resilience properties. Our evaluation on 9 test suites, with 78% line coverage on average, analyzes 241 executed catch blocks and shows that 101 of them expose resilience properties and that 84 can be transformed to be more resilient.
Zhang and Elbaum @cite_7 have recently presented an approach that amplifies tests to validate exception handling. Their work has been a key source of inspiration for ours. Short-circuit testing is a kind of test amplification. While the technique is the same, the problem domain we explore is quite different: they focus on exceptions related to external resources, whereas we focus on any kind of exception in order to verify resilience contracts.
{ "cite_N": [ "@cite_7" ], "mid": [ "2104969543" ], "abstract": [ "Validating code handling exceptional behavior is difficult, particularly when dealing with external resources that may be noisy and unreliable, as it requires: 1) the systematic exploration of the space of exceptions that may be thrown by the external resources, and 2) the setup of the context to trigger specific patterns of exceptions. In this work we present an approach that addresses those difficulties by performing an exhaustive amplification of the space of exceptional behavior associated with an external resource that is exercised by a test suite. Each amplification attempts to expose a program exception handling construct to new behavior by mocking an external resource so that it returns normally or throws an exception following a predefined pattern. Our assessment of the approach indicates that it can be fully automated, is powerful enough to detect 65% of the faults reported in bug reports of this kind, and is precise enough that 77% of the detected anomalies correspond to faults fixed by the developers." ] }
1401.0116
1488338544
Multiple Kernel Learning (MKL) on Support Vector Machines (SVMs) has been a popular front of research in recent times due to its success in application problems like object categorization. This success is due to the fact that MKL has the ability to choose from a variety of feature kernels to identify the optimal kernel combination. But the initial formulation of MKL was only able to select the best of the features and missed out on many other informative kernels present. To overcome this, the Lp-norm based formulation was proposed by Kloft et al. This formulation is capable of choosing a non-sparse set of kernels through a control parameter p. Unfortunately, the parameter p does not correspond directly to the number of kernels selected. We have observed that stricter control over the number of kernels selected gives us an edge over these techniques in terms of classification accuracy and also helps us to tune the algorithms to the time requirements at hand. In this work, we propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can strictly control the number of kernels we wish to select. The CSKL formulation introduces a parameter t which directly corresponds to the number of kernels selected. It is important to note that a search in t space is finite and fast compared to a search in p space. We also provide an efficient reduced gradient descent based algorithm to solve the CSKL formulation, which is proven to converge. Through our experiments on the Caltech101 object categorization dataset, we show that one can achieve better accuracies than the previous formulations through the right choice of t.
Even though the details of sparse and non-sparse solutions have been explored, none of these formulations has explicit control over the sparsity of its solutions. As we demonstrate in the experiments section, strict control of sparsity is highly valuable. Hence we propose a formulation in which we can parametrically control the total number of kernels selected, together with an efficient reduced gradient descent based algorithm to solve it. We also show experimentally that our formulation can improve on state-of-the-art performance on the Caltech101 @cite_8 dataset for object categorization through strict control of sparsity.
{ "cite_N": [ "@cite_8" ], "mid": [ "2155904486" ], "abstract": [ "Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present a method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets." ] }
1401.0480
2952813759
Stack Overflow is the most popular community question answering (CQA) website for programmers on the web, with 2.05M users, 5.1M questions and 9.4M answers. Stack Overflow has explicit, detailed guidelines on how to post questions and an ebullient moderation community. Despite these precise communications and safeguards, questions posted on Stack Overflow can be extremely off topic or very poor in quality. Such questions can be deleted from Stack Overflow at the discretion of experienced community members and moderators. We present the first study of deleted questions on Stack Overflow. We divide our study into two parts: (i) characterization of deleted questions over approx. 5 years (2008-2013) of data, and (ii) prediction of deletion at the time of question creation. Our characterization study reveals multiple insights into the question deletion phenomenon. We observe a significant increase in the number of deleted questions over time. We find that it takes substantial time for a question to be voted for deletion, but once voted, the community takes swift action. We also see that question authors delete their questions to salvage reputation points. We notice some instances of accidental deletion of good-quality questions, but such questions are quickly voted to be undeleted. We discover a pyramidal structure of question quality on Stack Overflow and find that deleted questions lie at the bottom (lowest quality) of the pyramid. We also build a model to predict the deletion of a question at creation time. We experiment with 47 features based on User Profile, Community Generated, Question Content and Syntactic style, and report an accuracy of 66%. Our feature analysis reveals that all four categories of features are important for the prediction task. Our findings reveal important suggestions for content quality maintenance on community-based question answering websites.
Nasehi et al. analyze questions on Stack Overflow to understand what makes a good code example @cite_6 . They find nine attributes of good questions, such as concise code, links to extra resources, and inline documentation. Wang and Godfrey analyze iOS and Android developer questions on Stack Overflow to detect API usage obstacles @cite_22 . They use topic models to find a set of API classes in the iOS and Android documentation that were difficult for developers to understand. Asaduzzaman et al. analyze unanswered questions on Stack Overflow and use a machine learning classifier to predict such questions @cite_11 . They observe certain characteristics of unanswered questions, such as vagueness and homework questions. Allamanis and Sutton perform a topic modeling analysis on Stack Overflow questions to combine topics, types, and code @cite_19 . They find that programming languages are a mixture of concepts and that questions on Stack Overflow are concerned with the code example rather than the application domain. In contrast to the aforementioned work, ours specifically focuses on the quality of content on Stack Overflow.
{ "cite_N": [ "@cite_19", "@cite_22", "@cite_6", "@cite_11" ], "mid": [ "", "2087832061", "2051204868", "1974311367" ], "abstract": [ "", "Software frameworks provide sets of generic functionalities that can be later customized for a specific task. When developers invoke API methods in a framework, they often encounter obstacles in finding the correct usage of the API, let alone to employ best practices. Previous research addresses this line of questions by mining API usage patterns to induce API usage templates, by conducting and compiling interviews of developers, and by inferring correlations among APIs. In this paper, we analyze API-related posts regarding iOS and Android development from a Q&A Web site, stackoverflow.com. Assuming that API-related posts are primarily about API usage obstacles, we find several iOS and Android API classes that appear to be particularly likely to challenge developers, even after we factor out API usage hotspots, inferred by modelling API usage of open source iOS and Android applications. For each API with usage obstacles, we further apply a topic mining tool to posts that are tagged with the API, and we discover several repetitive scenarios in which API usage obstacles occur. We consider our work as a stepping stone towards understanding API usage challenges based on forum-based input from a multitude of developers, input that is prohibitively expensive to collect through interviews. Our method helps to motivate future research in API usage, and can allow designers of platforms - such as iOS and Android - to better understand the problems developers have in using their platforms, and to make corresponding improvements.", "Programmers learning how to use an API or a programming language often rely on code examples to support their learning activities. However, what makes for an effective code example remains an open question. Finding the characteristics of the effective examples is essential in improving the appropriateness of these learning aids. To help answer this question we have conducted a qualitative analysis of the questions and answers posted to a programming Q&A web site called StackOverflow. On StackOverflow answers can be voted on, indicating which answers were found helpful by users of the site. By analyzing these well-received answers we identified characteristics of effective examples. We found that the explanations accompanying examples are as important as the examples themselves. Our findings have implications for the way the API documentation and example set should be developed and evolved as well as the design of the tools assisting the development of these materials.", "Community-based question answering services accumulate large volumes of knowledge through the voluntary services of people across the globe. Stack Overflow is an example of such a service that targets developers and software engineers. In general, questions in Stack Overflow are answered in a very short time. However, we found that the number of unanswered questions has increased significantly in the past two years. Understanding why questions remain unanswered can help information seekers improve the quality of their questions, increase their chances of getting answers, and better decide when to use Stack Overflow services. In this paper, we mine data on unanswered questions from Stack Overflow. We then conduct a qualitative study to categorize unanswered questions, which reveals characteristics that would be difficult to find otherwise. Finally, we conduct an experiment to determine whether we can predict how long a question will remain unanswered in Stack Overflow." ] }
1401.0113
2951297428
We propose connectivity-preserving geometry images (CGIMs), which map a three-dimensional mesh onto a rectangular regular array of an image, such that the reconstructed mesh produces no sampling errors, but merely round-off errors. We obtain a V-matrix with respect to the original mesh, whose elements are vertices of the mesh, which intrinsically preserves the vertex-set and the connectivity of the original mesh in the sense of allowing round-off errors. We generate a CGIM array by using the Cartesian coordinates of corresponding vertices of the V-matrix. To reconstruct a mesh, we obtain a vertex-set and an edge-set by collecting all the elements with different pixels, and all different pairwise adjacent elements from the CGIM array respectively. Compared with traditional geometry images, CGIMs achieve minimum reconstruction errors with an efficient parametrization-free algorithm via elementary permutation techniques. We apply CGIMs to lossy compression of meshes, and the experimental results show that CGIMs perform well in reconstruction precision and detail preservation.
Research work on applications of GIMs includes mesh compression @cite_7 , smooth surface representation @cite_0 , face recognition @cite_14 , texture synthesis @cite_15 , and facial expression modeling @cite_4 . We do not describe this work in detail, so as to keep the focus on our own contribution.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_0", "@cite_15" ], "mid": [ "2129317914", "2124156145", "170704106", "", "2046818008" ], "abstract": [ "As the size of the available collections of 3D objects grows, database transactions become essential for their management with the key operation being retrieval (query). Large collections are also precategorized into classes so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intraclass retrieval tasks. We advocate that such intraclass problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a parameterized annotated model of the shape of the class objects, incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, is efficient for use with large databases, and is suitable for nonrigid object classes. We apply our method to the face recognition domain, one of the most challenging intraclass retrieval tasks. We used the Face Recognition Grand Challenge v2 database, yielding an average verification rate of 95.2 percent at a 10^-3 false accept rate. The latest results of our work can be found at http: www.cbl.uh.edu UR8D", "In this paper, we present a novel geometry video (GV) framework to model and compress 3-D facial expressions. GV bridges the gap of 3-D motion data and 2-D video, and provides a natural way to apply the well-studied video processing techniques to motion data processing. 
Our framework includes a set of algorithms to construct GVs, such as hole filling, geodesic-based face segmentation, expression-invariant parameterization (EIP), and GV compression. Our EIP algorithm can guarantee the exact correspondence of the salient features (eyes, mouth, and nose) in different frames, which leads to GVs with better spatial and temporal coherence than that of the conventional parameterization methods. By taking advantage of this feature, we also propose a new H.264 AVC-based progressive directional prediction scheme, which can provide further 10%-16% bitrate reductions compared to the original H.264 AVC applied for GV compression while maintaining good video quality. Our experimental results on real-world datasets demonstrate that GV is very effective for modeling the high-resolution 3-D expression data, thus providing an attractive way in expression information processing for gaming and movie industry.", "We recently introduced an algorithm for spherical parametrization and remeshing, which allows resampling of a genus-zero surface onto a regular 2D grid, a spherical geometry image. These geometry images offer several advantages for shape compression. First, simple extension rules extend the square image domain to cover the infinite plane, thereby providing a globally smooth surface parametrization. The 2D grid structure permits use of ordinary image wavelets, including higher-order wavelets with polynomial precision. The coarsest wavelets span the entire surface and thus encode the lowest frequencies of the shape. Finally, the compression and decompression algorithms operate on ordinary 2D arrays, and are thus ideally suited for hardware acceleration. 
In this paper, we detail two wavelet-based approaches for shape compression using spherical geometry images, and provide comparisons with previous compression schemes.", "", "In this paper, we present an automatic method which can transfer geometric textures from one object to another, and can apply a manually designed geometric texture to a model. Our method is based on geometry images. The key ideas in this method involve geometric texture extraction, boundary consistent texture synthesis, discretized orientation and scaling, and reconstruction of synthesized geometry. Compared to other methods, our approach is efficient and easy-to-implement, and produces results of high quality." ] }
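The CGIM reconstruction step described in the abstract above (collect the distinct elements of the array as vertices, and distinct adjacent pairs as edges) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the CGIM is given as a 2D array of coordinate tuples, and it uses 4-neighbourhood adjacency as a simplifying assumption.

```python
# Hedged sketch of CGIM mesh reconstruction: recover a vertex set and an
# edge set from a rectangular array whose cells hold vertex coordinates.
# Adjacent cells holding different coordinates contribute an edge.
# (4-neighbourhood adjacency is an assumption made for this example.)

def reconstruct_mesh(cgim):
    """cgim: 2D list of (x, y, z) tuples (the CGIM array)."""
    vertices = {}  # coordinate tuple -> vertex index
    edges = set()
    rows, cols = len(cgim), len(cgim[0])

    def vid(p):
        if p not in vertices:
            vertices[p] = len(vertices)
        return vertices[p]

    for i in range(rows):
        for j in range(cols):
            a = vid(cgim[i][j])
            # looking only right and down visits each adjacent pair once
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    b = vid(cgim[ni][nj])
                    if a != b:  # different pixels -> an edge of the mesh
                        edges.add((min(a, b), max(a, b)))
    return list(vertices), sorted(edges)
```

On a 2x2 array in which one coordinate appears twice, the duplicate cell collapses to a single vertex and contributes no self-edge, which is the connectivity-preserving behaviour the abstract describes.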
1401.0514
1551431154
We study the problem of building generative models of natural source code (NSC); that is, source code written and understood by humans. Our primary contribution is to describe a family of generative models for NSC that have three key properties: First, they incorporate both sequential and hierarchical structure. Second, we learn a distributed representation of source code elements. Finally, they integrate closely with a compiler, which allows leveraging compiler logic and abstractions when building structure into the model. We also develop an extension that includes more complex structure, refining how the model generates identifier tokens based on what variables are currently in scope. Our models can be learned efficiently, and we show empirically that including appropriate structure greatly improves the models, measured by the probability of generating test programs.
It is true in general that PPDAs and PCFGs are equivalent classes of distributions over tokens, although they are subject to different inductive biases @cite_10 . Yet, if the traversal variables of the models are latent and marginalized, then the resulting model is context free with respect to the tree.
{ "cite_N": [ "@cite_10" ], "mid": [ "2107771181" ], "abstract": [ "Both probabilistic context-free grammars (PCFGs) and shift-reduce probabilistic pushdown automata (PPDAs) have been used for language modeling and maximum likelihood parsing. We investigate the precise relationship between these two formalisms, showing that, while they define the same classes of probabilistic languages, they appear to impose different inductive biases." ] }
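To make the PCFG side of the comparison above concrete, a probabilistic context-free grammar defines a distribution over token strings by expanding nonterminals top-down according to production probabilities. The toy grammar below is invented for illustration; it is not taken from the cited work.

```python
import random

# A toy PCFG sampler. Each nonterminal maps to a list of
# (probability, right-hand side) pairs; symbols absent from the
# grammar are terminals. The grammar here is a made-up example.

GRAMMAR = {
    "EXPR": [(0.6, ["NUM"]), (0.4, ["EXPR", "+", "NUM"])],
    "NUM":  [(0.5, ["0"]), (0.5, ["1"])],
}

def sample(symbol, rng):
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    r, acc = rng.random(), 0.0
    for p, rhs in GRAMMAR[symbol]:
        acc += p
        if r <= acc:
            return [tok for s in rhs for tok in sample(s, rng)]
    # fall through only on floating-point round-off: use last production
    return [tok for s in GRAMMAR[symbol][-1][1] for tok in sample(s, rng)]

rng = random.Random(0)
tokens = sample("EXPR", rng)  # a token sequence drawn from the grammar
```

Every derivation of EXPR yields an alternating sequence NUM (+ NUM)*, so a sampled string always has an odd number of tokens; a shift-reduce PPDA would generate the same strings left-to-right with a stack, which is the equivalence the cited paper makes precise.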
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
In a recent survey, @cite_22 introduced a benchmark and an evaluation methodology for algorithms designed to predict interest points in 3D. The benchmark comprises 43 triangular meshes and the associated paper evaluated the performance of six algorithms @cite_25 @cite_19 @cite_7 @cite_9 @cite_11 @cite_27 in interest point detection. Since we also use this benchmark, we focus our attention here on these six methods. Other relevant methods include @cite_13 @cite_29 @cite_26 @cite_21 @cite_12 @cite_16 @cite_15 . We refer readers to recent surveys @cite_8 @cite_20 @cite_22 @cite_0 @cite_23 for more details.
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_29", "@cite_21", "@cite_20", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_15", "@cite_16", "@cite_13", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2099907898", "2031878977", "2117183049", "2136020167", "2100657858", "2010209818", "2082970905", "", "", "2107216992", "2060890058", "2072492023", "", "1989625560", "2025062188", "", "2063513338", "2020682184" ], "abstract": [ "We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, maintaining at the same time a high degree of performance for real 3D sensed data that is acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using the locality sensitive hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probability of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query", "In this paper, we present an evaluation strategy based on human-generated ground truth to measure the performance of 3D interest point detection techniques. 
We provide quantitative evaluation measures that relate automatically detected interest points to human-marked points, which were collected through a web-based application. We give visual demonstrations and a discussion on the results of the subjective experiments. We use a voting-based method to construct ground truth for 3D models and propose three evaluation measures, namely False Positive and False Negative Errors, and Weighted Miss Error to compare interest point detection algorithms.", "This paper proposes new methodology for the detection and matching of salient points over several views of an object. The process is composed by three main phases. In the first step, detection is carried out by adopting a new perceptually-inspired 3D saliency measure. Such measure allows the detection of few sparse salient points that characterize distinctive portions of the surface. In the second step, a statistical learning approach is considered to describe salient points across different views. Each salient point is modelled by a Hidden Markov Model (HMM), which is trained in an unsupervised way by using contextual 3D neighborhood information, thus providing a robust and invariant point signature. Finally, in the third step, matching among points of different views is performed by evaluating a pairwise similarity measure among HMMs. An extensive and comparative experimental session has been carried out, considering real objects acquired by a 3D scanner from different points of view, where objects come from standard 3D databases. Results are promising, as the detection of salient points is reliable, and the matching is robust and accurate.", "3D object recognition from local features is robust to occlusions and clutter. However, local features must be extracted from a small set of feature rich keypoints to avoid computational complexity and ambiguous features. We present an algorithm for the detection of such keypoints on 3D models and partial views of objects. 
The keypoints are highly repeatable between partial views of an object and its complete 3D model. We also propose a quality measure to rank the keypoints and select the best ones for extracting local features. Keypoints are identified at locations where a unique local 3D coordinate basis can be derived from the underlying surface in order to extract invariant features. We also propose an automatic scale selection technique for extracting multi-scale and scale invariant features to match objects at different unknown scales. Features are projected to a PCA subspace and matched to find correspondences between a database and query object. Each pair of matching features gives a transformation that aligns the query and database object. These transformations are clustered and the biggest cluster is used to identify the query object. Experiments on a public database revealed that the proposed quality measure relates correctly to the repeatability of keypoints and the multi-scale features have a recognition rate of over 95% for up to 80% occluded objects.", "We propose a novel point signature based on the properties of the heat diffusion process on a shape. Our signature, called the Heat Kernel Signature (or HKS), is obtained by restricting the well-known heat kernel to the temporal domain. Remarkably we show that under certain mild assumptions, HKS captures all of the information contained in the heat kernel, and characterizes the shape up to isometry. This means that the restriction to the temporal domain, on the one hand, makes HKS much more concise and easily commensurable, while on the other hand, it preserves all of the information about the intrinsic geometry of the shape. In addition, HKS inherits many useful properties from the heat kernel, which means, in particular, that it is stable under perturbations of the shape. 
Our signature also provides a natural and efficiently computable multi-scale way to capture information about neighborhoods of a given point, which can be extremely useful in many applications. To demonstrate the practical relevance of our signature, we present several methods for non-rigid multi-scale matching based on the HKS and use it to detect repeated structure within the same shape and across a collection of shapes.", "This article introduces a method for partial matching of surfaces represented by triangular meshes. Our method matches surface regions that are numerically and topologically dissimilar, but approximately similar regions. We introduce novel local surface descriptors which efficiently represent the geometry of local regions of the surface. The descriptors are defined independently of the underlying triangulation, and form a compatible representation that allows matching of surfaces with different triangulations. To cope with the combinatorial complexity of partial matching of large meshes, we introduce the abstraction of salient geometric features and present a method to construct them. A salient geometric feature is a compound high-level feature of nontrivial local shapes. We show that a relatively small number of such salient geometric features characterizes the surface well for various similarity applications. Matching salient geometric features is based on indexing rotation-invariant features and a voting scheme accelerated by geometric hashing. We demonstrate the effectiveness of our method with a number of applications, such as computing self-similarity, alignments, and subparts similarity.", "An algorithm is proposed for 3D object representation using generic 3D features which are transformation and scale invariant. Descriptive 3D features and their relations are used to construct a graphical model for the object which is later trained and then used for detection purposes. 
Descriptive 3D features are the fundamental structures which are extracted from the surface of the 3D scanner output. This surface is described by mean and Gaussian curvature values at every data point at various scales and a scale-space search is performed in order to extract the fundamental structures and to estimate the location and the scale of each fundamental structure.", "", "", "Three-dimensional geometric data play fundamental roles in many computer vision applications. However, their scale-dependent nature, i.e. the relative variation in the spatial extents of local geometric structures, is often overlooked. In this paper we present a comprehensive framework for exploiting this 3D geometric scale variability. Specifically, we focus on detecting scale-dependent geometric features on triangular mesh models of arbitrary topology. The key idea of our approach is to analyze the geometric scale variability of a given 3D model in the scale-space of a dense and regular 2D representation of its surface geometry encoded by the surface normals. We derive novel corner and edge detectors, as well as an automatic scale selection method, that acts upon this representation to detect salient geometric features and determine their intrinsic scales. We evaluate the effectiveness and robustness of our method on a number of models of different topology. The results show that the resulting scale-dependent geometric feature set provides a reliable basis for constructing a rich but concise representation of the geometric structure at hand.", "In this paper we describe a new formulation for the 3D salient local features based on the voxel grid inspired by the Scale Invariant Feature Transform (SIFT). We use it to identify the salient keypoints (invariant points) on a 3D voxelized model and calculate invariant 3D local feature descriptors at these keypoints. We then use the bag of words approach on the 3D local features to represent the 3D models for shape retrieval. 
The advantages of the method are that it can be applied to rigid as well as to articulated and deformable 3D models. Finally, this approach is applied for 3D Shape Retrieval on the McGill articulated shape benchmark and then the retrieval results are presented and compared to other methods.", "This paper presents the first performance evaluation of interest points on scalar volumetric data. Such data encodes 3D shape, a fundamental property of objects. The use of another such property, texture (i.e. 2D surface colouration), or appearance, for object detection, recognition and registration has been well studied; 3D shape less so. However, the increasing prevalence of 3D shape acquisition techniques and the diminishing returns to be had from appearance alone have seen a surge in 3D shape-based methods. In this work, we investigate the performance of several state of the art interest points detectors in volumetric data, in terms of repeatability, number and nature of interest points. Such methods form the first step in many shape-based applications. Our detailed comparison, with both quantitative and qualitative measures on synthetic and real 3D data, both point-based and volumetric, aids readers in selecting a method suitable for their application.", "", "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. 
We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.", "We present an algorithm for the automatic alignment of two 3D shapes (data and model), without any assumptions about their initial positions. The algorithm computes for each surface point a descriptor based on local geometry that is robust to noise. A small number of feature points are automatically picked from the data shape according to the uniqueness of the descriptor value at the point. For each feature point on the data, we use the descriptor values of the model to find potential corresponding points. We then develop a fast branch-and-bound algorithm based on distance matrix comparisons to select the optimal correspondence set and bring the two shapes into a coarse alignment. The result of our alignment algorithm is used as the initialization to ICP (iterative closest point) and its variants for fine registration of the data to the model. Our algorithm can be used for matching shapes that overlap only over parts of their extent, for building models from partial range scans, as well as for simple symmetry detection, and for matching shapes undergoing articulated motion.", "", "Selecting the most important regions of a surface is useful for shape matching and a variety of applications in computer graphics and geometric modeling. While previous research has analyzed geometric properties of meshes in isolation, we select regions that distinguish a shape from objects of a different type. Our approach to analyzing distinctive regions is based on performing a shape-based search using each region as a query into a database. Distinctive regions of a surface have shape consistent with objects of the same type and different from objects of other types. 
We demonstrate the utility of detecting distinctive surface regions for shape matching and other graphics applications including mesh visualization, icon generation, and mesh simplification.", "With the increasing amount of 3D data and the ability of capture devices to produce low-cost multimedia data, the capability to select relevant information has become an interesting research field. In 3D objects, the aim is to detect a few salient structures which can be used, instead of the whole object, for applications like object registration, retrieval, and mesh simplification. In this paper, we present an interest points detector for 3D objects based on Harris operator, which has been used with good results in computer vision applications. We propose an adaptive technique to determine the neighborhood of a vertex, over which the Harris response on that vertex is calculated. Our method is robust to several transformations, which can be seen in the high repeatability values obtained using the SHREC feature detection and description benchmark. In addition, we show that Harris 3D outperforms the results obtained by recent effective techniques such as Heat Kernel Signatures." ] }
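The abstract above handles the imbalance between interest and non-interest points by re-sampling the training set. A minimal sketch of one common re-sampling strategy, undersampling the majority class, is shown below; the data and the choice of undersampling are illustrative assumptions, not the paper's exact procedure.

```python
import random

# Hedged sketch: balance a binary training set by undersampling the
# majority class before fitting any classifier (e.g. a random forest).
# X is a list of feature vectors, y a parallel list of 0/1 labels.

def balance_by_undersampling(X, y, seed=0):
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    return [X[i] for i in kept], [y[i] for i in kept]
```

After this step both classes contribute equally to training, so the classifier's decision threshold is no longer dominated by the prior of the non-interest class.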
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
* Scale Dependent Corners. Novatnack and Nishino @cite_19 measure the geometric scale variability of a 3D mesh on a 2D representation of the surface geometry given by its normal and distortion maps, which can be obtained by unwrapping the surface of the model onto a 2D plane. A geometric scale-space, which encodes the evolution of the surface normals as the 3D model is gradually smoothed, is constructed, and interest points are extracted as points with high curvature at multiple scales.
{ "cite_N": [ "@cite_19" ], "mid": [ "2107216992" ], "abstract": [ "Three-dimensional geometric data play fundamental roles in many computer vision applications. However, their scale-dependent nature, i.e. the relative variation in the spatial extents of local geometric structures, is often overlooked. In this paper we present a comprehensive framework for exploiting this 3D geometric scale variability. Specifically, we focus on detecting scale-dependent geometric features on triangular mesh models of arbitrary topology. The key idea of our approach is to analyze the geometric scale variability of a given 3D model in the scale-space of a dense and regular 2D representation of its surface geometry encoded by the surface normals. We derive novel corner and edge detectors, as well as an automatic scale selection method, that acts upon this representation to detect salient geometric features and determine their intrinsic scales. We evaluate the effectiveness and robustness of our method on a number of models of different topology. The results show that the resulting scale-dependent geometric feature set provides a reliable basis for constructing a rich but concise representation of the geometric structure at hand." ] }
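The scale-space idea described above (smooth progressively, measure a curvature-like response at each scale, and keep points that respond strongly at every scale) can be illustrated on a 1-D signal. This is only a schematic analogue of the method, not the normal-map algorithm of the cited work.

```python
# Hedged 1-D analogue of multi-scale corner detection: repeated
# binomial smoothing approximates a Gaussian scale-space, the discrete
# second difference stands in for curvature, and a point survives only
# if it is a positive local maximum of that response at every scale.

def smooth(signal):
    n = len(signal)
    return [
        0.25 * signal[max(i - 1, 0)] + 0.5 * signal[i] + 0.25 * signal[min(i + 1, n - 1)]
        for i in range(n)
    ]

def curvature(signal):
    n = len(signal)
    return [
        abs(signal[max(i - 1, 0)] - 2 * signal[i] + signal[min(i + 1, n - 1)])
        for i in range(n)
    ]

def multiscale_corners(signal, n_scales=3):
    corners = set(range(1, len(signal) - 1))  # interior points only
    s = signal
    for _ in range(n_scales):
        c = curvature(s)
        corners = {
            i for i in corners
            if c[i] >= c[i - 1] and c[i] >= c[i + 1] and c[i] > 0
        }
        s = smooth(s)
    return sorted(corners)
```

A sharp spike keeps a strong response as smoothing widens it, while noise-like wiggles lose their local-maximum status after a scale or two, which is the behaviour the multi-scale criterion is meant to capture.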
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
* Salient Points. @cite_7 also adopt a multi-scale approach. DoG filters are applied to vertex coordinates to compute a displacement vector of each vertex at every scale. The displacement vectors are then projected onto the normals of the vertices, producing a "scale map" for each scale. Interest points are extracted among the local maxima of the scale maps.
{ "cite_N": [ "@cite_7" ], "mid": [ "2117183049" ], "abstract": [ "This paper proposes new methodology for the detection and matching of salient points over several views of an object. The process is composed by three main phases. In the first step, detection is carried out by adopting a new perceptually-inspired 3D saliency measure. Such measure allows the detection of few sparse salient points that characterize distinctive portions of the surface. In the second step, a statistical learning approach is considered to describe salient points across different views. Each salient point is modelled by a Hidden Markov Model (HMM), which is trained in an unsupervised way by using contextual 3D neighborhood information, thus providing a robust and invariant point signature. Finally, in the third step, matching among points of different views is performed by evaluating a pairwise similarity measure among HMMs. An extensive and comparative experimental session has been carried out, considering real objects acquired by a 3D scanner from different points of view, where objects come from standard 3D databases. Results are promising, as the detection of salient points is reliable, and the matching is robust and accurate." ] }
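The per-vertex "scale map" computation described above can be sketched as a projection of a difference-of-Gaussians displacement onto the vertex normal. This is a simplification for illustration: the smoothed coordinate lists and normals below are stand-ins, and a real mesh would compute the smoothing over vertex neighbourhoods rather than receive it precomputed.

```python
# Hedged sketch of one DoG saliency step: the displacement between two
# smoothings of the vertex coordinates, projected onto the vertex
# normal, gives the per-vertex saliency value at that scale.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale_map(smoothed_narrow, smoothed_wide, normals):
    """Per-vertex |DoG displacement . normal| for one scale."""
    saliency = []
    for v1, v2, n in zip(smoothed_narrow, smoothed_wide, normals):
        displacement = [a - b for a, b in zip(v1, v2)]  # DoG of coordinates
        saliency.append(abs(dot(displacement, n)))
    return saliency
```

Tangential drift of a vertex between the two smoothings projects to near zero, so only displacement along the normal, i.e. actual geometric detail, registers as salient.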
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
* Heat Kernel Signature. @cite_9 apply the Laplace-Beltrami operator over the mesh to obtain its Heat Kernel Signature (HKS). The HKS captures neighborhood structure properties which are manifested during the heat diffusion process on the surface model and which are invariant to isometric transformations. The local maxima of the HKS are selected as the interest points of the model.
{ "cite_N": [ "@cite_9" ], "mid": [ "2100657858" ], "abstract": [ "We propose a novel point signature based on the properties of the heat diffusion process on a shape. Our signature, called the Heat Kernel Signature (or HKS), is obtained by restricting the well-known heat kernel to the temporal domain. Remarkably we show that under certain mild assumptions, HKS captures all of the information contained in the heat kernel, and characterizes the shape up to isometry. This means that the restriction to the temporal domain, on the one hand, makes HKS much more concise and easily commensurable, while on the other hand, it preserves all of the information about the intrinsic geometry of the shape. In addition, HKS inherits many useful properties from the heat kernel, which means, in particular, that it is stable under perturbations of the shape. Our signature also provides a natural and efficiently computable multi-scale way to capture information about neighborhoods of a given point, which can be extremely useful in many applications. To demonstrate the practical relevance of our signature, we present several methods for non-rigid multi-scale matching based on the HKS and use it to detect repeated structure within the same shape and across a collection of shapes." ] }
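The Heat Kernel Signature above has the closed form HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, where (lambda_i, phi_i) are eigenpairs of the Laplace-Beltrami operator. The sketch below uses a simple graph Laplacian as a stand-in for the mesh operator; that substitution is an assumption made for this example.

```python
import numpy as np

# Hedged sketch of the Heat Kernel Signature on a graph Laplacian
# (standing in for the Laplace-Beltrami operator of a mesh):
# HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2.

def heat_kernel_signature(adjacency, times):
    adjacency = np.asarray(adjacency, dtype=float)
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    # rows: vertices, columns: diffusion times
    return np.array([
        (np.exp(-eigvals * t) * eigvecs ** 2).sum(axis=1) for t in times
    ]).T
```

At t = 0 the signature is 1 at every vertex (the eigenvectors are orthonormal), and as t grows it converges to 1/n everywhere, matching the intuition that small t captures local structure and large t the global shape; interest points are then taken as local maxima of the signature over the surface.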
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
* 3D Harris. Sipiran and Bustos @cite_11 generalized the Harris and Stephens corner detector @cite_5 to 3D. The computation is now performed on the rings of a vertex, which play the role of neighboring pixels. A quadratic surface is fitted to the points around each vertex. This enables the computation of a filter similar to the Harris operator, the maximal responses of which are selected as interest points.
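A hedged sketch of the per-vertex computation just described: PCA-align the neighborhood, fit a quadratic patch, and score it with a Harris-style operator. The structure matrix below is a gradient-autocorrelation simplification of the paper's Gaussian-weighted derivative integrals, and the function name is ours.

```python
import numpy as np

def harris_response_3d(neighbors, k=0.04):
    """Illustrative Harris-style response for one mesh vertex, given its
    ring neighborhood as an (n, 3) array. PCA-align the patch, fit the
    quadratic height field z = a x^2 + b x y + c y^2 + d x + e y + f,
    then score the gradient-autocorrelation matrix of the fitted patch."""
    p = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(p, full_matrices=False)
    p = p @ vt.T                          # last principal axis ~ surface normal
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    a, b, c, d, e, _ = np.linalg.lstsq(A, z, rcond=None)[0]
    fx = 2 * a * x + b * y + d            # patch gradients at the samples
    fy = b * x + 2 * c * y + e
    E = np.array([[np.mean(fx * fx), np.mean(fx * fy)],
                  [np.mean(fx * fy), np.mean(fy * fy)]])
    return np.linalg.det(E) - k * np.trace(E) ** 2
```

Vertices whose response is a local maximum over the mesh would then be kept as interest points, as in the paper.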
{ "cite_N": [ "@cite_5", "@cite_11" ], "mid": [ "2111308925", "2020682184" ], "abstract": [ "The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.", "With the increasing amount of 3D data and the ability of capture devices to produce low-cost multimedia data, the capability to select relevant information has become an interesting research field. In 3D objects, the aim is to detect a few salient structures which can be used, instead of the whole object, for applications like object registration, retrieval, and mesh simplification. In this paper, we present an interest points detector for 3D objects based on Harris operator, which has been used with good results in computer vision applications. We propose an adaptive technique to determine the neighborhood of a vertex, over which the Harris response on that vertex is calculated. Our method is robust to several transformations, which can be seen in the high repeatability values obtained using the SHREC feature detection and description benchmark. In addition, we show that Harris 3D outperforms the results obtained by recent effective techniques such as Heat Kernel Signatures." ] }
1312.6826
2951440476
The task of detecting the interest points in 3D meshes has typically been handled by geometric methods. These methods, while greatly describing human preference, can be ill-equipped for handling the variety and subjectivity in human responses. Different tasks have different requirements for interest point detection; some tasks may necessitate high precision while other tasks may require high recall. Sometimes points with high curvature may be desirable, while in other cases high curvature may be an indication of noise. Geometric methods lack the required flexibility to adapt to such changes. As a consequence, interest point detection seems to be well suited for machine learning methods that can be trained to match the criteria applied on the annotated training data. In this paper, we formulate interest point detection as a supervised binary classification problem using a random forest as our classifier. Among other challenges, we are faced with an imbalanced learning problem due to the substantial difference in the priors between interest and non-interest points. We address this by re-sampling the training set. We validate the accuracy of our method and compare our results to those of five state of the art methods on a new, standard benchmark.
* 3D SIFT. Godil and Wagan @cite_27 initially convert the mesh model into a voxel representation. Then, 3D Gaussian filters are applied to the voxel model at various scales as in the standard SIFT algorithm. DoG filters are used to compute the difference between the original model and the model at a particular scale, and their extrema are taken as candidate interest points. The final set of interest points consists of those that also lie on the surface of the 3D object.
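The voxel-based DoG step can be sketched as follows (our simplification: a single scale pair and the strongest absolute response, instead of the full per-scale extrema test and the final surface check):

```python
import numpy as np

def gaussian_blur_3d(vol, sigma):
    """Separable 3D Gaussian smoothing with a truncated 1D kernel."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    kern = np.exp(-t ** 2 / (2 * sigma ** 2))
    kern /= kern.sum()
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(np.convolve, axis, out, kern, mode="same")
    return out

def strongest_dog_voxel(vol, s1=1.0, s2=1.6):
    """Difference-of-Gaussians between two scales; return the voxel with
    the largest absolute response (a stand-in for true DoG extrema)."""
    dog = gaussian_blur_3d(vol, s2) - gaussian_blur_3d(vol, s1)
    return np.unravel_index(np.argmax(np.abs(dog)), vol.shape)
```

In the full method, candidates found this way across several scales would additionally be filtered to those lying on the object surface.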
{ "cite_N": [ "@cite_27" ], "mid": [ "2060890058" ], "abstract": [ "In this paper we describe a new formulation for the 3D salient local features based on the voxel grid inspired by the Scale Invariant Feature Transform (SIFT). We use it to identify the salient keypoints (invariant points) on a 3D voxelized model and calculate invariant 3D local feature descriptors at these keypoints. We then use the bag of words approach on the 3D local features to represent the 3D models for shape retrieval. The advantages of the method are that it can be applied to rigid as well as to articulated and deformable 3D models. Finally, this approach is applied for 3D Shape Retrieval on the McGill articulated shape benchmark and then the retrieval results are presented and compared to other methods." ] }
1312.6675
1556699071
Social media and social networks have already woven themselves into the very fabric of everyday life. This results in a dramatic increase of social data capturing various relations between the users and their associated artifacts, both in online networks and the real world using ubiquitous devices. In this work, we consider social interaction networks from a data mining perspective - also with a special focus on real-world face-to-face contact networks: We combine data mining and social network analysis techniques for examining the networks in order to improve our understanding of the data, the modeled behavior, and its underlying emergent processes. Furthermore, we adapt, extend and apply known predictive data mining algorithms on social interaction networks. Additionally, we present novel methods for descriptive data mining for uncovering and extracting relations and patterns for hypothesis generation and exploration, in order to provide characteristic information about the data and networks. The presented approaches and methods aim at extracting valuable knowledge for enhancing the understanding of the respective data, and for supporting the users of the respective systems. We consider data from several social systems, like the social bookmarking system BibSonomy, the social resource sharing system flickr, and ubiquitous social systems: Specifically, we focus on data from the social conference guidance system Conferator and the social group interaction system MyGroup. This work first gives a short introduction into social interaction networks, before we describe several analysis results in the context of online social networks and real-world face-to-face contact networks. Next, we present predictive data mining methods, i.e., for localization, recommendation and link prediction. After that, we present novel descriptive data mining methods for mining communities and patterns.
Overall, data mining in the context of social interaction networks concerns core elements of data mining and knowledge discovery itself, @cite_65 , but also includes techniques from social network analysis, @cite_50 @cite_15 , as well as mining social media, @cite_27 @cite_113 , complex network analytics @cite_60 @cite_69 @cite_84 @cite_73 @cite_128 , and mining the ubiquitous web @cite_92 @cite_59 @cite_102 .
{ "cite_N": [ "@cite_69", "@cite_128", "@cite_60", "@cite_92", "@cite_65", "@cite_102", "@cite_84", "@cite_113", "@cite_27", "@cite_50", "@cite_59", "@cite_15", "@cite_73" ], "mid": [ "2115022330", "2164727176", "2114581929", "2074598806", "2186428165", "2043037779", "2148606196", "49288413", "2046011722", "2061901927", "1933949522", "2135844668", "2124637492" ], "abstract": [ "Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems.", "The study of networks pervades all of science, from neurobiology to statistical physics. 
The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems — be they neurons, power stations or lasers — will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.", "", "Web intelligence offers a new direction for scientific research and development, pushing technology toward manipulating the meaning of data and creating a distributed intelligence that can actually get things done. WI explores the fundamental and practical impact that artificial intelligence and advanced information technology will have on the next generation of Web-empowered systems, services, and environments.The Web significantly affects both academic research and everyday life, revolutionizing how we gather, store, process, present, share, and use information. It offers opportunities and challenges in many areas, including business, commerce, finance, and research and development.The next-generation Web will go beyond improved information search and knowledge queries and will help people achieve better ways of living, working, playing, and learning. To fulfill its potential, the intelligent Web's design and development must incorporate knowledge from existing disciplines, such as artificial intelligence and information technology, in a totally new domain.", "The book Knowledge Discovery in Databases, edited by Piatetsky-Shapiro and Frawley [PSF91], is an early collection of research papers on knowledge discovery from data. 
The book Advances in Knowledge Discovery and Data Mining, edited by Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy [FPSSe96], is a collection of later research results on knowledge discovery and data mining. There have been many data mining books published in recent years, including Predictive Data Mining by Weiss and Indurkhya [WI98], Data Mining Solutions: Methods and Tools for Solving Real-World Problems by Westphal and Blaxton [WB98], Mastering Data Mining: The Art and Science of Customer Relationship Management by Berry and Linoff [BL99], Building Data Mining Applications for CRM by Berson, Smith, and Thearling [BST99], Data Mining: Practical Machine Learning Tools and Techniques by Witten and Frank [WF05], Principles of Data Mining (Adaptive Computation and Machine Learning) by Hand, Mannila, and Smyth [HMS01], The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman [HTF01], Data Mining: Introductory and Advanced Topics by Dunham, and Data Mining: Multimedia, Soft Computing, and Bioinformatics by Mitra and Acharya [MA03]. There are also books containing collections of papers on particular aspects of knowledge discovery, such as Machine Learning and Data Mining: Methods and Applications edited by Michalski, Bratko, and Kubat [MBK98], and Relational Data Mining edited by Dzeroski and Lavrac [De01], as well as many tutorial notes on data mining in major database, data mining and machine learning conferences.", "People are on the verge of an era in which the human experience can be enriched in ways they couldn't have imagined two decades ago. Rather than depending on a single technology, people progressed with several whose semantics-empowered convergence and integration will enable us to capture, understand, and reapply human knowledge and intellect. Such capabilities will consequently elevate our technological ability to deal with the abstractions, concepts, and actions that characterize human experiences.
This will herald computing for human experience (CHE). The CHE vision is built on a suite of technologies that serves, assists, and cooperates with humans to nondestructively and unobtrusively complement and enrich normal activities, with minimal explicit concern or effort on the humans' part. CHE will anticipate when to gather and apply relevant knowledge and intelligence. It will enable human experiences that are intertwined with the physical, conceptual, and experiential worlds (emotions, sentiments, and so on), rather than immerse humans in cyber worlds for a specific task. Instead of focusing on humans interacting with a technology or system, CHE will feature technology-rich human surroundings that often initiate interactions. Interaction will be more sophisticated and seamless compared to today's precursors such as automotive accident-avoidance systems. Many components of and ideas associated with the CHE vision have been around for a while. Here, the author discusses some of the most important tipping points that he believes will make CHE a reality within a decade.", "Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.", "Facebook, Twitter, and LinkedIn generate a tremendous amount of valuable social data, but how can you find out who's making connections with social media, what they're talking about, or where they're located? This concise and practical book shows you how to answer these questions and more.
You'll learn how to combine social web data, analysis techniques, and visualization to help you find what you've been looking for in the social haystack, as well as useful information you didn't know existed.Each standalone chapter introduces techniques for mining data in different areas of the social Web, including blogs and email. All you need to get started is a programming background and a willingness to learn basic Python tools.Get a straightforward synopsis of the social web landscape Use adaptable scripts on GitHub to harvest data from social network APIs such as Twitter, Facebook, and LinkedIn Learn how to employ easy-to-use Python tools to slice and dice the data you collect Explore social connections in microformats with the XHTML Friends Network Apply advanced mining techniques such as TF-IDF, cosine similarity, collocation analysis, document summarization, and clique detection Build interactive visualizations with web technologies based upon HTML5 and JavaScript toolkits \"Let Matthew Russell serve as your guide to working with social data sets old (email, blogs) and new (Twitter, LinkedIn, Facebook). Mining the Social Web is a natural successor to Programming Collective Intelligence: a practical, hands-on approach to hacking on data from the social Web with Python.\" --Jeff Hammerbacher, Chief Scientist, Cloudera \"A rich, compact, useful, practical introduction to a galaxy of tools, techniques, and theories for exploring structured and unstructured data.\" --Alex Martelli, Senior Staff Engineer, Google", "This book, from a data mining perspective, introduces characteristics of social media, reviews representative tasks of computing with social media, and illustrates associated challenges. It introduces basic concepts, presents state-of-the-art algorithms with easy-to-understand examples, and recommends effective evaluation methods. 
In particular, we discuss graph-based community detection techniques and many important extensions that handle dynamic, heterogeneous networks in social media. We also demonstrate how discovered patterns of communities can be used for social media mining. The concepts, algorithms, and methods presented in this lecture can help harness the power of social media and support building socially-intelligent systems. This book is an accessible introduction to the study of . It is essential reading for students, researchers, and practitioners in disciplines and applications where social media is a key source of data that piques our curiosity to understand, manage, innovate, and excel. This book is supported by additional materials, including lecture slides, the complete set of figures, key references, some toy data sets used in the book, and the source code of representative algorithms. The readers are encouraged to visit the book website http://dmml.asu.edu/cdm for the latest information.", "Part I. Introduction: Networks, Relations, and Structure: 1. Relations and networks in the social and behavioral sciences 2. Social network data: collection and application Part II. Mathematical Representations of Social Networks: 3. Notation 4. Graphs and matrixes Part III. Structural and Locational Properties: 5. Centrality, prestige, and related actor and group measures 6. Structural balance, clusterability, and transitivity 7. Cohesive subgroups 8. Affiliations, co-memberships, and overlapping subgroups Part IV. Roles and Positions: 9. Structural equivalence 10. Blockmodels 11. Relational algebras 12. Network positions and roles Part V. Dyadic and Triadic Methods: 13. Dyads 14. Triads Part VI. Statistical Dyadic Interaction Models: 15. Statistical analysis of single relational networks 16. Stochastic blockmodels and goodness-of-fit indices Part VII. Epilogue: 17.
Future directions.", "Today, we observe the amalgamation of the Social Web and the Mobile Web, which will ultimately lead to a Ubiquitous Web. The integration and aggregation of the different kinds of available data, and the extraction of useful knowledge and its representation has become an important challenge for researchers from the Semantic Web, Web 2.0, social network analysis and machine learning communities. We discuss the Ubiquitous Web vision, by addressing the challenge of bridging the gap between Web 2.0 and the Semantic Web, before widening the scope to mobile applications.", "We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied.", "The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi.
One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology." ] }
1312.6947
1490695611
Ontology Learning (OL) is the computational task of generating a knowledge base in the form of an ontology, given an unstructured corpus whose content is in natural language (NL). Several works can be found in this area, most of which are limited to statistical and lexico-syntactic pattern-matching techniques (light-weight OL). These techniques do not lead to very accurate learning, mostly because of several linguistic nuances in NL. Formal OL is an alternative (less explored) methodology where deep linguistic analysis is performed using theory and tools from computational linguistics to generate formal axioms and definitions, instead of simply inducing a taxonomy. In this paper we propose a “Description Logic (DL)” based formal OL framework for learning factual IS-A type sentences in English. We claim that the semantic construction of IS-A sentences is non-trivial, and hence that such sentences require special study in the context of OL before any truly formal OL can be proposed. We introduce a learner tool, called DLOLIS−A, that generates such ontologies in the OWL format. We have adopted “Gold Standard” based OL evaluation on the IS-A-rich WCL v.1.1 dataset and our own community-representative IS-A dataset. We observed significant improvement of DLOLIS−A when compared to the light-weight OL tool Text2Onto and the formal OL tool FRED.
There has been significant literature over the last decade on the problem of Ontology Learning (OL). Most of these works can be categorized into two approaches as discussed earlier: (i) , and (ii) . Light-weight ontology learning from text documents is arguably the most widely used approach in the field of OL @cite_32 . It can be further divided into two general approaches: (i) and (ii) .
{ "cite_N": [ "@cite_32" ], "mid": [ "2124436241" ], "abstract": [ "Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the Read Write Web coupled with the increasing demand for ontologies to power the Semantic Web have made (semi-)automatic ontology learning from text a very promising research area. This together with the advanced state in related areas, such as natural language processing, have fueled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future." ] }
1312.7006
2099020507
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
Mixture models and latent variable modeling are very broadly used in a wide array of contexts far beyond regression. Subspace clustering @cite_20 @cite_12 @cite_15 , Gaussian mixture models @cite_3 @cite_24 , and @math -means clustering are popular examples of unsupervised learning for mixture models. The most popular and most broadly implemented approach to mixture problems, including mixed regression, is the so-called Expectation-Maximization (EM) algorithm @cite_27 @cite_17 . In fact, EM has been used for mixed regression in various application domains @cite_25 @cite_30 . Despite its wide use, little is known about its performance beyond local convergence @cite_28 @cite_1 .
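For concreteness, a sketch of EM specialized to a two-component mixed linear regression: soft residual-based assignments alternate with weighted least squares. Consistent with the local-convergence caveat above, the initialization beta0 is left to the caller; the function is our illustration, not any cited implementation.

```python
import numpy as np

def em_mixed_regression(X, y, beta0, iters=50):
    """EM sketch for y_i ~ <x_i, beta_z> + noise, with hidden z in {0, 1}.
    beta0: (2, d) initial regressors -- EM is only locally convergent,
    so a good initialization matters."""
    n, _ = X.shape
    beta = np.array(beta0, dtype=float)
    sigma2 = np.var(y) + 1e-8
    for _ in range(iters):
        # E-step: responsibilities from per-component Gaussian residuals.
        resid = y[:, None] - X @ beta.T                   # (n, 2)
        logp = -resid ** 2 / (2 * sigma2)
        w = np.exp(logp - logp.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per component, then noise update.
        for c in range(2):
            sw = np.sqrt(w[:, c])
            beta[c] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        resid = y[:, None] - X @ beta.T
        sigma2 = float(np.sum(w * resid ** 2) / n) + 1e-12
    return beta, w
```

On well-separated, low-noise data and with an initialization in the basin of attraction, the iteration recovers both regressors; with a poor initialization it can stall at a local optimum, which is exactly the limitation discussed above.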
{ "cite_N": [ "@cite_30", "@cite_28", "@cite_1", "@cite_3", "@cite_24", "@cite_27", "@cite_15", "@cite_20", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "", "2053742104", "2962737134", "2397597991", "2137945041", "2049633694", "1781773254", "2003217181", "1503100111", "", "" ], "abstract": [ "", "Two convergence aspects of the EM algorithm are studied: (i) does the EM algorithm find a local maximum or a stationary value of the (incompletedata) likelihood function? (ii) does the sequence of parameter estimates generated by EM converge? Several convergence results are obtained under conditions that are applicable to many practical situations. Two useful special cases are: (a) if the unobserved complete-data specification can be described by a curved exponential family with compact parameter space, all the limit points of any EM sequence are stationary points of the likelihood function; (b) if the likelihood function is unimodal and a certain differentiability condition is satisfied, then any EM sequence converges to the unique maximum likelihood estimate. A list of key properties of the algorithm is included.", "We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM. Our analysis is divided into two parts: a treatment of these algorithms at the population level (in the limit of infinite data), followed by results that apply to updates based on a finite set of samples. First, we characterize the domain of attraction of any global maximizer of the population likelihood. This characterization is based on a novel view of the EM updates as a perturbed form of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed form of standard gradient ascent. Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples. 
We develop consequences of our general theory for three canonical examples of incomplete-data problems: mixture of Gaussians, mixture of regressions, and linear regression with covariates missing completely at random. In each case, our theory guarantees that with a suitable initialization, a relatively small number of EM (or gradient EM) steps will yield (with high probability) an estimate that is within statistical error of the MLE. We provide simulations to confirm this theoretically predicted behavior.", "", "While several papers have investigated computationally and statistically efficient methods for learning Gaussian mixtures, precise minimax bounds for their statistical performance as well as fundamental limits in high-dimensional settings are not well-understood. In this paper, we provide precise information theoretic bounds on the clustering accuracy and sample complexity of learning a mixture of two isotropic Gaussians in high dimensions under small mean separation. If there is a sparse subset of relevant dimensions that determine the mean separation, then the sample complexity only depends on the number of relevant dimensions and mean separation, and can be achieved by a simple computationally efficient procedure. Our results provide the first step of a theoretical basis for recent methods that combine feature selection and clustering.", "", "This paper considers the problem of subspace clustering under noise. Specifically, we study the behavior of Sparse Subspace Clustering (SSC) when either adversarial or random noise is added to the unlabelled input data points, which are assumed to lie in a union of low-dimensional subspaces. We show that a modified version of SSC is provably effective in correctly identifying the underlying subspaces, even with noisy data.
This extends the theoretical guarantees of this algorithm to the practical setting and provides justification for the success of SSC in a class of real applications.", "We propose a method based on sparse representation (SR) to cluster data drawn from multiple low-dimensional linear or affine subspaces embedded in a high-dimensional space. Our method is based on the fact that each point in a union of subspaces has an SR with respect to a dictionary formed by all other data points. In general, finding such an SR is NP-hard. Our key contribution is to show that, under mild assumptions, the SR can be obtained exactly by using l1 optimization. The segmentation of the data is obtained by applying spectral clustering to a similarity matrix built from this SR. Our method can handle noise, outliers as well as missing data. We apply our subspace clustering algorithm to the problem of segmenting multiple motions in video. Experiments on 167 video sequences show that our approach significantly outperforms state-of-the-art methods.", "Consider data (x1,y1),…,(xn,yn), where each xi may be vector valued, and the distribution of yi given xi is a mixture of linear regressions. This provides a generalization of mixture models which do not include covariates in the mixture formulation. This mixture of linear regressions formulation has appeared in the computer science literature under the name “Hierarchical Mixtures of Experts” model. This model has been considered from both frequentist and Bayesian viewpoints. We focus on the Bayesian formulation. Previously, estimation of the mixture of linear regression model has been done through straightforward Gibbs sampling with latent variables. This paper contributes to this field in three major areas. First, we provide a theoretical underpinning to the Bayesian implementation by demonstrating consistency of the posterior distribution.
This demonstration is done by extending results in Barron, Schervish and Wasserman (Annals of Statistics 27: 536–561, 1999) on bracketing entropy to the regression setting. Second, we demonstrate through examples that straightforward Gibbs sampling may fail to effectively explore the posterior distribution and provide alternative algorithms that are more accurate. Third, we demonstrate the usefulness of the mixture of linear regressions framework in Bayesian robust regression. The methods described in the paper are applied to two examples.", "", "" ] }
1312.7006
2099020507
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
One exception is the recent work in @cite_13 , which considers mixed regression in the noiseless setting: the authors propose an alternating minimization approach initialized by a grid search and show that it recovers the regressors with a sample complexity of @math . Extension to the noisy setting was very recently considered in @cite_1 . Focusing on the stochastic noise setting and the high-SNR regime (i.e., when @math ; cf. ), they show that the EM algorithm with good initialization achieves the error bound @math . Another notable exception is the work in @cite_31 , where EM is adapted to the high-dimensional sparse regression setting, in which the regressors are known to be sparse. The authors use EM to solve a likelihood function penalized for sparsity. A generalized EM approach achieves support recovery, though once restricted to that support, where the problem becomes a standard mixed regression problem, only convergence to a local optimum can be guaranteed.
{ "cite_N": [ "@cite_1", "@cite_31", "@cite_13" ], "mid": [ "2962737134", "2138142480", "2949960673" ], "abstract": [ "We develop a general framework for proving rigorous guarantees on the performance of the EM algorithm and a variant known as gradient EM. Our analysis is divided into two parts: a treatment of these algorithms at the population level (in the limit of infinite data), followed by results that apply to updates based on a finite set of samples. First, we characterize the domain of attraction of any global maximizer of the population likelihood. This characterization is based on a novel view of the EM updates as a perturbed form of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed form of standard gradient ascent. Leveraging this characterization, we then provide non-asymptotic guarantees on the EM and gradient EM algorithms when applied to a finite set of samples. We develop consequences of our general theory for three canonical examples of incompletedata problems: mixture of Gaussians, mixture of regressions, and linear regression with covariates missing completely at random. In each case, our theory guarantees that with a suitable initialization, a relatively small number of EM (or gradient EM) steps will yield (with high probability) an estimate that is within statistical error of the MLE. We provide simulations to confirm this theoretically predicted behavior.", "We consider a finite mixture of regressions (FMR) model for high-dimensional inhomogeneous data where the number of covariates may be much larger than sample size. We propose an l 1-penalized maximum likelihood estimator in an appropriate parameterization. This kind of estimation belongs to a class of problems where optimization and theory for non-convex functions is needed. This distinguishes itself very clearly from high-dimensional estimation with convex loss- or objective functions as, for example, with the Lasso in linear or generalized linear models. 
Mixture models represent a prime and important example where non-convexity arises.", "Mixed linear regression involves the recovery of two (or more) unknown vectors from unlabeled linear measurements; that is, where each sample comes from exactly one of the vectors, but we do not know which one. It is a classic problem, and the natural and empirically most popular approach to its solution has been the EM algorithm. As in other settings, this is prone to bad local minima; however, each iteration is very fast (alternating between guessing labels, and solving with those labels). In this paper we provide a new initialization procedure for EM, based on finding the leading two eigenvectors of an appropriate matrix. We then show that with this, a re-sampled version of the EM algorithm provably converges to the correct vectors, under natural assumptions on the sampling distribution, and with nearly optimal (unimprovable) sample complexity. This provides not only the first characterization of EM's performance, but also much lower sample complexity as compared to both standard (randomly initialized) EM, and other methods for this problem." ] }
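The EM iteration discussed in the passage above (good initialization, then alternating soft assignment and weighted least squares) can be sketched as follows. This is a minimal illustration assuming balanced mixing weights and Gaussian noise; it is not the grid-search or spectral initialization procedure of the cited papers, so a reasonable `beta_init` must be supplied.

```python
import numpy as np

def em_mixed_regression(X, y, beta_init, n_iters=50):
    """EM for a balanced two-component mixture of linear regressions.

    E-step: soft-assign each sample to the component that explains it
    better under Gaussian noise. M-step: weighted least squares per
    component, plus a shared noise-variance update."""
    n, d = X.shape
    beta = np.array(beta_init, dtype=float)      # shape (2, d)
    sigma2 = np.var(y) + 1e-12
    for _ in range(n_iters):
        res = y[:, None] - X @ beta.T            # (n, 2) residuals
        logp = -0.5 * res ** 2 / sigma2
        logp -= logp.max(axis=1, keepdims=True)
        w = np.exp(logp)
        w /= w.sum(axis=1, keepdims=True)        # responsibilities
        for k in range(2):                       # weighted least squares
            Xw = X * w[:, k:k + 1]
            beta[k] = np.linalg.solve(Xw.T @ X + 1e-9 * np.eye(d), Xw.T @ y)
        res = y[:, None] - X @ beta.T
        sigma2 = np.sum(w * res ** 2) / n + 1e-12
    return beta
```

Consistent with the local-convergence guarantees cited above, initializing near the true regressors lets the iteration refine them down to roughly the noise level.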
1312.7006
2099020507
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
Mixture models have recently been explored using the newly developed technology of tensors in @cite_4 @cite_3 . In @cite_9 , the authors consider a tensor-based approach, regressing @math against @math and then using tensor decomposition techniques to efficiently recover each @math . These methods are not limited to the mixture of only two models, as we are. Yet, the tensor approach requires @math samples, which is several orders of magnitude more than the @math that our work requires. As noted in their work, the higher sampling requirement of third-order tensors seems intrinsic.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_3" ], "mid": [ "2953030775", "2950741027", "2397597991" ], "abstract": [ "Discriminative latent-variable models are typically learned using EM or gradient-based optimization, which suffer from local optima. In this paper, we develop a new computationally efficient and provably consistent estimator for a mixture of linear regressions, a simple instance of a discriminative latent-variable model. Our approach relies on a low-rank linear regression to recover a symmetric tensor, which can be factorized into the parameters using a tensor power method. We prove rates of convergence for our estimator and provide an empirical evaluation illustrating its strengths relative to local optimization (EM).", "This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models---including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation---which exploits a certain tensor structure in their low-order observable moments (typically, of second- and third-order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.", "" ] }
1312.7006
2099020507
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
In this work we consider the setting with two mixture components. Many interesting applications have binary latent factors: gene mutation present/not, gender, healthy/sick individual, children/adult, etc.; see also the examples in @cite_25 . Theoretically, the minimax rate was previously unknown even in the two-component case. Extension to more than two components is of great interest.
{ "cite_N": [ "@cite_25" ], "mid": [ "1503100111" ], "abstract": [ "Consider data (x1,y1),…,(xn,yn), where each xi may be vector valued, and the distribution of yi given xi is a mixture of linear regressions. This provides a generalization of mixture models which do not include covariates in the mixture formulation. This mixture of linear regressions formulation has appeared in the computer science literature under the name “Hierarchical Mixtures of Experts” model. This model has been considered from both frequentist and Bayesian viewpoints. We focus on the Bayesian formulation. Previously, estimation of the mixture of linear regression model has been done through straightforward Gibbs sampling with latent variables. This paper contributes to this field in three major areas. First, we provide a theoretical underpinning to the Bayesian implementation by demonstrating consistency of the posterior distribution. This demonstration is done by extending results in Barron, Schervish and Wasserman (Annals of Statistics 27: 536–561, 1999) on bracketing entropy to the regression setting. Second, we demonstrate through examples that straightforward Gibbs sampling may fail to effectively explore the posterior distribution and provide alternative algorithms that are more accurate. Third, we demonstrate the usefulness of the mixture of linear regressions framework in Bayesian robust regression. The methods described in the paper are applied to two examples." ] }
1312.7006
2099020507
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our algorithm is information-theoretically optimal. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity.
Finally, we note that our focus is on estimating the regressors @math rather than identifying the hidden labels @math or predicting the response @math for future data points. The relationship between covariates and response is often as important as (and sometimes more important than) prediction. For example, the regressors may correspond to unknown signals or molecular structures, and the response-covariate pairs are linear measurements; here the regressors are themselves the object of interest. For many mixture problems, including clustering, identifying the labels accurately for all data points may be (statistically) impossible. Obtaining the regressors allows for an estimate of each label (see @cite_11 for a related setting).
{ "cite_N": [ "@cite_11" ], "mid": [ "2951471445" ], "abstract": [ "We consider a discriminative learning (regression) problem, whereby the regression function is a convex combination of k linear classifiers. Existing approaches are based on the EM algorithm, or similar techniques, without provable guarantees. We develop a simple method based on spectral techniques and a 'mirroring' trick, that discovers the subspace spanned by the classifiers' parameter vectors. Under a probabilistic assumption on the feature vector distribution, we prove that this approach has nearly optimal statistical efficiency." ] }
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
In the area of numerical linear algebra, the column pivoting method exploited by the QR decomposition @cite_32 permutes the columns of the matrix based on their norms to enhance the numerical stability of the QR decomposition algorithm. The first @math columns of the permuted matrix can be directly selected as representative columns. The Rank-Revealing QR (RRQR) decomposition @cite_28 @cite_15 @cite_44 @cite_16 is a category of QR decomposition methods which permute the columns of the data matrix while imposing additional constraints on the singular values of the two sub-matrices of the upper-triangular matrix @math corresponding to the selected and non-selected columns. It has been shown that these constraints on the singular values can be used to derive a theoretical guarantee for the column-based reconstruction error in terms of the spectral norm @cite_12 .
{ "cite_N": [ "@cite_28", "@cite_32", "@cite_44", "@cite_15", "@cite_16", "@cite_12" ], "mid": [ "", "", "2007266146", "2008205484", "2021604182", "2093010960" ], "abstract": [ "", "", "We develop algorithms and implementations for computing rank-revealing QR (RRQR) factorizations of dense matrices. First, we develop an efficient block algorithm for approximating an RRQR factorization, employing a windowed version of the commonly used Golub pivoting strategy, aided by incremental condition estimation. Second, we develop efficiently implementable variants of guaranteed reliable RRQR algorithms for triangular matrices originally suggested by Chandrasekaran and Ipsen and by Pan and Tang. We suggest algorithmic improvements with respect to condition estimation, termination criteria, and Givens updating. By combining the block algorithm with one of the triangular postprocessing steps, we arrive at an efficient and reliable algorithm for computing an RRQR factorization of a dense matrix. Experimental results on IBM RS 6000 SGI R8000 platforms show that this approach performs up to three times faster that the less reliable QR factorization with column pivoting as it is currently implemented in LAPACK, and comes within 15 of the performance of the LAPACK block algorithm for computing a QR factorization without any column exchanges. Thus, we expect this routine to be useful in may circumstances where numerical rank deficiency cannot be ruled out, but currently has been ignored because of the computational cost of dealing with it.", "Abstract : An algorithm is presented for computing a column permutation Pi and a QR-factorization (A)(Pi) = QR of an m by n (m or = n) matrix A such that a possible rank deficiency of A will be revealed in the triangular factor R having a small lower right block. For low rank deficient matrices, the algorithm is guaranteed to reveal the rank of A and the cost is only slightly more than the cost of one regular QR-factorization. 
A posteriori upper and lower bounds on the singular values of A are derived and can be used to infer the numerical rank of A. Keywords: QR-Factorization; Rank deficient matrices; Least squares computation; Subset selection; Rank; Singular values.", "Abstract By exploring properties of Schur complements, this paper presents bounds on the existence of rank-revealing LU factorizations that are comparable with those of rank-revealing QR factorizations. The new bounds provide substantial improvement over previously derived bounds. This paper also proposes two algorithms using Gaussian elimination with a “block pivoting” strategy to select a subset of columns from a given matrix which has a guaranteed relatively large smallest singular value. Each of these two algorithms is faster than its orthogonal counterpart for dense matrices. If implemented appropriately, these algorithms are faster than the corresponding rank-revealing QR methods, even when the orthogonal matrices are not explicitly updated. Based on these two algorithms, an algorithm using only Gaussian elimination for computing rank-revealing LU factorizations is introduced.", "Motivated by the enormous amounts of data collected in a large IT service provider organization, this paper presents a method for quickly and automatically summarizing and extracting meaningful insights from the data. Termed Clustered Subset Selection (CSS), our method enables program-guided data explorations of high-dimensional data matrices. CSS combines clustering and subset selection into a coherent and intuitive method for data analysis. In addition to a general framework, we introduce a family of CSS algorithms with different clustering components such as k-means and Close-to-Rank-One (CRO) clustering, and Subset Selection components such as best rank-one approximation and Rank-Revealing QR (RRQR) decomposition. 
From an empirical perspective, we illustrate that CSS is achieving significant improvements over existing Subset Selection methods in terms of approximation errors. Compared to existing Subset Selection techniques, CSS is also able to provide additional insight about clusters and cluster representatives. Finally, we present a case-study of program-guided data explorations using CSS on a large amount of IT service delivery data collection." ] }
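The norm-based column pivoting rule underlying these QR-based methods can be sketched in a few lines: repeatedly take the column with the largest residual norm, then orthogonalize the remaining columns against it. This is plain Gram-Schmidt-style pivoting for illustration, not a numerically hardened RRQR implementation.

```python
import numpy as np

def qr_pivot_select(A, k):
    """Select k columns by the pivoting rule of QR with column pivoting:
    greedily pick the column of largest residual norm, then deflate."""
    E = np.array(A, dtype=float)
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.sum(E * E, axis=0)))   # largest residual norm
        idx.append(j)
        q = E[:, j] / np.linalg.norm(E[:, j])
        E -= np.outer(q, q @ E)                     # project q out of all columns
    C = np.asarray(A, dtype=float)[:, idx]
    # Frobenius-norm error of projecting A onto the span of the chosen columns
    err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A)
    return idx, err
```

On an exactly rank-k matrix, the k pivots span the column space, so the reconstruction error is zero up to rounding.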
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
Besides methods based on QR decomposition, several recent methods have been proposed for directly selecting a subset of columns from the data matrix. @cite_12 proposed a deterministic column subset selection method which first groups columns into clusters and then selects a subset of columns from each cluster. The authors proposed a general framework in which different clustering and subset selection algorithms can be employed to select a subset of representative columns. Çivril and Magdon-Ismail @cite_10 @cite_40 presented a deterministic algorithm which greedily selects columns from the data matrix that best represent the leading right singular vectors of the matrix. This algorithm, while accurate, depends on computing the leading singular vectors of the matrix, which is computationally very complex for large matrices.
{ "cite_N": [ "@cite_40", "@cite_10", "@cite_12" ], "mid": [ "2062570725", "1554201860", "2093010960" ], "abstract": [ "Given a real matrix A∈Rm×n of rank r, and an integer k<r, the sum of the outer products of top k singular vectors scaled by the corresponding singular values provide the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c=O(klogkϵ2η2(A)) such that ‖A−ΠCA‖F≤(1+ϵ)‖A−Ak‖F, where C is the matrix composed of the c columns, ΠC is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest.", "Given a matrix A e ℝ m ×n of rank r, and an integer k < r, the top k singular vectors provide the best rank-k approximation to A. When the columns of A have specific meaning, it is desirable to find (provably) \"good\" approximations to A k which use only a small number of columns in A. Proposed solutions to this problem have thus far focused on randomized algorithms. Our main result is a simple greedy deterministic algorithm with guarantees on the performance and the number of columns chosen. 
Specifically, our greedy algorithm chooses c columns from A with @math such that @math where C gr is the matrix composed of the c columns, @math is the pseudo-inverse of C gr ( @math is the best reconstruction of A from C gr), and ¼(A) is a measure of the coherence in the normalized columns of A. The running time of the algorithm is O(SVD(A k) + mnc) where SVD(A k) is the running time complexity of computing the first k singular vectors of A. To the best of our knowledge, this is the first deterministic algorithm with performance guarantees on the number of columns and a (1 + e) approximation ratio in Frobenius norm. The algorithm is quite simple and intuitive and is obtained by combining a generalization of the well known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation. Tightening the analysis along either of these two dimensions would yield improved results.", "Motivated by the enormous amounts of data collected in a large IT service provider organization, this paper presents a method for quickly and automatically summarizing and extracting meaningful insights from the data. Termed Clustered Subset Selection (CSS), our method enables program-guided data explorations of high-dimensional data matrices. CSS combines clustering and subset selection into a coherent and intuitive method for data analysis. In addition to a general framework, we introduce a family of CSS algorithms with different clustering components such as k-means and Close-to-Rank-One (CRO) clustering, and Subset Selection components such as best rank-one approximation and Rank-Revealing QR (RRQR) decomposition. From an empirical perspective, we illustrate that CSS is achieving significant improvements over existing Subset Selection methods in terms of approximation errors. Compared to existing Subset Selection techniques, CSS is also able to provide additional insight about clusters and cluster representatives. 
Finally, we present a case-study of program-guided data explorations using CSS on a large amount of IT service delivery data collection." ] }
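The greedy criterion described above (pick the column that most reduces the Frobenius reconstruction error, then update) can be sketched as follows. This simplified version recomputes the gains directly on the residual matrix at every step rather than using the cited paper's recursive error formula, so it illustrates the selection rule, not the paper's speedups.

```python
import numpy as np

def greedy_css(A, k, tol=1e-12):
    """Greedy column subset selection (simplified sketch).

    At each step, pick the column whose normalized direction best explains
    the current residual matrix in Frobenius norm, then deflate the
    residuals against that direction."""
    E = np.array(A, dtype=float)      # residual of A after projecting on S
    selected = []
    for _ in range(k):
        norms = np.sum(E * E, axis=0)
        # gain of column j = ||E^T e_j||^2 / ||e_j||^2 with e_j = E[:, j]
        gains = np.sum((E.T @ E) ** 2, axis=0) / np.maximum(norms, tol)
        gains[selected] = -1.0        # never reselect a chosen column
        j = int(np.argmax(gains))
        selected.append(j)
        v = E[:, j] / max(np.linalg.norm(E[:, j]), tol)
        E -= np.outer(v, v @ E)       # project residuals off the new direction
    C = np.asarray(A, dtype=float)[:, selected]
    err = np.linalg.norm(A - C @ np.linalg.pinv(C) @ A)
    return selected, err
```

The quadratic recomputation of gains makes this O(n^2) per step; the recursive formula in the cited work exists precisely to avoid that cost at scale.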
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
Recently, @cite_8 presented a column subset selection algorithm which first calculates the top- @math right singular vectors of the data matrix (where @math is the target rank) and then uses deterministic sparsification methods to select @math columns from the data matrix. The authors derived a theoretically near-optimal error bound for the rank- @math column-based approximation. Deshpande and Rademacher @cite_29 presented a polynomial-time deterministic algorithm for volume sampling with a theoretical guarantee for @math . Quite recently, Guruswami and Sinop @cite_23 presented a deterministic algorithm for volume sampling with a theoretical guarantee for @math . The deterministic volume sampling algorithms are, however, more complex than the algorithms presented in this paper, and they are infeasible for large data sets.
{ "cite_N": [ "@cite_29", "@cite_23", "@cite_8" ], "mid": [ "2089135543", "1524253248", "2547648546" ], "abstract": [ "We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. 
In this paper, we show an almost matching lower bound of @math , even for @math .", "We prove that for any real-valued matrix X e Rmxn, and positive integers r ≥ k, there is a subset of r columns of X such that projecting X onto their span gives a [EQUATION]-approximation to best rank-k approximation of X in Frobenius norm. We show that the trade-off we achieve between the number of columns and the approximation ratio is optimal up to lower order terms. Furthermore, there is a deterministic algorithm to find such a subset of columns that runs in O(rnmω log m) arithmetic operations where ω is the exponent of matrix multiplication. We also give a faster randomized algorithm that runs in O(rnm2) arithmetic operations.", "We consider low-rank reconstruction of a matrix using a subset of its columns and present asymptotically optimal algorithms for both spectral norm and Frobenius norm reconstruction. The main tools we introduce to obtain our results are (i) the use of fast approximate SVD-like decompositions for column-based matrix reconstruction, and (ii) two deterministic algorithms for selecting rows from matrices with orthonormal columns, building upon the sparse representation theorem for decompositions of the identity that appeared in [J. D. Batson, D. A. Spielman, and N. Srivastava, Twice-Ramanujan sparsifiers, in Proceedings of the 41st Annual ACM Symposium on Theory of Computing (STOC), 2009, pp. 255--262]." ] }
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
For instance, @cite_33 proposed a two-stage hybrid algorithm for column subset selection which runs in @math . In the first stage, the algorithm samples @math columns based on probabilities calculated using the @math -leading right singular vectors. In the second stage, a rank-revealing QR (RRQR) algorithm is employed to select exactly @math columns from the columns sampled in the first stage. The authors suggested repeating the selection process 40 times in order to provably reduce the failure probability, and proved good theoretical guarantees for the algorithm in terms of the spectral and Frobenius norms. However, the algorithm depends on calculating the leading @math right singular vectors, which is computationally expensive for large data sets.
{ "cite_N": [ "@cite_33" ], "mid": [ "2950958145" ], "abstract": [ "We consider the problem of selecting the best subset of exactly @math columns from an @math matrix @math . We present and analyze a novel two-stage algorithm that runs in @math time and returns as output an @math matrix @math consisting of exactly @math columns of @math . In the first (randomized) stage, the algorithm randomly selects @math columns according to a judiciously-chosen probability distribution that depends on information in the top- @math right singular subspace of @math . In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly @math columns from the set of columns selected in the first stage. Let @math be the @math matrix containing those @math columns, let @math denote the projection matrix onto the span of those columns, and let @math denote the best rank- @math approximation to the matrix @math . Then, we prove that, with probability at least 0.8, @math This Frobenius norm bound is only a factor of @math worse than the best previously existing existential result and is roughly @math better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, @math This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on @math , whereas previous results depend on @math ; if these two quantities are comparable, then our bound is asymptotically worse by a @math factor." ] }
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
The greedy CSS algorithm differs from the greedy algorithm proposed by Çivril and Magdon-Ismail @cite_10 @cite_40 in that the latter depends on first calculating the Singular Value Decomposition of the data matrix, which is computationally expensive, especially for large matrices. The proposed algorithm is also more efficient than the recently proposed volume sampling algorithms @cite_29 @cite_23 .
{ "cite_N": [ "@cite_40", "@cite_29", "@cite_10", "@cite_23" ], "mid": [ "2062570725", "2089135543", "1554201860", "1524253248" ], "abstract": [ "Given a real matrix A∈Rm×n of rank r, and an integer k<r, the sum of the outer products of top k singular vectors scaled by the corresponding singular values provide the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c=O(klogkϵ2η2(A)) such that ‖A−ΠCA‖F≤(1+ϵ)‖A−Ak‖F, where C is the matrix composed of the c columns, ΠC is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest.", "We give efficient algorithms for volume sampling, i.e., for picking @math -subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). 
In other words, we can efficiently sample @math -subsets of @math with probabilities proportional to the corresponding @math by @math principal minors of any given @math by @math positive semi definite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section @math of KV , also implicit in BDM, DRVW ). Our first algorithm for volume sampling @math -subsets of rows from an @math -by- @math matrix runs in @math arithmetic operations (where @math is the exponent of matrix multiplication) and a second variant of it for @math -approximate volume sampling runs in @math arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small @math . Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given @math , in @math arithmetic operations we can find @math of its rows such that projecting onto their span gives a @math -approximation to the matrix of rank @math closest to @math under the Frobenius norm. This improves the @math -approximation of Boutsidis, Drineas and Mahoney BDM and matches the lower bound shown in DRVW . The method of conditional expectations gives a algorithm with the same complexity. The running time can be improved to @math at the cost of losing an extra @math in the approximation factor. (2) The same rows and projection as in the previous point give a @math -approximation to the matrix of rank @math closest to @math under the spectral norm. In this paper, we show an almost matching lower bound of @math , even for @math .", "Given a matrix A e ℝ m ×n of rank r, and an integer k < r, the top k singular vectors provide the best rank-k approximation to A. When the columns of A have specific meaning, it is desirable to find (provably) \"good\" approximations to A k which use only a small number of columns in A. Proposed solutions to this problem have thus far focused on randomized algorithms. 
Our main result is a simple greedy deterministic algorithm with guarantees on the performance and the number of columns chosen. Specifically, our greedy algorithm chooses c columns from A with @math such that @math where C gr is the matrix composed of the c columns, @math is the pseudo-inverse of C gr ( @math is the best reconstruction of A from C gr), and ¼(A) is a measure of the coherence in the normalized columns of A. The running time of the algorithm is O(SVD(A k) + mnc) where SVD(A k) is the running time complexity of computing the first k singular vectors of A. To the best of our knowledge, this is the first deterministic algorithm with performance guarantees on the number of columns and a (1 + e) approximation ratio in Frobenius norm. The algorithm is quite simple and intuitive and is obtained by combining a generalization of the well known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation. Tightening the analysis along either of these two dimensions would yield improved results.", "We prove that for any real-valued matrix X e Rmxn, and positive integers r ≥ k, there is a subset of r columns of X such that projecting X onto their span gives a [EQUATION]-approximation to best rank-k approximation of X in Frobenius norm. We show that the trade-off we achieve between the number of columns and the approximation ratio is optimal up to lower order terms. Furthermore, there is a deterministic algorithm to find such a subset of columns that runs in O(rnmω log m) arithmetic operations where ω is the exponent of matrix multiplication. We also give a faster randomized algorithm that runs in O(rnm2) arithmetic operations." ] }
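The greedy criterion discussed above can be sketched naively as follows; the recursive error formula that makes the paper's algorithm fast is deliberately omitted here, so this version recomputes the selection scores from scratch at every step and only illustrates the criterion itself.

```python
import numpy as np

def greedy_css(A, k):
    """Greedy CSS sketch: at each step select the column j maximizing
    ||R^T r_j||^2 / ||r_j||^2, the one-step drop in squared Frobenius
    reconstruction error, where R is the residual of A after projecting
    out the columns chosen so far. Naive version for illustration."""
    R = np.array(A, dtype=float)
    chosen = []
    for _ in range(k):
        norms2 = np.sum(R * R, axis=0)
        # Skip columns already eliminated by earlier deflations
        # (absolute tolerance; fine for this sketch).
        norms2[norms2 < 1e-12] = np.inf
        # Column j of R^T R is R^T r_j, so its squared column norm is
        # the numerator of the greedy score.
        scores = np.sum((R.T @ R) ** 2, axis=0) / norms2
        j = int(np.argmax(scores))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)             # deflate: project out column j
    return chosen
```

On an exactly rank-k matrix the k selected columns span the column space, so the reconstruction error vanishes up to floating-point noise.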
1312.6838
2951702784
In today's information systems, the availability of massive amounts of data necessitates the development of fast and accurate algorithms to summarize these data and represent them in a succinct format. One crucial problem in big data analytics is the selection of representative instances from large and massively-distributed data, which is formally known as the Column Subset Selection (CSS) problem. The solution to this problem enables data analysts to understand the insights of the data and explore its hidden structure. The selected instances can also be used for data preprocessing tasks such as learning a low-dimensional embedding of the data points or computing a low-rank approximation of the corresponding matrix. This paper presents a fast and accurate greedy algorithm for large-scale column subset selection. The algorithm minimizes an objective function which measures the reconstruction error of the data matrix based on the subset of selected columns. The paper first presents a centralized greedy algorithm for column subset selection which depends on a novel recursive formula for calculating the reconstruction error of the data matrix. The paper then presents a MapReduce algorithm which selects a few representative columns from a matrix whose columns are massively distributed across several commodity machines. The algorithm first learns a concise representation of all columns using random projection, and it then solves a generalized column subset selection problem at each machine in which a subset of columns are selected from the sub-matrix on that machine such that the reconstruction error of the concise representation is minimized. The paper demonstrates the effectiveness and efficiency of the proposed algorithm through an empirical evaluation on benchmark data sets.
In comparison to other CSS methods, the distributed algorithm proposed in this paper is designed to be MapReduce-efficient. In the selection step, representative columns are selected based on a common representation. The common representation proposed in this work is based on random projection. This is more efficient than the work of Çivril and Magdon-Ismail @cite_40 , which selects columns based on the leading singular vectors. In comparison to other deterministic methods, the proposed algorithm is specifically designed to be parallelized, which makes it applicable to big data matrices whose columns are massively distributed. On the other hand, the two-step scheme of distributed then centralized selection is similar to that of the hybrid CSS methods. The proposed algorithm, however, employs a deterministic procedure at the distributed selection phase, which is more accurate than the randomized selection employed by hybrid methods in their first phase.
{ "cite_N": [ "@cite_40" ], "mid": [ "2062570725" ], "abstract": [ "Given a real matrix A∈Rm×n of rank r, and an integer k<r, the sum of the outer products of top k singular vectors scaled by the corresponding singular values provide the best rank-k approximation Ak to A. When the columns of A have specific meaning, it might be desirable to find good approximations to Ak which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A with c=O(klogkϵ2η2(A)) such that ‖A−ΠCA‖F≤(1+ϵ)‖A−Ak‖F, where C is the matrix composed of the c columns, ΠC is the matrix projecting the columns of A onto the space spanned by C and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm. The results indicate that in practice, the performance of the algorithm can be significantly better than the bounds suggest." ] }
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company consider profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only to generate accurate predictions, but to maximize the probability of the desired outcome. For example, a direct marketing manager can choose which type of a special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances, which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at an instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is the most likely to take the effect. The proof of concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
The reader is referred to recent surveys on mining actionable knowledge @cite_15 @cite_29 , which discuss data mining for decision support (follow-up actions). The role of domain experts is emphasized in producing action rules from data mining results. The implications of data mining for actions have been under discussion for more than a decade @cite_27 ; however, these investigations typically focus on different aspects of the knowledge discovery process than our study. In particular, actions are not learned directly from the data; instead, follow-up actions taken as a result of discovering an interesting rule are considered. Our learning process is focused on learning action rules from data. Action rules are discovered either after a prior extraction of classification rules or directly from the data. The proposed Seek approach falls into the first type, while the Twist and the Contextual approaches represent the second type.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_29" ], "mid": [ "1488315716", "1629492576", "2106541455" ], "abstract": [ "An approach to defining actionability as a measure of interestingness of patterns is proposed. This approach is based on the concept of an action hierarchy which is defined as a tree of actions with patterns and pattern templates (data mining queries) assigned to its nodes. A method for discovering actionable patterns is presented and various techniques for optimizing the discovery process are proposed.", "The data mining process consists of a series of steps ranging from data cleaning, data selection and transformation, to pattern evaluation and visualization. One of the central problems in data mining is to make the mined patterns or knowledge actionable. Here, the term actionable refers to the mined patterns suggest concrete and profitable actions to the decision-maker. That is, the user can do something to bring direct benefits (increase in profits, reduction in cost, improvement in efficiency, etc.) to the organization's advantage. However, there has been written no comprehensive survey available on this topic. The goal of this paper is to fill the void. In this paper, we first present two frameworks for mining actionable knowledge that are inexplicitly adopted by existing research methods. Then we try to situate some of the research on this topic from two different viewpoints: 1) data mining tasks and 2) adopted framework. Finally, we specify issues that are either not addressed or insufficiently studied yet and conclude the paper.", "Actionable knowledge has been qualitatively and intensively studied in the social sciences. Its marriage with data mining is only a recent story. On the one hand, data mining has been booming for a while and has attracted an increasing variety of increasing applications. 
On the other, it is a reality that the so-called knowledge discovered from data by following the classic frameworks often cannot support meaningful decision-making actions. This shows the poor relationship and significant gap between data mining research and practice, and between knowledge, power, and action, and forms an increasing imbalance between research outcomes and business needs. Thorough and innovative retrospection and thinking are timely in bridging the gaps and promoting data mining toward next-generation research and development: namely, the paradigm shift from knowledge discovery from data to actionable knowledge discovery and delivery. © 2012 Wiley Periodicals, Inc. © 2012 Wiley Periodicals, Inc." ] }
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company consider profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only to generate accurate predictions, but to maximize the probability of the desired outcome. For example, a direct marketing manager can choose which type of a special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances, which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at an instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is the most likely to take the effect. The proof of concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
Mining action rules without prior classification is often formulated as an association rule discovery problem @cite_18 @cite_28 @cite_7 @cite_0 . The main focus is on how to formulate good interestingness measures for discovering actionable knowledge: conditional lift (conditioned on the actionable attribute) @cite_28 , unified interestingness @cite_7 , support and confidence with alternative actions @cite_18 . The latter work also employs cost constraints for the actions.
{ "cite_N": [ "@cite_28", "@cite_18", "@cite_0", "@cite_7" ], "mid": [ "2099141560", "1981843994", "2017740606", "2158741508" ], "abstract": [ "This paper proposes an algorithm to discover novel association rules, combined association rules. Compared with conventional association rule, this combined association rule allows users to perform actions directly. Combined association rules are always organized as rule sets, each of which is composed of a number of single combined association rules. These single rules consist of non-actionable attributes, actionable attributes, and class attribute, with the rules in one set sharing the same non-actionable attributes. Thus, for a group of objects having the same non-actionable attributes, the actions corresponding to a preferred class can be performed directly. However, standard association rule mining algorithms encounter many difficulties when applied to combined association rule mining, and hence new algorithms have to be developed for combined association rule mining. In this paper, we will focus on rule generation and interestingness measures in combined association rule mining. In rule generation, the frequent itemsets are discovered among itemset groups to improve efficiency. New interestingness measures are defined to discover more actionable knowledge. In the case study, the proposed algorithm is applied into the field of social security. The combined association rule provides much greater actionable knowledge to business owners and users.", "Action rules provide hints to a business user what actions (i.e. changes within some values of flexible attributes) should be taken to improve the profitability of customers. That is, taking some actions to re-classify some customers from less desired decision class to the more desired one. However, in previous work, each action rule was constructed from two rules, extracted earlier, defining different profitability classes. 
In this paper, we make a first step towards formally introducing the problem of mining action rules from scratch and present formal definitions. In contrast to previous work, our formulation provides guarantee on verifying completeness and correctness of discovered action rules. In addition to formulating the problem from an inductive learning viewpoint, we provide theoretical analysis on the complexities of the problem and its variations. Furthermore, we present efficient algorithms for mining action rules from scratch. In an experimental study we demonstrate the usefulness of our techniques.", "Many applications can benefit from constructing models to predict the behavior of an entity. However, such models do not provide the user with explicit knowledge that can be directly used to influence (restrain or encourage) behavior for the user's interest. Undoubtedly, the user often exactly needs such knowledge. This type of knowledge is called actionable knowledge. Actionability is a very important criterion measuring the interestingness of mined patterns. In this paper, to mine such knowledge, we take a first step toward formally defining a new class of data mining problem, named actionable behavioral rule mining. Our definition explicitly states the problem as a search problem in a framework of support and expected utility. We also propose two algorithms for mining such rules. Our experiment shows the validity of our approach, as well as the practical value of our defined problem.", "Most data mining algorithms and tools stop at the mining and delivery of patterns satisfying expected technical interestingness. There are often many patterns mined but business people either are not interested in them or do not know what follow-up actions to take to support their business decisions. This issue has seriously affected the widespread employment of advanced data mining techniques in greatly promoting enterprise operational quality and productivity. 
In this paper, we present a formal view of actionable knowledge discovery (AKD) from the system and decision-making perspectives. AKD is a closed optimization problem-solving process from problem definition, framework model design to actionable pattern discovery, and is designed to deliver operable business rules that can be seamlessly associated or integrated with business processes and systems. To support such processes, we correspondingly propose, formalize, and illustrate four types of generic AKD frameworks: Postanalysis-based AKD, Unified-Interestingness-based AKD, Combined-Mining-based AKD, and Multisource Combined-Mining-based AKD (MSCM-AKD). A real-life case study of MSCM-based AKD is demonstrated to extract debt prevention patterns from social security data. Substantial experiments show that the proposed frameworks are sufficiently general, flexible, and practical to tackle many complex problems and applications by extracting actionable deliverables for instant decision making." ] }
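As a toy illustration of a conditional-lift-style measure (conditioning on a stable context and comparing the desired-outcome rate under a given action against the context baseline), consider the sketch below. The record layout, attribute names, and data are invented for the example; the cited papers define richer measures.

```python
def conditional_lift(records, context, action_attr, action_value,
                     target, desired):
    """Lift of the desired outcome given an action, conditioned on a
    stable context: P(desired | context, action) / P(desired | context).
    Values above 1 suggest the action is worth taking in that context."""
    ctx = [r for r in records
           if all(r[k] == v for k, v in context.items())]
    base = sum(r[target] == desired for r in ctx) / len(ctx)
    acted = [r for r in ctx if r[action_attr] == action_value]
    rate = sum(r[target] == desired for r in acted) / len(acted)
    return rate / base

# Hypothetical customer records: 'segment' is stable, 'offer' is actionable.
data = [
    {"segment": "young", "offer": "A", "response": "yes"},
    {"segment": "young", "offer": "A", "response": "yes"},
    {"segment": "young", "offer": "B", "response": "no"},
    {"segment": "young", "offer": "B", "response": "yes"},
    {"segment": "old",   "offer": "A", "response": "no"},
]
lift = conditional_lift(data, {"segment": "young"}, "offer", "A",
                        "response", "yes")
```

Here the baseline response rate among young customers is 3/4, while the rate under offer A is 2/2, giving a lift of 4/3 for that action in that context.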
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In the traditional predictive modeling instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company consider profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only to generate accurate predictions, but to maximize the probability of the desired outcome. For example, a direct marketing manager can choose which type of a special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances, which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at an instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is the most likely to take the effect. The proof of concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
A series of works employ rough set techniques for mining action rules @cite_19 @cite_12 @cite_2 @cite_3 . These techniques operate both with @cite_2 and without @cite_3 prior classification rules. In these works the action rules are the end goal for decision making; no predictive analytics task is explicitly associated. An example in @cite_12 considers a group of people who closed their bank accounts. The goal is to find the cause why these accounts were closed and to formulate an action that prevents it.
{ "cite_N": [ "@cite_19", "@cite_3", "@cite_12", "@cite_2" ], "mid": [ "1966237194", "113898578", "", "1752644800" ], "abstract": [ "Action rules (or actionable patterns) describe possible transitions of objects from one state to another with respect to a distinguished attribute. Strategies for discovering them can be divided into two types: rule based and object based. Rule-based actionable patterns are built on the foundations of preexisting rules. This approach consists of two main steps: (1) a standard learning method is used to detect interesting patterns in the form of classification rules, association rules, or clusters; (2) the second step is to use an automatic or semiautomatic strategy to inspect such results and derive possible action strategies. These strategies provide an insight of how values of some attributes need to be changed so the desirable objects can be shifted to a desirable group. Object-based approach assumes that actionable patterns are extracted directly from a database. System DEAR, presented in this paper, is an example of a rule-based approach. System ARD and system for association rules mining are examples of an object-based approach. Music Information Retrieval (MIR) is taken as an application domain. We show how to manipulate the music score using action rules. © 2011 Wiley Periodicals, Inc. © 2011 Wiley Periodicals, Inc.", "In this paper, we present an algorithm that discovers action rules from a decision table. Action rules describe possible transitions of objects from one state to another with respect to a distinguished attribute. The previous research on action rule discovery required the extraction of classification rules before constructing any action rule. 
The new proposed algorithm does not require pre-existing classification rules, and it uses a bottom up approach to generate action rules having minimal attribute involvement.", "", "Decision tables classifying customers into groups of different profitability are used for mining rules classifying customers. Attributes are divided into two groups: stable and flexible. By stable attributes we mean attributes which values can not be changed by a bank (age, marital status, number of children are the examples). On the other hand attributes (like percentage rate or loan approval to buy a house in certain area) which values can be changed or influenced by a bank are called flexible. Rules are extracted from a decision table given preference to flexible attributes. This new class of rules forms a special repository of rules from which new rules called actionrules are constructed. They show what actions should be taken to improve the profitability of customers." ] }
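The rule-based construction described above can be sketched in a few lines: pair a classification rule predicting the undesired class with one predicting the desired class, and propose value changes on flexible attributes while requiring the stable attributes to agree. This is a minimal illustrative sketch, not the cited DEAR/ARD systems; the dict encoding of rules and the banking attribute names are assumptions.

```python
# Minimal sketch of rule-based action-rule construction: stable
# attributes (e.g. age) cannot be acted on, flexible ones (e.g. rate)
# can.  The dict-based rule encoding is an illustrative assumption.

def action_rule(rule_from, rule_to, stable):
    """Return {attribute: (old, new)} changes turning rule_from's premise
    into rule_to's, or None if their stable conditions conflict."""
    for a in stable:
        if a in rule_from and a in rule_to and rule_from[a] != rule_to[a]:
            return None                  # stable attributes cannot be acted on
    return {a: (rule_from.get(a), v)
            for a, v in rule_to.items()
            if a not in stable and rule_from.get(a) != v}

# Rule for customers who closed accounts vs. a rule for loyal customers:
r_closed = {"age": "young", "rate": "high", "branch": "remote"}
r_loyal  = {"age": "young", "rate": "low",  "branch": "near"}
action_rule(r_closed, r_loyal, stable={"age"})
# -> {"rate": ("high", "low"), "branch": ("remote", "near")}
```

When the stable conditions of the two rules disagree, no action rule is produced, mirroring the stable/flexible attribute split of @cite_2 .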
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company considers profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
In @cite_25 @cite_30 actionable knowledge is extracted from decision trees. These approaches relate to the proposed Seek approach. The idea is to inspect the resulting tree and find which attributes could be changed in order to end up in a decision leaf with a higher probability of the desired outcome. Instead of assuming actionable and observable attributes, the authors introduce a cost matrix for changing the value of each attribute. Impossible actions are marked with infinite costs (such as changing an adult into a child).
{ "cite_N": [ "@cite_30", "@cite_25" ], "mid": [ "2135343156", "2163541574" ], "abstract": [ "Data mining has been applied to CRM (Customer Relationship Management) in many industries with a limited success. Most data mining tools can only discover customer models or profiles (such as customers who are likely attritors and customers who are loyal), but not actions that would improve customer relationship (such as changing attritors to loyal customers). We describe a novel algorithm that suggests actions to change customers from an undesired status (such as attritors) to a desired one (such as loyal). Our algorithm takes into account the cost of actions, and further it attempts to maximize the expected net profit. To our best knowledge, no data mining algorithms or tools today can accomplish this important task in CRM. The algorithm is implemented, with many advanced features, in a specialized and highly effective data mining software called Proactive Solution.", "Most data mining algorithms and tools stop at discovered customer models, producing distribution information on customer profiles. Such techniques, when applied to industrial problems such as customer relationship management (CRM), are useful in pointing out customers who are likely attritors and customers who are loyal, but they require human experts to postprocess the discovered knowledge manually. Most of the postprocessing techniques have been limited to producing visualization results and interestingness ranking, but they do not directly suggest actions that would lead to an increase in the objective function such as profit. In this paper, we present novel algorithms that suggest actions to change customers from an undesired status (such as attritors) to a desired one (such as loyal) while maximizing an objective function: the expected net profit. These algorithms can discover cost-effective actions to transform customers from undesirable classes to desirable ones. 
The approach we take integrates data mining and decision making tightly by formulating the decision making problems directly on top of the data mining results in a postprocessing step. To improve the effectiveness of the approach, we also present an ensemble of decision trees which is shown to be more robust when the training data changes. Empirical tests are conducted on both a realistic insurance application domain and UCI benchmark data" ] }
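The tree-inspection idea above can be sketched as follows: enumerate the root-to-leaf paths of a trained decision tree, price the attribute changes needed to reach each leaf via the cost matrix (with infinite cost for impossible actions), and keep the change set with the best expected net gain. The tree encoding, cost values and profit figure below are illustrative assumptions, not the algorithms of the cited papers.

```python
# Illustrative sketch of leaf-change actions on a decision tree:
# a leaf holds P(desired outcome); absent entries in the cost matrix
# mean the action is impossible (infinite cost).

TREE = ("service", {                                  # (attribute, {value: subtree})
    "basic":   ("rate", {"high": 0.2, "low": 0.5}),   # leaf = P(desired outcome)
    "premium": ("rate", {"high": 0.6, "low": 0.9}),
})
COST = {("service", "basic", "premium"): 50.0,        # (attr, old, new) -> cost
        ("rate", "high", "low"): 30.0}

def classify(tree, instance):
    """Descend to the leaf matching the instance; return its probability."""
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[instance[attr]]
    return tree

def leaves(tree, path=()):
    """All (attribute-assignments, leaf-probability) pairs of the tree."""
    if not isinstance(tree, tuple):
        return [(dict(path), tree)]
    attr, branches = tree
    return [leaf for value, sub in branches.items()
            for leaf in leaves(sub, path + ((attr, value),))]

def best_action(tree, instance, value):
    """Maximize (p_new - p_now) * value - cost; doing nothing scores 0."""
    p_now = classify(tree, instance)
    best = (0.0, {})
    for path, p_leaf in leaves(tree):
        changes = {a: v for a, v in path.items() if instance[a] != v}
        cost = sum(COST.get((a, instance[a], v), float("inf"))
                   for a, v in changes.items())
        gain = (p_leaf - p_now) * value - cost
        if gain > best[0]:
            best = (gain, changes)
    return best

customer = {"service": "basic", "rate": "high"}
gain, changes = best_action(TREE, customer, value=500.0)
# upgrading the service and lowering the rate yields the largest net gain
```

Note that "no action" (gain 0) is the fallback, so an action is only suggested when its expected profit exceeds its cost.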
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company considers profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
Both settings are sensible in different applications. The uplift modeling setting is valid in treatment applications, e.g. in direct advertising when discounts are offered. In such applications every action has a significant cost, and the question is whether to perform it or not for a given individual. Our setting is of a recommender nature rather than a treatment one. It is relevant, for example, to web analytics, where actions (e.g. providing an example-based or a theory-based feedback in web-based student assessment, or choosing the type of the recommendation approach) come at virtually no cost; thus we are interested in selecting the most appropriate one from a set of alternatives for each example under consideration. Note that in our setting the optimal action may also be no action. Table summarizes the main differences between our setting and the recent study in uplift modeling @cite_34 .
{ "cite_N": [ "@cite_34" ], "mid": [ "2011485768" ], "abstract": [ "Most classification approaches aim at achieving high prediction accuracy on a given dataset. However, in most practical cases, some action such as mailing an offer or treating a patient is to be taken on the classified objects, and we should model not the class probabilities themselves, but instead, the change in class probabilities caused by the action. The action should then be performed on those objects for which it will be most profitable. This problem is known as uplift modeling, differential response analysis, or true lift modeling, but has received very little attention in machine learning literature. An important modification of the problem involves several possible actions, when for each object, the model must also decide which action should be used in order to maximize profit. In this paper, we present tree-based classifiers designed for uplift modeling in both single and multiple treatment cases. To this end, we design new splitting criteria and pruning methods. The experiments confirm the usefulness of the proposed approaches and show significant improvement over previous uplift modeling techniques." ] }
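The uplift setting contrasted above can be illustrated with a minimal frequency-table estimator: the uplift of an action is the estimated change in response probability relative to a control, and "no action" competes on equal footing. The segment names, actions and counts below are made up for illustration; this is not the tree-based machinery of @cite_34 .

```python
# Frequency-table illustration of uplift: model the change in response
# probability caused by an action, not the probability itself.
from collections import defaultdict

class UpliftTable:
    """Per-(segment, action) response-rate estimates from raw counts."""
    def __init__(self):
        self.trials = defaultdict(int)   # (segment, action) -> observations
        self.hits = defaultdict(int)     # (segment, action) -> positive responses

    def observe(self, segment, action, responded):
        self.trials[(segment, action)] += 1
        self.hits[(segment, action)] += int(responded)

    def p(self, segment, action):
        n = self.trials[(segment, action)]
        return self.hits[(segment, action)] / n if n else 0.0

    def best_action(self, segment, actions, control="none"):
        """Pick the action with the largest uplift over the control; the
        control itself (uplift 0) may win, i.e. the optimal action is no action."""
        scored = [(self.p(segment, a) - self.p(segment, control), a)
                  for a in actions]
        uplift, action = max(scored + [(0.0, control)])
        return action, uplift
```

For instance, after observing 10/100 responses with no treatment and 30/100 with a discount for a segment, `best_action` recommends the discount with an uplift of 0.2; an action whose response rate falls below the control loses to "no action".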
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company considers profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
The problem of discovering action rules is closely related to @cite_38 . One of the earliest works related to choosing actions @cite_10 builds reasoning about actions from a causality perspective. A causality perspective can be considered complementary to our study in assessing the effects of actions. In relation to causality, the effects of interventions on decision rules are explored in @cite_37 . Causality @cite_13 relates to exploratory data analysis, where discovering the true cause and effect is the primary goal. In our case we may potentially find rules which are linked by correlation but not causality. However, as we assume that the data distribution is stationary, we would not be able to perform an action on rules that do not capture causality, i.e. we would not be able to change the attribute value. Therefore, domain experts would not indicate such attributes as actionable, as there would be no meaningful application task. For further discussion of the variety of settings involving explanatory and predictive modeling we refer the interested reader to @cite_14 and @cite_33 .
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_14", "@cite_33", "@cite_10", "@cite_13" ], "mid": [ "2496518686", "2074374546", "1973969044", "2044230810", "99297125", "" ], "abstract": [ "This chapter reviews techniques for learning causal relationships from data, in application to the problem of feature selection. Most feature selection methods do not attempt to uncover causal relationships between feature and target and focus instead on making best predictions. We examine situations in which the knowledge of causal relationships benefits feature selection. Such benefits may include: explaining relevance in terms of causal mechanisms, distinguishing between actual features and experimental artifacts, predicting the consequences of actions performed by external agents, and making predictions in non-stationary environments. Conversely, we highlight the benefits that causal discovery may draw from recent developments in feature selection theory and algorithms.", "Decision rules induced from a data set represent knowledge patterns relating premises and decisions in ‘if … , then …’ statements. Premise is a conjunction of elementary conditions relative to independent variables and decision is a conclusion relative to dependent variables. Given a set of decision rules induced from a data set, it is useful to estimate possible effects on the dependent variables caused by an intervention on some independent variables. The authors introduce a methodology for quantifying the impact of a strategy of intervention based on a decision rule induced from data. While the usual interestingness measures of decision rules are taking into account only characteristics of universe U where they come from, the measures of efficiency of intervention depend also on characteristics of universe U′ where intervention takes place. 
The authors are considering the intervention on a single independent variable and on a combination of these variables.", "Statistical modeling is a powerful tool for developing and testing theories by way of causal explanation, prediction, and description. In many disciplines there is near-exclusive use of statistical modeling for causal ex- planation and the assumption that models with high explanatory power are inherently of high predictive power. Conflation between explanation and pre- diction is common, yet the distinction must be understood for progressing scientific knowledge. While this distinction has been recognized in the phi- losophy of science, the statistical literature lacks a thorough discussion of the many differences that arise in the process of modeling for an explanatory ver- sus a predictive goal. The purpose of this article is to clarify the distinction between explanatory and predictive modeling, to discuss its sources, and to reveal the practical implications of the distinction to each step in the model- ing process.", "Market response models based on field-generated data need to address potential endogeneity in the regressors to obtain consistent parameter estimates. Another requirement is that market response models predict well in a holdout sample. With both requirements combined, it may seem reasonable to subject an endogeneity-corrected model to a holdout prediction task, and this is quite common in the academic marketing literature. One may be inclined to expect that the consistent parameter estimates obtained via instrumental variables IV estimation predict better than the biased ordinary least squares OLS estimates. This paper shows that this expectation is incorrect. 
That is, if the holdout sample is similar to the estimation sample so that the regressors are endogenous in both samples, holdout sample validation favors regression estimates that are not corrected for endogeneity i.e., OLS over estimates that are corrected for endogeneity i.e., IV estimation. We also discuss ways in which holdout samples may be used sensibly in the presence of endogeneity. A key takeaway is that if consistent parameter estimates are the primary model objective, the model should be validated with an exogenous rather than endogenous holdout sample.", "", "" ] }
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company considers profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
A probability-based framework for value-change actions is presented in @cite_11 , together with an approach for the Naive Bayes classifier as an instance of this framework.
{ "cite_N": [ "@cite_11" ], "mid": [ "1558666744" ], "abstract": [ "Inductive learning techniques such as the naive Bayes and decision tree algorithms have been extended in the past to handle different types of costs mainly by distinguishing different costs of classification errors. However, it is an equally important issue to consider how to handle the test costs associated with querying the missing values in a test case. When the value of an attribute is missing in a test case, it may or may not be worthwhile to take the effort to obtain its missing value, depending on how much the value results in a potential gain in the classification accuracy. In this paper, we show how to obtain a test-cost sensitive naive Bayes classifier (csNB) by including a test strategy which determines how unknown attributes are selected to perform test on in order to minimize the sum of the mis-classification costs and test costs. We propose and evaluate several potential test strategies including one that allows several tests to be done at once. We empirically evaluate the csNB method, and show that it compares favorably with its decision tree counterpart." ] }
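A minimal sketch of the value-change idea with a naive Bayes model: score P(desired | observables, action = a) for each candidate action value and return the argmax. The toy data, the Laplace smoothing, and the binary-attribute smoothing denominator are all simplifying assumptions of this sketch, not the construction of the cited framework.

```python
# Choosing an actionable attribute value with a count-based naive Bayes
# model.  Smoothing denominator assumes two values per attribute (an
# illustrative simplification).
from collections import Counter

def fit(rows):
    """rows: list of (attribute dict, label) pairs; returns count tables."""
    labels = Counter(label for _, label in rows)
    cond = Counter((a, v, label) for x, label in rows for a, v in x.items())
    return labels, cond

def posterior(model, x, label, alpha=1.0):
    """Unnormalized P(label | x) with Laplace smoothing."""
    labels, cond = model
    score = labels[label] / sum(labels.values())
    for a, v in x.items():
        score *= (cond[(a, v, label)] + alpha) / (labels[label] + 2 * alpha)
    return score

def choose_action(model, observed, action_attr, candidates, desired):
    """Pick the action value maximizing P(desired | observed, action)."""
    def p_desired(value):
        x = dict(observed, **{action_attr: value})
        scores = {lab: posterior(model, x, lab) for lab in model[0]}
        return scores[desired] / sum(scores.values())
    return max(candidates, key=p_desired)

rows = [({"segment": "new", "offer": "discount"}, "yes"),
        ({"segment": "new", "offer": "discount"}, "yes"),
        ({"segment": "new", "offer": "catalog"},  "no"),
        ({"segment": "old", "offer": "catalog"},  "yes"),
        ({"segment": "old", "offer": "discount"}, "no")]
model = fit(rows)
choose_action(model, {"segment": "new"}, "offer", ["discount", "catalog"], "yes")
```

For a new-segment client the sketch prefers the offer value under which the smoothed posterior of the desired response is highest, which is exactly the per-instance action selection discussed in the surrounding text.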
1312.6558
1799927848
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes. The goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company considers profiling a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable, but can be actively decided by a decision maker. Furthermore, in some of such applications the decision maker is interested not only in generating accurate predictions, but in maximizing the probability of the desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (actionable attribute), hoping that the right choice will result in a positive response with a higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions. Accurate choice of an action is critical for those instances which are on the borderline (e.g. users who do not have a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure which puts more emphasis on the situations where varying the action is most likely to take effect. The proof-of-concept experimental validation on two real-world case studies in web analytics and e-learning domains highlights the potential of the proposed approaches.
From the focused learning perspective there are remote relations to boosting @cite_6 , classification with a reject option @cite_24 and evaluating classifier competence @cite_36 . Boosting iterates over training rounds, each step putting more emphasis on the cases that were previously misclassified. Classification with a reject option evaluates the regions of competence and is allowed not to output a decision when its confidence is too low.
{ "cite_N": [ "@cite_24", "@cite_36", "@cite_6" ], "mid": [ "2161813894", "1978318335", "2070534370" ], "abstract": [ "We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.", "In this paper, a new classifier design methodology, confidence-based classifier design, is proposed to design classifiers with controlled confidence. This methodology is under the guidance of two optimal classification theories, a new classification theory for designing optimal classifiers with controlled error rates and the C.K. Chow's optimal classification theory for designing optimal classifiers with controlled conditional error. The new methodology also takes advantage of the current well-developed classifier's probability preserving and ordering properties. It calibrates the output scores of current classifiers to the conditional error or error rates. Thus, it can either classify input samples or reject them according to the output scores of classifiers. It can achieve some reasonable performance even though it is not an optimal solution. An example is presented to implement the new methodology using support vector machines (SVMs). 
The empirical cumulative density function method is used to estimate error rates from the output scores of a trained SVM. Furthermore, a new dynamic bin width allocation method is proposed to estimate sample conditional error and this method adapts to the underlying probabilities. The experimental results clearly demonstrate the efficacy of the suggested classifier design methodology.", "Abstract We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire and represents an improvement over his results, The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant′s polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances." ] }
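Classification with a reject option, as summarized above, reduces to a confidence test on the top posterior: answer only in regions of competence, abstain on borderline cases. The fixed threshold below is an illustrative stand-in for the cost-derived threshold of Chow's rule.

```python
# Reject-option classification: output the top class only when its
# posterior clears a confidence threshold, otherwise abstain (None).

def predict_with_reject(posteriors, threshold=0.75):
    """posteriors: {class: probability}.  Return a class, or None to reject."""
    label, p = max(posteriors.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None

predict_with_reject({"loyal": 0.90, "attritor": 0.10})   # confident case
predict_with_reject({"loyal": 0.55, "attritor": 0.45})   # borderline, rejected
```

The rejected borderline cases are precisely the instances the surrounding text identifies as most sensitive to the choice of action.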
1312.7243
2397659083
In this article, we consider the problem of computing a minimum dominating set for a given set @math of @math points in @math . Here the objective is to find a minimum cardinality subset @math of @math such that the union of the unit radius disks centered at the points in @math covers all the points in @math . We first propose simple 4-factor and 3-factor approximation algorithms in @math and @math time respectively, improving the time complexities by a factor of @math and @math respectively over the best known result available in the literature [M. De, G.K. Das, P. Carmi and S.C. Nandy, Approximation algorithms for a variant of discrete piercing set problem for unit disks, Int. J. of Comp. Geom. and Appl., to appear]. Finally, we propose a very important shifting lemma, which is of independent interest, and using this lemma we propose a @math -factor approximation algorithm and a PTAS for the minimum dominating set problem.
In the discrete unit disk cover (DUDC) problem, two sets @math and @math of points in @math are given; the objective is to choose a minimum number of unit disks @math centered at the points in @math such that the union of the disks in @math covers all the points in @math . Johnson @cite_7 proved that the DUDC problem is NP-hard. Mustafa and Ray in 2010 @cite_17 proposed a @math -approximation algorithm for @math (a PTAS) for the DUDC problem using an @math -net based local improvement approach. The fastest variant is obtained by setting @math , yielding a 3-factor approximation algorithm that runs in @math time, where @math and @math are the numbers of unit radius disks and points respectively @cite_5 . The high complexity of the PTAS has led to further research on constant factor approximation algorithms for the DUDC problem. A series of constant factor approximation algorithms for the DUDC problem is available in the literature:
{ "cite_N": [ "@cite_5", "@cite_7", "@cite_17" ], "mid": [ "1998920351", "2148043549", "2018541722" ], "abstract": [ "Given a set of n points and a set of m unit disks on a 2-dimensional plane, the discrete unit disk cover (DUDC) problem is (i) to check whether each point in is covered by at least one disk in or not and (ii) if so, then find a minimum cardinality subset such that the unit disks in cover all the points in . The discrete unit disk cover problem is a geometric version of the general set cover problem which is NP-hard. The general set cover problem is not approximable within , for some constant c, but the DUDC problem was shown to admit a constant factor approximation. In this paper, we provide an algorithm with constant approximation factor 18. The running time of the proposed algorithm is . The previous best known tractable solution for the same problem was a 22-factor approximation algorithm with running time .", "Abstract This is the twelfth edition of a quarterly column that provides continuing coverage of new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., New York 1979 (hereinafter referred to as “[GJ previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C-355, AT&T Bell Laboratories, Murray Hill, NJ 07974 (CSNET address: dsj.rabbit.btl@csnet-relay). Please include details, or at least sketches, of any new proofs (full papers are preferred). 
If the results are unpublished, please state explicitly that you would like them to be mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this Journal.", "We consider the problem of computing minimum geometric hitting sets in which, given a set of geometric objects and a set of points, the goal is to compute the smallest subset of points that hit all geometric objects. The problem is known to be strongly NP-hard even for simple geometric objects like unit disks in the plane. Therefore, unless P = NP, it is not possible to get Fully Polynomial Time Approximation Algorithms (FPTAS) for such problems. We give the first PTAS for this problem when the geometric objects are half-spaces in ℝ3 and when they are an r-admissible set regions in the plane (this includes pseudo-disks as they are 2-admissible). Quite surprisingly, our algorithm is a very simple local-search algorithm which iterates over local improvements only." ] }
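As a simple point of reference for the DUDC formulation above, the generic greedy set-cover heuristic repeatedly picks the unit disk covering the most still-uncovered points. It carries only the classical O(log n) set-cover guarantee, not the constant factors surveyed here; it is a baseline sketch, not any of the cited algorithms, and the point/center coordinates are made up.

```python
# Greedy set-cover baseline for discrete unit disk cover: each center's
# coverage set is precomputed, then the disk covering the most uncovered
# points is chosen until every coverable point is covered.
from math import hypot

def greedy_dudc(points, centers, r=1.0):
    """Indices of chosen disk centers; points no disk can reach are skipped."""
    covered_by = [{i for i, p in enumerate(points)
                   if hypot(p[0] - c[0], p[1] - c[1]) <= r}
                  for c in centers]
    uncovered = set().union(*covered_by) if covered_by else set()
    chosen = []
    while uncovered:
        j = max(range(len(centers)), key=lambda k: len(covered_by[k] & uncovered))
        chosen.append(j)
        uncovered -= covered_by[j]
    return chosen

pts  = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.0)]
ctrs = [(0.2, 0.0), (2.0, 0.5)]
greedy_dudc(pts, ctrs)        # both disks are needed to cover all three points
```

The constant-factor algorithms in the list that follows improve on such a logarithmic guarantee by exploiting the geometry of unit disks (strip decompositions, line separability, local search).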
1312.7243
2397659083
In this article, we consider the problem of computing a minimum dominating set for a given set @math of @math points in @math . Here the objective is to find a minimum cardinality subset @math of @math such that the union of the unit radius disks centered at the points in @math covers all the points in @math . We first propose simple 4-factor and 3-factor approximation algorithms in @math and @math time respectively, improving the time complexities by a factor of @math and @math respectively over the best known result available in the literature [M. De, G.K. Das, P. Carmi and S.C. Nandy, Approximation algorithms for a variant of discrete piercing set problem for unit disks, Int. J. of Comp. Geom. and Appl., to appear]. Finally, we propose a very important shifting lemma, which is of independent interest, and using this lemma we propose a @math -factor approximation algorithm and a PTAS for the minimum dominating set problem.
108-approximation algorithm [Cǎlinescu et al., 2004 @cite_4 ]
72-approximation algorithm [Narayanappa and Vojtěchovský, 2006 @cite_20 ]
38-approximation algorithm in O( @math ) time [, 2007 @cite_18 ]
22-approximation algorithm in O( @math ) time [, 2010 @cite_1 ]
18-approximation algorithm in O( @math ) time [, 2012 @cite_5 ]
15-approximation algorithm in O( @math ) time [Fraser and López-Ortiz, 2012 @cite_8 ]
@math -approximation algorithm in @math time [, 2013 @cite_26 ]
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_8", "@cite_1", "@cite_5", "@cite_20" ], "mid": [ "", "2175864873", "2021544892", "2592261725", "2106734202", "1998920351", "2138222927" ], "abstract": [ "", "In this paper we consider the discrete unit disk cover problem and the rectangular region cover problem as follows.", "Broadcasting is a fundamental operation which is frequent in wireless ad hoc networks. A simple broadcasting mechanism, known as flooding, is to let every node retransmit the message to all its 1-hop neighbors when receiving the first copy of the message. Despite its simplicity, flooding is very inefficient and can result in high redundancy, contention, and collision. One approach to reducing the redundancy is to let each node forward the message only to a small subset of 1-hop neighbors that cover all of the node's 2-hop neighbors. In this paper we propose two practical heuristics for selecting the minimum number of forwarding neighbors: an O(n log n) time algorithm that selects at most 6 times more forwarding neighbors than the optimum, and an O(n log^2 n) time algorithm with an improved approximation ratio of 3, where n is the number of 1- and 2-hop neighbors. The best previously known algorithm, due to Brönnimann and Goodrich [2], guarantees O(1) approximation in O(n^3 log n) time.", "We present a study of the Within-Strip Discrete Unit Disk Cover (WSDUDC) problem, which is a restricted version of the Discrete Unit Disk Cover (DUDC) problem. For the WSDUDC problem, there exists a set of points and a set of unit disks in the plane, and the points and disk centres are confined to a strip of fixed height. An optimal solution to the WSDUDC problem is a set of disks of minimum cardinality that covers all points in the input set.
We describe a range of approximation algorithms for the problem, including 4- and 3-approximate algorithms which apply for strips of height 2√2/3 ≈ 0.94 and 0.8 units respectively, as well as a general scheme for any strip with less than unit height. We prove that the WSDUDC problem is NP-complete on strips of any fixed height, which is our most interesting result from a theoretical standpoint. The result is also quite surprising, since a number of similar problems are tractable on strips of fixed height. Finally, we discuss how these results may be applied to known DUDC approximation algorithms.", "Given a set @math of m unit disks and a set @math of n points in the plane, the discrete unit disk cover problem is to select a minimum cardinality subset @math to cover @math . This problem is NP-hard [14] and the best previous practical solution is a 38-approximation algorithm by [5]. We first consider the line-separable discrete unit disk cover problem (the set of disk centers can be separated from the set of points by a line) for which we present an O(n(log n + m))-time algorithm that finds an exact solution. Combining our line-separable algorithm with techniques from the algorithm of [5] results in an O(m^2 n^4) time 22-approximate solution to the discrete unit disk cover problem.
The previous best known tractable solution for the same problem was a 22-factor approximation algorithm with running time .", "We present a polynomial time algorithm for the unit disk covering problem with an approximation factor 72, and show that this is the best possible approximation factor based on the method used. This is an improvement on the best known approximation factor of 108." ] }
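The constant-factor algorithms surveyed above are fairly involved; as a point of reference, the plain greedy set-cover heuristic below solves the same discrete unit disk cover instance with only an O(log n) guarantee. This is a hypothetical illustrative sketch of the problem itself, not an implementation of any of the cited algorithms.

```python
import math

def greedy_disk_cover(points, centers, r=1.0):
    """Greedy heuristic for discrete unit disk cover: repeatedly pick
    the disk covering the most still-uncovered points.  Returns the
    chosen center indices, or None if some point is not coverable."""
    # Precompute, for each candidate disk, the set of points it covers.
    covers = [
        {i for i, (px, py) in enumerate(points)
         if math.hypot(px - cx, py - cy) <= r}
        for (cx, cy) in centers
    ]
    uncovered = set(range(len(points)))
    chosen = []
    while uncovered:
        best = max(range(len(centers)),
                   key=lambda j: len(covers[j] & uncovered))
        gained = covers[best] & uncovered
        if not gained:          # no disk covers any remaining point
            return None
        chosen.append(best)
        uncovered -= gained
    return chosen
```

On line-separable or within-strip instances the cited algorithms exploit geometry to do much better than this generic greedy bound.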
1312.6042
2148332509
We propose to deal with sequential processes where only partial observations are available by learning a latent representation space on which policies may be accurately learned.
Efficient approaches have been proposed to extract high-level representations using deep learning @cite_1 , but few studies have proposed extensions to deal with sequential processes. A formal analysis has been proposed in @cite_6 . Models concerning partially observable sequential processes have been proposed in the context of control tasks. For example, @cite_5 and @cite_0 present models using recurrent neural networks (RNN) to learn a controller for a given task. In these approaches, informative representations are constructed by the RNN, but these representations are driven by the task to solve. Some unsupervised approaches have been proposed recently. In that case, a representation learning model is learned over the observations, without needing to define a reward function. The policy is learned afterward using these representations, usually with classical RL algorithms. For instance, @cite_4 propose a model based on a recurrent auto-associative memory with history of arbitrary depth, while @cite_3 present an extension of RNN for unsupervised learning. In comparison to these models, our transductive approach is simultaneously based on unsupervised trajectories, and also allows us to choose which action to take even if observations are missing, by learning a dynamic model in the latent space.
{ "cite_N": [ "@cite_4", "@cite_1", "@cite_6", "@cite_3", "@cite_0", "@cite_5" ], "mid": [ "130805539", "2953130357", "", "", "2002196558", "2084920657" ], "abstract": [ "Traditional Reinforcement Learning methods are insufficient for AGIs who must be able to learn to deal with Partially Observable Markov Decision Processes. We investigate a novel method for dealing with this problem: standard RL techniques using as input the hidden layer output of a Sequential Constant-Size Compressor (SCSC). The SCSC takes the form of a sequential Recurrent Auto-Associative Memory, trained through standard back-propagation. Results illustrate the feasibility of this approach -- this system learns to deal with highdimensional visual observations (up to 640 pixels) in partially observable environments where there are long time lags (up to 12 steps) between relevant sensory information and necessary action.", "Numerous control and learning problems face the situation where sequences of high-dimensional highly dependent data are available, but no or little feedback is provided to the learner. To address this issue, we formulate the following problem. Given a series of observations X_0,...,X_n coming from a large (high-dimensional) space X, find a representation function f mapping X to a finite space Y such that the series f(X_0),...,f(X_n) preserve as much information as possible about the original time-series dependence in X_0,...,X_n. We show that, for stationary time series, the function f can be selected as the one maximizing the time-series information h_0(f(X))- h_ (f(X)) where h_0(f(X)) is the Shannon entropy of f(X_0) and h_ (f(X)) is the entropy rate of the time series f(X_0),...,f(X_n),... Implications for the problem of optimal control are presented.", "", "", "Neuroevolution, the artificial evolution of neural networks, has shown great promise on continuous reinforcement learning tasks that require memory. 
However, it is not yet directly applicable to realistic embedded agents using high-dimensional (e.g. raw video images) inputs, requiring very large networks. In this paper, neuroevolution is combined with an unsupervised sensory pre-processor or compressor that is trained on images generated from the environment by the population of evolving recurrent neural network controllers. The compressor not only reduces the input cardinality of the controllers, but also biases the search toward novel controllers by rewarding those controllers that discover images that it reconstructs poorly. The method is successfully demonstrated on a vision-based version of the well-known mountain car benchmark, where controllers receive only single high-dimensional visual images of the environment, from a third-person perspective, instead of the standard two-dimensional state vector which includes information about velocity.", "An online learning algorithm for reinforcement learning with continually running recurrent networks in nonstationary reactive environments is described. Various kinds of reinforcement are considered as special types of input to an agent living in the environment. The agent's only goal is to maximize the amount of reinforcement received over time. Supervised learning techniques for recurrent networks serve to construct a differentiable model of the environmental dynamics which includes a model of future reinforcement. This model is used for learning goal-directed behavior in an online fashion. The possibility of using the system for planning future action sequences is investigated and this approach is compared to approaches based on temporal difference methods. A connection to metalearning (learning how to learn) is noted" ] }
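As a toy illustration of the last point above — acting when observations are missing by learning a dynamic model in the latent space — one can fit a linear transition model on latent trajectories by least squares and roll it forward. This is a minimal sketch under an assumed linear dynamic, not the model used in the paper.

```python
import numpy as np

def fit_latent_dynamics(Z):
    """Least-squares fit of a linear model z_{t+1} = A @ z_t, given one
    latent trajectory Z whose rows are ordered in time (an assumed
    simplification: real latent dynamics are rarely linear)."""
    Z0, Z1 = Z[:-1], Z[1:]
    X, *_ = np.linalg.lstsq(Z0, Z1, rcond=None)  # solves Z0 @ X ~= Z1
    return X.T                                   # so z_next ~= A @ z

def rollout(A, z, steps):
    """Predict future latent states when observations are missing."""
    out = []
    for _ in range(steps):
        z = A @ z
        out.append(z)
    return out
```

For example, a trajectory generated by a 90-degree rotation in a 2-D latent space is recovered exactly by the least-squares fit, and `rollout` then extrapolates the missing steps.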
1312.6036
1540122489
Natural disasters are a major threat to people, especially in developing countries such as Laos. ICT-based disaster management systems aim at supporting disaster warning and response efforts. However, the ability to communicate directly in both directions between the local and the administrative level is often not supported, and a tight integration into administrative workflows is missing. In this paper, we present the smartphone-based disaster and reporting system Mobile4D. It allows for bi-directional communication while being fully involved in administrative processes. We present the system setup and discuss integration into administrative structures in Lao PDR.
Several ICT frameworks and systems related to disasters are in use, most of them targeting the developed world. In developing countries, additional issues have to be faced. As @cite_3 point out, effective warning systems require "not only the use of ICTs, but also the existence of institutions that allow for the effective mobilization of their potential", so the effective inclusion of administrative units plays a critical role.
{ "cite_N": [ "@cite_3" ], "mid": [ "1549230983" ], "abstract": [ "The Indian Ocean tsunami of December 26th, 2004 was one of the greatest natural disasters; it was also the first Internet-mediated natural disaster. Despite the presumed ubiquity and power of advanced technologies including satellites and the Internet, no advance warning was given to the affected coastal populations by their governments or others. This article examines the conditions for the supply of effective early warnings of disasters, drawing from the experience of both the December 26th, 2004 tsunami and the false warnings issued on March 28th, 2005. The potential of information and communication technologies for prompt communication of hazard detection and monitoring information and for effective dissemination of alert and warning messages is examined. The factors contributing to the absence of institutions necessary for the realization of that potential are explored." ] }
1312.6036
1540122489
Natural disasters are a major threat to people, especially in developing countries such as Laos. ICT-based disaster management systems aim at supporting disaster warning and response efforts. However, the ability to communicate directly in both directions between the local and the administrative level is often not supported, and a tight integration into administrative workflows is missing. In this paper, we present the smartphone-based disaster and reporting system Mobile4D. It allows for bi-directional communication while being fully involved in administrative processes. We present the system setup and discuss integration into administrative structures in Lao PDR.
Sahana @cite_19 is a complex modular Open Source disaster management toolkit targeting large-scale disasters, especially for organizing and coordinating disaster response. It has been successfully applied in many less developed countries. A review of geohazard warning systems is given in @cite_10 .
{ "cite_N": [ "@cite_19", "@cite_10" ], "mid": [ "2025837846", "2078559511" ], "abstract": [ "Evaluating how the Sahana disaster information system coordinates disparate institutional and technical resources in the wake of the Indian Ocean tsunami.", "The advent and evolution of geohazard warning systems is a very interesting study. The two broad fields that are immediately visible are that of geohazard evaluation and subsequent warning dissemination. Evidently, the latter field lacks any systematic study or standards. Arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to try and systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts, and a developmental process can be undertaken. Hence, the methods and technologies for numerous geohazard warning systems have been assessed by putting them into suitable categories for better understanding of possible ways to analyze their efficacy as well as shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems available in any literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have been lacking a complete purpose. Some systems just assess the hazard and wait for other means to communicate, and some are designed only for communication and wait for the hazard information to be provided, which usually is after the mishap. Primarily, systems are left at the mercy of administrators and service providers and are not in real time. An integrated hazard evaluation and warning dissemination system could solve this problem. 
Warning systems have also suffered from complexity of nature, requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate having a convenient, fast, and generalized warning methodology, is surveyed in this review. The review concludes with the future scope of research in the field of hazard warning systems and some suggestions for developing an efficient mechanism toward the development of an automated integrated geohazard warning system." ] }
1312.6036
1540122489
Natural disasters are a large threat for people especially in developing countries such as Laos. ICT-based disaster management systems aim at supporting disaster warning and response efforts. However, the ability to directly communicate in both directions between local and administrative level is often not supported, and a tight integration into administrative workflows is missing. In this paper, we present the smartphone-based disaster and reporting system Mobile4D. It allows for bi-directional communication while being fully involved in administrative processes. We present the system setup and discuss integration into administrative structures in Lao PDR.
Mobile devices gain increasing importance in disaster cases. Disaster alert systems based on SMS have shown a good impact in developing countries @cite_7 . @cite_18 present an Android smartphone-based disaster alerting system which focuses mostly on routing issues in the disaster response phase. In general, the use of smartphones can have a great impact in developing countries. This has especially been shown in health-care-related cases @cite_15 , e.g., by providing the opportunity for remote diagnosis based on photos @cite_6 .
{ "cite_N": [ "@cite_18", "@cite_6", "@cite_7", "@cite_15" ], "mid": [ "2289441042", "2013477545", "", "2097379719" ], "abstract": [ "The Philippines is one of the countries in the world vulnerable to natural hazards because of its geographic location. It also lacks an efficient disaster management system that will help in times of need. One common scenario during disasters is that the activity of rescue and relief is not well-coordinated. For this reason, there is a need for a system that will help in the efficient provision of rescue and relief to disaster-affected areas. Since the use of smart phones is gaining interest in people, the disaster management system was implemented as a smart phone application using Google's Android operating system. The disaster management system Android application known as MyDisasterDroid determines the optimum route along different geographical locations that the volunteers and rescuers need to take in order to serve the most number of people and provide maximum coverage of the area in the shortest possible time. Genetic algorithm was applied for optimization and different parameters were varied to determine the most optimum route.", "This article describes a prototype system for quantifying bioassays and for exchanging the results of the assays digitally with physicians located off-site. The system uses paper-based microfluidic devices for running multiple assays simultaneously, camera phones or portable scanners for digitizing the intensity of color associated with each colorimetric assay, and established communications infrastructure for transferring the digital information from the assay site to an off-site laboratory for analysis by a trained medical professional; the diagnosis then can be returned directly to the healthcare provider in the field. The microfluidic devices were fabricated in paper using photolithography and were functionalized with reagents for colorimetric assays. 
The results of the assays were quantified by comparing the intensities of the color developed in each assay with those of calibration curves. An example of this system quantified clinically relevant concentrations of glucose and protein in artificial uri...", "", "The latest generation of smartphones are increasingly viewed as handheld computers rather than as phones, due to their powerful on-board computing capability, capacious memories, large screens and open operating systems that encourage application development. This paper provides a brief state-of-the-art overview of health and healthcare smartphone apps (applications) on the market today, including emerging trends and market uptake. Platforms available today include Android, Apple iOS, RIM BlackBerry, Symbian, and Windows (Windows Mobile 6.x and the emerging Windows Phone 7 platform). The paper covers apps targeting both laypersons patients and healthcare professionals in various scenarios, e.g., health, fitness and lifestyle education and management apps; ambient assisted living apps; continuing professional education tools; and apps for public health surveillance. Among the surveyed apps are those assisting in chronic disease management, whether as standalone apps or part of a BAN (Body Area Network) and remote server configuration. We describe in detail the development of a smartphone app within eCAALYX (Enhanced Complete Ambient Assisted Living Experiment, 2009-2012), an EU-funded project for older people with multiple chronic conditions. The eCAALYX Android smartphone app receives input from a BAN (a patient-wearable smart garment with wireless health sensors) and the GPS (Global Positioning System) location sensor in the smartphone, and communicates over the Internet with a remote server accessible by healthcare professionals who are in charge of the remote monitoring and management of the older patient with multiple chronic conditions. 
Finally, we briefly discuss barriers to adoption of health and healthcare smartphone apps (e.g., cost, network bandwidth and battery power efficiency, usability, privacy issues, etc.), as well as some workarounds to mitigate those barriers." ] }
1312.6349
2949731591
In this paper we take a close look at the Sybil attack and advances in defending against it, with particular emphasis on recent work. We identify three major veins of literature on defending against the attack: using trusted certification, using resource testing, and using social networks. The first vein of literature considers defending against the attack using trusted certification, which is done either by centralized certification or by distributed certification using cryptographic primitives that can replace the centralized certification entity. The second vein of literature considers defending against the attack by resource testing, which can be in the form of IP testing, network coordinates, or recurring costs, e.g., by requiring clients to solve puzzles. The third and last vein of literature mitigates the attack by combining social networks, used to bootstrap security, with tools from random walk theory that have been shown to be effective in defending against the attack under certain assumptions. Our survey and analyses of the different schemes in the three veins of literature show several shortcomings, which form several interesting directions and research questions worthy of investigation.
Related to our work, @cite_9 proposed a broad survey of solutions to the Sybil attack in general settings, including P2P overlays. Unlike our work, they emphasized classifying the literature broadly rather than defining the merits and shortcomings of each class of works. Our survey has greatly benefited from their classification, though the set of schemes reviewed here is quite different. In particular, the main technical content of our survey reviews works that were published after the survey in @cite_9 . Related to social network-based defenses, Yu has presented an intriguing tutorial and survey in @cite_41 .
{ "cite_N": [ "@cite_41", "@cite_9" ], "mid": [ "2152760593", "91828656" ], "abstract": [ "The sybil attack in distributed systems refers to individual malicious users joining the system multiple times under multiple fake identities. Sybil attacks can easily invalidate the overarching prerequisite of many fault-tolerant designs which assume that the fraction of malicious nodes is not too large. This article presents a tutorial and survey on effective sybil defenses leveraging social networks. Since this approach of sybil defenses via social networks was introduced 5 years ago, it has attracted much more attention from the research community than many other alternatives. We will first explain the intuitions and insights behind this approach, and then survey a number of specific sybil defense mechanisms based on this approach, including SybilGuard, SybilLimit, SybilInfer, Gatekeeper, SumUp, Whanau, and Ostra. We will also discuss some practical implications and deployment considerations of this approach.", "Many security mechanisms are based on specific assumptions of identity and are vulnerable to attacks when these assumptions are violated. For example, impersonation is the well-known consequence when authenticating credentials are stolen by a third party. Another attack on identity occurs when credentials for one identity are purposely shared by multiple individuals, for example to avoid paying twice for a service. Such shared accounts are common in practice: friends exchange iTunes passwords to share purchased music; BugMeNot.com is a community that shares website registration passwords; and network address translation [29] devices allow multiple users to pay for a single IP address which is then shared among them. In this paper, we survey the impact of the Sybil attack [26], an attack against identity in which an individual entity masquerades as multiple simultaneous identities. 
The Sybil attack is a fundamental problem in many systems, and it has so far resisted a universally applicable solution. Many distributed applications and everyday services assume each participating entity controls exactly one identity. When this assumption is unverifiable or unmet, the service is subject to attack and the results of the application are questionable if not incorrect. A concrete example of this would be an online voting system where one person can vote using many online identities. Notably, this problem is currently only solved if a central authority, such as the administrator of a certificate authority, can guarantee that each person has a single identity represented by one key; in practice, this is very difficult to ensure on a large scale and would require costly manual attention." ] }
1312.6430
2952728049
In this work, we propose a novel node splitting method for regression trees and incorporate it into the regression forest framework. Unlike traditional binary splitting, where the splitting rule is selected from a predefined set of binary splitting rules via trial-and-error, the proposed node splitting method first finds clusters of the training data which at least locally minimize the empirical loss without considering the input space. Then splitting rules which preserve the found clusters as much as possible are determined by casting the problem into a classification problem. Consequently, our new node splitting method enjoys more freedom in choosing the splitting rules, resulting in more efficient tree structures. In addition to the Euclidean target space, we present a variant which can naturally deal with a circular target space by the proper use of circular statistics. We apply the regression forest employing our node splitting to head pose estimation (Euclidean target space) and car direction estimation (circular target space) and demonstrate that the proposed method significantly outperforms state-of-the-art methods (38.5% and 22.5% error reduction respectively).
@cite_5 @cite_27 apply k-means clustering to the target space to automatically discretize the target space and assign pseudo-classes. They then solve the classification problem using rule induction algorithms for classification. Though somewhat more sophisticated, these approaches still suffer from problems due to discretization. The difference of our method from the approaches discussed above is that in these approaches, pseudo-classes are fixed once determined, either by a human or by a clustering algorithm, while in our approach, pseudo-classes are redetermined at each node split during regression tree training. Similarly to our method, @cite_23 converts node splitting tasks into local classification tasks by applying the EM algorithm to the joint input-output space. Since clustering is applied to the joint space, their method is not suitable for tasks with a high-dimensional input space. In fact, their experiments are limited to tasks with up to 20-dimensional input spaces, where their method performs poorly compared to baseline methods.
{ "cite_N": [ "@cite_5", "@cite_27", "@cite_23" ], "mid": [ "1527338945", "", "2066442872" ], "abstract": [ "We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.", "", "Developing regression models for large datasets that are both accurate and easy to interpret is a very important data mining problem. Regression trees with linear models in the leaves satisfy both these requirements, but thus far, no truly scalable regression tree algorithm is known. This paper proposes a novel regression tree construction algorithm (SECRET) that produces trees of high quality and scales to very large datasets. At every node, SECRET uses the EM algorithm for Gaussian mixtures to find two clusters in the data and to locally transform the regression problem into a classification problem based on closeness to these clusters. Goodness of split measures, like the gini gain, can then be used to determine the split variable and the split point much like in classification tree construction. Scalability of the algorithm can be achieved by employing scalable versions of the EM and classification tree construction algorithms. An experimental evaluation on real and artificial data shows that SECRET has accuracy comparable to other linear regression tree algorithms but takes orders of magnitude less computation time for large datasets." ] }
1312.6430
2952728049
In this work, we propose a novel node splitting method for regression trees and incorporate it into the regression forest framework. Unlike traditional binary splitting, where the splitting rule is selected from a predefined set of binary splitting rules via trial-and-error, the proposed node splitting method first finds clusters of the training data which at least locally minimize the empirical loss without considering the input space. Then splitting rules which preserve the found clusters as much as possible are determined by casting the problem into a classification problem. Consequently, our new node splitting method enjoys more freedom in choosing the splitting rules, resulting in more efficient tree structures. In addition to the Euclidean target space, we present a variant which can naturally deal with a circular target space by the proper use of circular statistics. We apply the regression forest employing our node splitting to head pose estimation (Euclidean target space) and car direction estimation (circular target space) and demonstrate that the proposed method significantly outperforms state-of-the-art methods (38.5% and 22.5% error reduction respectively).
The work most similar to our method was proposed by Chou @cite_6 , who applied a k-means-like algorithm to the target space to find a locally optimal set of partitions for regression tree learning. However, this method is limited to the case where the input is a categorical variable. Although we limit ourselves to continuous inputs, our formulation is more general and can be applied to any type of input by choosing appropriate classification methods.
{ "cite_N": [ "@cite_6" ], "mid": [ "1967148170" ], "abstract": [ "Decision trees are probably the most popular and commonly used classification model. They are recursively built following a top-down approach (from general concepts to particular examples) by repeated splits of the training dataset. When this dataset contains numerical attributes, binary splits are usually performed by choosing the threshold value which minimizes the impurity measure used as splitting criterion (e.g. C4.5 gain ratio criterion or CART Gini's index). In this paper we propose the use of multi-way splits for continuous attributes in order to reduce the tree complexity without decreasing classification accuracy. This can be done by intertwining a hierarchical clustering algorithm with the usual greedy decision tree learning." ] }
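The cluster-then-classify node splitting discussed above can be sketched as follows: run 2-means on the targets alone, ignoring the inputs, then cast the split into a classification problem by fitting a decision stump on the inputs that best preserves the two clusters. This is a deliberately simplified illustration (1-D targets, axis-aligned stumps), not the exact procedure of the paper.

```python
import numpy as np

def split_by_target_clusters(X, y, n_iter=20, seed=0):
    """Find a node split by (1) clustering the 1-D targets y with
    2-means, without looking at X, and (2) choosing the axis-aligned
    threshold on X that best reproduces the cluster labels.
    Returns (feature index, threshold, classification accuracy)."""
    rng = np.random.default_rng(seed)
    # -- step 1: 2-means in the target space ----------------------
    c = rng.choice(y, size=2, replace=False).astype(float)
    assign = np.zeros(len(y), dtype=int)
    for _ in range(n_iter):
        assign = (np.abs(y - c[0]) > np.abs(y - c[1])).astype(int)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = y[assign == k].mean()
    # -- step 2: decision stump on X preserving the clusters ------
    best = (None, None, -1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] > t).astype(int)
            # accept either labeling of the two clusters
            acc = max((pred == assign).mean(), (pred != assign).mean())
            if acc > best[2]:
                best = (j, float(t), float(acc))
    return best
```

When the targets fall into two well-separated groups that are also separable in the input space, the stump recovers the split exactly; in a full regression tree this would be applied recursively at every node.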
1312.6430
2952728049
In this work, we propose a novel node splitting method for regression trees and incorporate it into the regression forest framework. Unlike traditional binary splitting, where the splitting rule is selected from a predefined set of binary splitting rules via trial-and-error, the proposed node splitting method first finds clusters of the training data which at least locally minimize the empirical loss without considering the input space. Then splitting rules which preserve the found clusters as much as possible are determined by casting the problem into a classification problem. Consequently, our new node splitting method enjoys more freedom in choosing the splitting rules, resulting in more efficient tree structures. In addition to the Euclidean target space, we present a variant which can naturally deal with a circular target space by the proper use of circular statistics. We apply the regression forest employing our node splitting to head pose estimation (Euclidean target space) and car direction estimation (circular target space) and demonstrate that the proposed method significantly outperforms state-of-the-art methods (38.5% and 22.5% error reduction respectively).
Regression has been widely applied to head pose estimation tasks. @cite_1 used kernel partial least squares regression to learn a mapping from HOG features to head poses. Fenzi et al. @cite_29 learned a set of local feature generative models using RBF networks and estimated poses using MAP inference.
{ "cite_N": [ "@cite_29", "@cite_1" ], "mid": [ "2114178445", "2022652211" ], "abstract": [ "In this paper, we propose a method for learning a class representation that can return a continuous value for the pose of an unknown class instance using only 2D data and weak 3D labeling information. Our method is based on generative feature models, i.e., regression functions learned from local descriptors of the same patch collected under different viewpoints. The individual generative models are then clustered in order to create class generative models which form the class representation. At run-time, the pose of the query image is estimated in a maximum a posteriori fashion by combining the regression functions belonging to the matching clusters. We evaluate our approach on the EPFL car dataset and the Pointing'04 face dataset. Experimental results show that our method outperforms by 10 the state-of-the-art in the first dataset and by 9 in the second.", "Head pose estimation is a critical problem in many computer vision applications. These include human computer interaction, video surveillance, face and expression recognition. In most prior work on heads pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. We propose a method based on partial least squares (PLS) regression to estimate pose and solve the alignment problem simultaneously. The contributions of this paper are two-fold: 1) we show that the kernel version of PLS (kPLS) achieves better than state-of-the-art results on the estimation problem and 2) we develop a technique to reduce misalignment based on the learned PLS factors." ] }
1312.6430
2952728049
In this work, we propose a novel node splitting method for regression trees and incorporate it into the regression forest framework. Unlike traditional binary splitting, where the splitting rule is selected from a predefined set of binary splitting rules via trial-and-error, the proposed node splitting method first finds clusters of the training data which at least locally minimize the empirical loss without considering the input space. Then splitting rules which preserve the found clusters as much as possible are determined by casting the problem into a classification problem. Consequently, our new node splitting method enjoys more freedom in choosing the splitting rules, resulting in more efficient tree structures. In addition to the Euclidean target space, we present a variant which can naturally deal with a circular target space by the proper use of circular statistics. We apply the regression forest employing our node splitting to head pose estimation (Euclidean target space) and car direction estimation (circular target space) and demonstrate that the proposed method significantly outperforms state-of-the-art methods (38.5% and 22.5% error reduction respectively).
A few works have considered direction estimation tasks, where the direction ranges from 0 @math to 360 @math . @cite_22 modified regression forests so that the binary splitting minimizes a cost function specifically designed for direction estimation tasks. @cite_3 applied supervised manifold learning and used RBF networks to learn a mapping from a point on the learned manifold to the target space.
{ "cite_N": [ "@cite_22", "@cite_3" ], "mid": [ "2083037804", "2062516103" ], "abstract": [ "Determining the viewpoint of traffic participants provides valuable high-level attributes to constrain the interpretation of their movement, and thus allows more specific predictions of alert behavior. We present a monocular object viewpoint estimation approach that is realized by a random regression forest. In particular, we address the circular and continuous structure of the problem for training the decision trees. Our approach builds on a 2D deformable part based object detector. Using detected cars on the KITTI vision benchmark, we demonstrate performance for continuous viewpoint estimation, ground point estimation, and their integration into a high-dimensional particle filtering framework. Besides location and viewpoint of cars, the filter framework considers full monocular egomotion information of the observing platform. This demonstrates the versatility of using only monocular information processing with appropriate machine learning.", "In this paper we propose a framework for learning a regression function form a set of local features in an image. The regression is learned from an embedded representation that reflects the local features and their spatial arrangement as well as enforces supervised manifold constraints on the data. We applied the approach for viewpoint estimation on a Multiview car dataset, a head pose dataset and arm posture dataset. The experimental results show that this approach has superior results (up to 67 improvement) to the state-of-the-art approaches in very challenging datasets." ] }
1312.5785
1796848575
This paper introduces EXMOVES, learned exemplar-based features for efficient recognition of actions in videos. The entries in our descriptor are produced by evaluating a set of movement classifiers over spatial-temporal volumes of the input sequence. Each movement classifier is a simple exemplar-SVM trained on low-level features, i.e., an SVM learned using a single annotated positive space-time volume and a large number of unannotated videos. Our representation offers two main advantages. First, since our mid-level features are learned from individual video exemplars, they require minimal amount of supervision. Second, we show that simple linear classification models trained on our global video descriptor yield action recognition accuracy approaching the state-of-the-art but at orders of magnitude lower cost, since at test-time no sliding window is necessary and linear models are efficient to train and test. This enables scalable action recognition, i.e., efficient classification of a large number of different actions even in large video databases. We show the generality of our approach by building our mid-level descriptors from two different low-level feature representations. The accuracy and efficiency of the approach are demonstrated on several large-scale action recognition benchmarks.
Many approaches to human action recognition have been proposed over the last decade. Most of these techniques differ in the representation used to describe the video. An important family of methods is the class of action recognition systems using space-time interest points, such as Harris3D @cite_6 , Cuboids @cite_5 , and SIFT3D @cite_23 . Efros et al. used optical flow to represent and classify actions @cite_17 . Klaser et al. extended HOG @cite_7 to HOG3D by making use of the temporal dimension of videos @cite_19 . Volumetric features were learned for action detection in @cite_28 . Wang and Suter proposed the use of silhouettes to describe human activities @cite_25 . Recently, accurate action recognition has been demonstrated using dense trajectories and motion boundary descriptors @cite_27 .
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_5", "@cite_25", "@cite_17" ], "mid": [ "", "2029477555", "2020163092", "2024868105", "2068611653", "2108333036", "2533739470", "2169039276", "2138105460" ], "abstract": [ "", "Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.", "Local image features or interest points provide compact and abstract representations of patterns in an image. In this paper, we extend the notion of spatial interest points into the spatio-temporal domain and show how the resulting features often reflect interesting events that can be used for a compact representation of video data as well as for interpretation of spatio-temporal events. To detect spatio-temporal events, we build on the idea of the Harris and Forstner interest point operators and detect local structures in space-time where the image values have significant local variations in both space and time. 
We estimate the spatio-temporal extents of the detected events by maximizing a normalized spatio-temporal Laplacian operator over spatial and temporal scales. To represent the detected events, we then compute local, spatio-temporal, scale-invariant N-jets and classify each event with respect to its jet descriptor. For the problem of human motion analysis, we illustrate how a video representation in terms of local space-time features allows for detection of walking people in scenes with occlusions and dynamic cluttered backgrounds.", "In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.", "This paper introduces a video representation based on dense trajectories and motion boundary descriptors. Trajectories capture the local motion information of the video. A dense representation guarantees a good coverage of foreground motion as well as of the surrounding context. A state-of-the-art optical flow algorithm enables a robust and efficient extraction of dense trajectories. As descriptors we extract features aligned with the trajectories to characterize shape (point coordinates), appearance (histograms of oriented gradients) and motion (histograms of optical flow). Additionally, we introduce a descriptor based on motion boundary histograms (MBH) which rely on differential optical flow. 
The MBH descriptor shows to consistently outperform other state-of-the-art descriptors, in particular on real-world videos that contain a significant amount of camera motion. We evaluate our video representation in the context of action classification on nine datasets, namely KTH, YouTube, Hollywood2, UCF sports, IXMAS, UIUC, Olympic Sports, UCF50 and HMDB51. On all datasets our approach outperforms current state-of-the-art results.", "In this paper we introduce a 3-dimensional (3D) SIFT descriptor for video or 3D imagery such as MRI data. We also show how this new descriptor is able to better represent the 3D nature of video data in the application of action recognition. This paper will show how 3D SIFT is able to outperform previously used description methods in an elegant and efficient manner. We use a bag of words approach to represent videos, and present a method to discover relationships between spatio-temporal words in order to better describe the video data.", "A common trend in object recognition is to detect and leverage the use of sparse, informative feature points. The use of such features makes the problem more manageable while providing increased robustness to noise and pose variation. In this work we develop an extension of these ideas to the spatio-temporal case. For this purpose, we show that the direct 3D counterparts to commonly used 2D interest point detectors are inadequate, and we propose an alternative. Anchoring off of these interest points, we devise a recognition algorithm based on spatio-temporally windowed data. We present recognition results on a variety of datasets including both human and rodent behavior.", "We describe a probabilistic framework for recognizing human activities in monocular video based on simple silhouette observations in this paper. The methodology combines kernel principal component analysis (KPCA) based feature extraction and factorial conditional random field (FCRF) based motion modeling. 
Silhouette data is represented more compactly by nonlinear dimensionality reduction that explores the underlying structure of the articulated action space and preserves explicit temporal orders in projection trajectories of motions. FCRF models temporal sequences in multiple interacting ways, thus increasing joint accuracy by information sharing, with the ideal advantages of discriminative models over generative ones (e.g., relaxing independence assumption between observations and the ability to effectively incorporate both overlapping features and long-range dependencies). The experimental results on two recent datasets have shown that the proposed framework can not only accurately recognize human activities with temporal, intra-and inter-person variations, but also is considerably robust to noise and other factors such as partial occlusion and irregularities in motion styles.", "Our goal is to recognize human action at a distance, at resolutions where a whole person may be, say, 30 pixels tall. We introduce a novel motion descriptor based on optical flow measurements in a spatiotemporal volume for each stabilized human figure, and an associated similarity measure to be used in a nearest-neighbor framework. Making use of noisy optical flow measurements is the key challenge, which is addressed by treating optical flow not as precise pixel displacements, but rather as a spatial pattern of noisy measurements which are carefully smoothed and aggregated to form our spatiotemporal motion descriptor. To classify the action being performed by a human figure in a query sequence, we retrieve nearest neighbor(s) from a database of stored, annotated video sequences. We can also use these retrieved exemplars to transfer 2D 3D skeletons onto the figures in the query sequence, as well as two forms of data-based action synthesis \"do as I do\" and \"do as I say\". Results are demonstrated on ballet, tennis as well as football datasets." ] }
1312.5785
1796848575
This paper introduces EXMOVES, learned exemplar-based features for efficient recognition of actions in videos. The entries in our descriptor are produced by evaluating a set of movement classifiers over spatial-temporal volumes of the input sequence. Each movement classifier is a simple exemplar-SVM trained on low-level features, i.e., an SVM learned using a single annotated positive space-time volume and a large number of unannotated videos. Our representation offers two main advantages. First, since our mid-level features are learned from individual video exemplars, they require minimal amount of supervision. Second, we show that simple linear classification models trained on our global video descriptor yield action recognition accuracy approaching the state-of-the-art but at orders of magnitude lower cost, since at test-time no sliding window is necessary and linear models are efficient to train and test. This enables scalable action recognition, i.e., efficient classification of a large number of different actions even in large video databases. We show the generality of our approach by building our mid-level descriptors from two different low-level feature representations. The accuracy and efficiency of the approach are demonstrated on several large-scale action recognition benchmarks.
On top of these representations, a variety of classification models has been applied to recognize human actions: the bag-of-words model @cite_8 , metric learning @cite_14 , deep learning @cite_18 , and boosting-based approaches @cite_21 @cite_2 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_21", "@cite_2" ], "mid": [ "", "1593040460", "2165095705", "2142194269", "2129666410" ], "abstract": [ "", "This paper proposes a metric learning based approach for human activity recognition with two main objectives: (1) reject unfamiliar activities and (2) learn with few examples. We show that our approach outperforms all state-of-the-art methods on numerous standard datasets for traditional action classification problem. Furthermore, we demonstrate that our method not only can accurately label activities but also can reject unseen activities and can learn from few examples with high accuracy. We finally show that our approach works well on noisy YouTube videos.", "We present a novel model for human action categorization. A video sequence is represented as a collection of spatial and spatial-temporal features by extracting static and dynamic interest points. We propose a hierarchical model that can be characterized as a constellation of bags-of-features and that is able to combine both spatial and spatial-temporal features. Given a novel video sequence, the model is able to categorize human actions in a frame-by-frame basis. We test the model on a publicly available human action dataset [2] and show that our new method performs well on the classification task. We also conducted control experiments to show that the use of the proposed mixture of hierarchical models improves the classification performance over bag of feature models. An additional experiment shows that using both dynamic and static features provides a richer representation of human actions when compared to the use of a single feature type, as demonstrated by our evaluation in the classification task.", "The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. 
This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we next turn to the problem of action classification in video. We present a new method for video classification that builds upon and extends several recent ideas including local space-time features, space-time pyramids and multi-channel non-linear SVMs. The method is shown to improve state-of-the-art results on the standard KTH action dataset by achieving 91.8 accuracy. Given the inherent problem of noisy labels in automatic annotation, we particularly investigate and show high tolerance of our method to annotation errors in the training set. We finally apply the method to learning and classifying challenging action classes in movies and show promising results.", "We address recognition and localization of human actions in realistic scenarios. In contrast to the previous work studying human actions in controlled settings, here we train and test algorithms on real movies with substantial variation of actions in terms of subject appearance, motion, surrounding scenes, viewing angles and spatio-temporal extents. We introduce a new annotated human action dataset and use it to evaluate several existing methods. We in particular focus on boosted space-time window classifiers and introduce \"keyframe priming\" that combines discriminative models of human motion and shape within an action. Keyframe priming is shown to significantly improve the performance of action detection. 
We present detection results for the action class \"drinking\" evaluated on two episodes of the movie \"Coffee and Cigarettes\"." ] }
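The bag-of-words model cited above can be sketched in a few lines: quantize each local descriptor against a visual vocabulary and represent the whole video as a normalized word histogram. The tiny codebook and descriptors below are made-up toy values, not features from the cited systems.

```python
def quantize(descriptor, codebook):
    """Index of the nearest codeword (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((d - c) ** 2
                                 for d, c in zip(descriptor, codebook[i])))

def bag_of_words(descriptors, codebook):
    """Normalized histogram of codeword occurrences for one video."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1.0
    n = sum(hist) or 1.0
    return [h / n for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]           # 2 visual words
video = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)]  # 3 local descriptors
print(bag_of_words(video, codebook))          # → [0.333..., 0.666...]
```

In practice the codebook is learned by clustering (e.g. k-means) over training descriptors, and the histograms are fed to a classifier such as an SVM.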
1312.5050
2951841515
Online video-on-demand(VoD) services invariably maintain a view count for each video they serve, and it has become an important currency for various stakeholders, from viewers, to content owners, advertizers, and the online service providers themselves. There is often significant financial incentive to use a robot (or a botnet) to artificially create fake views. How can we detect the fake views? Can we detect them (and stop them) using online algorithms as they occur? What is the extent of fake views with current VoD service providers? These are the questions we study in the paper. We develop some algorithms and show that they are quite effective for this problem.
Similar issues appear whenever one wants to attract eyeballs on the Internet. @cite_17 detects fake accounts on online social networks (OSNs) by ranking users according to their perceived likelihood of being fake, based on social graph properties. When the method was deployed on the largest OSN in Spain, roughly 90% of the 200K accounts it designated as most likely to be fake indeed warranted suspension. The use of entropy functions has been proposed for anomaly detection problems before. In @cite_20 , the authors propose several information-theoretic measures for network anomaly detection; the entropy measures are applied to Unix system call data, BSM data, and network tcpdump data to illustrate their utility. @cite_14 uses two-phase entropy measures to detect network anomalies by comparing the current network traffic against a baseline distribution: a Maximum Entropy principle is applied to estimate the distribution of normal network operation from pre-labeled training data, and the relative entropy of the observed traffic is then computed with respect to that baseline. @cite_15 proposes efficient streaming algorithms that implement the entropy measurement on high-speed links with low CPU and memory requirements.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_20", "@cite_17" ], "mid": [ "2109885200", "", "2096847629", "2168508162" ], "abstract": [ "Using entropy of traffic distributions has been shown to aid a wide variety of network monitoring applications such as anomaly detection, clustering to reveal interesting patterns, and traffic classification. However, realizing this potential benefit in practice requires accurate algorithms that can operate on high-speed links, with low CPU and memory requirements. In this paper, we investigate the problem of estimating the entropy in a streaming computation model. We give lower bounds for this problem, showing that neither approximation nor randomization alone will let us compute the entropy efficiently. We present two algorithms for randomly approximating the entropy in a time and space efficient manner, applicable for use on very high speed (greater than OC-48) links. The first algorithm for entropy estimation is inspired by the structural similarity with the seminal work of for estimating frequency moments, and we provide strong theoretical guarantees on the error and resource usage. Our second algorithm utilizes the observation that the performance of the streaming algorithm can be enhanced by separating the high-frequency items (or elephants) from the low-frequency items (or mice). We evaluate our algorithms on traffic traces from different deployment scenarios.", "", "Anomaly detection is an essential component of protection mechanisms against novel attacks. We propose to use several information-theoretic measures, namely, entropy, conditional entropy, relative conditional entropy, information gain, and information cost for anomaly detection. These measures can be used to describe the characteristics of an audit data set, suggest the appropriate anomaly detection model(s) to be built, and explain the performance of the model(s). 
We use case studies on Unix system call data, BSM data, and network tcpdump data to illustrate the utilities of these measures.", "Users increasingly rely on the trustworthiness of the information exposed on Online Social Networks (OSNs). In addition, OSN providers base their business models on the marketability of this information. However, OSNs suffer from abuse in the form of the creation of fake accounts, which do not correspond to real humans. Fakes can introduce spam, manipulate online rating, or exploit knowledge extracted from the network. OSN operators currently expend significant resources to detect, manually verify, and shut down fake accounts. Tuenti, the largest OSN in Spain, dedicates 14 full-time employees in that task alone, incurring a significant monetary cost. Such a task has yet to be successfully automated because of the difficulty in reliably capturing the diverse behavior of fake and real OSN profiles. We introduce a new tool in the hands of OSN operators, which we call SybilRank. It relies on social graph properties to rank users according to their perceived likelihood of being fake (Sybils). SybilRank is computationally efficient and can scale to graphs with hundreds of millions of nodes, as demonstrated by our Hadoop prototype. We deployed SybilRank in Tuenti's operation center. We found that ∼90 of the 200K accounts that SybilRank designated as most likely to be fake, actually warranted suspension. On the other hand, with Tuenti's current user-report-based approach only ∼5 of the inspected accounts are indeed fake." ] }
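The entropy and relative-entropy measures described above can be made concrete with a short sketch: compute the Shannon entropy of an observed distribution, or its KL divergence against an assumed baseline, and flag traffic whose divergence is large. The baseline and counts are toy values, not data from the cited papers.

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def relative_entropy(p_counts, q):
    """KL divergence D(P||Q) of observed counts against a baseline
    distribution q over the same alphabet."""
    total = sum(p_counts)
    return sum((c / total) * math.log2((c / total) / q[i])
               for i, c in enumerate(p_counts) if c > 0)

baseline = [0.25, 0.25, 0.25, 0.25]  # assumed profile of normal operation
normal = [26, 24, 25, 25]            # close to baseline -> small divergence
attack = [95, 2, 2, 1]               # one source dominates -> large divergence
print(relative_entropy(normal, baseline))  # ≈ 0.0006
print(relative_entropy(attack, baseline))  # ≈ 1.64
```

A detector would then compare the divergence against a threshold calibrated on known-normal traffic.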
1312.5050
2951841515
Online video-on-demand(VoD) services invariably maintain a view count for each video they serve, and it has become an important currency for various stakeholders, from viewers, to content owners, advertizers, and the online service providers themselves. There is often significant financial incentive to use a robot (or a botnet) to artificially create fake views. How can we detect the fake views? Can we detect them (and stop them) using online algorithms as they occur? What is the extent of fake views with current VoD service providers? These are the questions we study in the paper. We develop some algorithms and show that they are quite effective for this problem.
Machine learning approaches can be applied in many anomaly detection scenarios. A survey of anomaly detection techniques is provided in @cite_13 . @cite_1 uses supervised learning to realize a mapping of traffic to applications based on labeled measurements from known applications. @cite_22 proposes an improved K-means approach to classify unlabeled data into different categories for anomaly intrusion detection. @cite_5 identifies the challenges the intrusion detection community faces in employing machine learning effectively, and provides a set of guidelines for improvement.
{ "cite_N": [ "@cite_1", "@cite_5", "@cite_13", "@cite_22" ], "mid": [ "2133473417", "1985987493", "2122646361", "2140519269" ], "abstract": [ "An accurate mapping of traffic to applications is important for a broad range of network management and measurement tasks. Internet applications have traditionally been identified using well-known default server network-port numbers in the TCP or UDP headers. However this approach has become increasingly inaccurate. An alternate, more accurate technique is to use specific application-level features in the protocol exchange to guide the identification. Unfortunately deriving the signatures manually is very time consuming and difficult.In this paper, we explore automatically extracting application signatures from IP traffic payload content. In particular we apply three statistical machine learning algorithms to automatically identify signatures for a range of applications. The results indicate that this approach is highly accurate and scales to allow online application identification on high speed links. We also discovered that content signatures still work in the presence of encryption. In these cases we were able to derive content signature for unencrypted handshakes negotiating the encryption parameters of a particular connection.", "In network intrusion detection research, one popular strategy for finding attacks is monitoring a network's activity for anomalies: deviations from profiles of normality previously learned from benign traffic, typically identified using tools borrowed from the machine learning community. However, despite extensive academic research one finds a striking gap in terms of actual deployments of such systems: compared with other intrusion detection approaches, machine learning is rarely employed in operational \"real world\" settings. We examine the differences between the network intrusion detection problem and other areas where machine learning regularly finds much more success. 
Our main claim is that the task of finding attacks is fundamentally different from these other applications, making it significantly harder for the intrusion detection community to employ machine learning effectively. We support this claim by identifying challenges particular to network intrusion detection, and provide a set of guidelines meant to strengthen future research on anomaly detection.", "Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. 
We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.", "Intrusion detection has become an indispensable defense line in the information security infrastructure. The existing signature-based intrusion detection mechanisms are often not sufficient in detecting many types of attacks. K-means is a popular anomaly intrusion detection method to classify unlabeled data into different categories. However, it suffers from the local convergence and high false alarms. In this paper, two soft computing techniques, fuzzy logic and swarm intelligence, are used to solve these problems. We proposed SFK-means approach which inherits the advantages of K-means, Fuzzy K-means and Swarm K- means, simultaneously we improve the deficiencies. The most advantages of our SFK-means algorithm are solving the local convergence problem in Fuzzy K- means and the sharp boundary problem in Swarm K- means. The experimental results on dataset KDDCup99 show that our proposed method can be effective in detecting various attacks." ] }
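A minimal sketch of the K-means-style anomaly detection idea discussed above: after clustering normal data, a point far from every centroid is flagged as anomalous. The centroids and threshold here are assumed rather than learned, and this is plain nearest-centroid scoring, not the cited SFK-means variant.

```python
import math

def nearest_centroid_dist(point, centroids):
    """Distance from a point to its closest cluster centroid."""
    return min(math.dist(point, c) for c in centroids)

def is_anomaly(point, centroids, threshold):
    """Flag points that lie far from every learned cluster."""
    return nearest_centroid_dist(point, centroids) > threshold

# Pretend these centroids came from K-means over normal traffic features.
centroids = [(0.0, 0.0), (10.0, 10.0)]
print(is_anomaly((0.5, 0.2), centroids, threshold=2.0))  # → False
print(is_anomaly((5.0, 5.0), centroids, threshold=2.0))  # → True
```

The cited work improves on this basic scheme by combining fuzzy membership and swarm optimization to avoid local convergence and sharp cluster boundaries.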
1312.4853
2952443768
Bid-centric service descriptions have the potential to offer a new cloud service provisioning model that promotes portability, diversity of choice and differentiation between providers. A bid matching model based on requirements and capabilities is presented that provides the basis for such an approach. In order to facilitate the bidding process, tenders should be specified as abstractly as possible so that the solution space is not needlessly restricted. To this end, we describe how partial TOSCA service descriptions allow for a range of diverse solutions to be proposed by multiple providers in response to tenders. Rather than adopting a lowest common denominator approach, true portability should allow for the relative strengths and differentiating features of cloud service providers to be applied to bids. With this in mind, we describe how TOSCA service descriptions could be augmented with additional information in order to facilitate heterogeneity in proposed solutions, such as the use of coprocessors and provider-specific services.
ABACUS @cite_24 is a resource management framework that allows for cloud service differentiation based on job characteristics. Each job submission has an associated budget and utility function. The utility function is used to indicate the benefit accrued by allocating the job to sets of resources. When resources become available, these parameters are used to decide which outstanding job they will be allocated to. Experimental results based on a MapReduce use case are presented.
{ "cite_N": [ "@cite_24" ], "mid": [ "2071277892" ], "abstract": [ "The emergence of the cloud computing paradigm has greatly enabled innovative service models, such as Platform as a Service (PaaS), and distributed computing frameworks, such as Map Reduce. However, most existing cloud systems fail to distinguish users with different preferences, or jobs of different natures. Consequently, they are unable to provide service differentiation, leading to inefficient allocations of cloud resources. Moreover, contentions on the resources exacerbate this inefficiency, when prioritizing crucial jobs is necessary, but impossible. Motivated by this, we propose Abacus, a generic resource management framework addressing this problem. Abacus interacts with users through an auction mechanism, which allows users to specify their priorities using budgets, and job characteristics via utility functions. Based on this information, Abacus computes the optimal allocation and scheduling of resources. Meanwhile, the auction mechanism in Abacus possesses important properties including incentive compatibility (i.e., the users' best strategy is to simply bid their true budgets and job utilities) and monotonicity (i.e., users are motivated to increase their budgets in order to receive better services). In addition, when the user is unclear about her utility function, Abacus automatically learns this function based on statistics of her previous jobs. An extensive set of experiments, running on Hadoop, demonstrate the high performance and other desirable properties of Abacus." ] }
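The core decision ABACUS makes, as summarized above, is: when a resource frees up, use each outstanding job's budget and declared utility to pick a winner. A deliberately simplified sketch of that decision (the real system runs an incentive-compatible auction and can learn utility functions; the dict layout and the budget-times-utility scoring rule here are assumptions for illustration):

```python
def pick_job(jobs, resource):
    # Pick the outstanding job whose declared utility for this resource,
    # weighted by its budget, is highest (illustrative scoring rule only).
    return max(jobs, key=lambda j: j["budget"] * j["utility"](resource))

# Hypothetical jobs: each declares a budget and a utility function.
jobs = [
    {"name": "batch",  "budget": 2.0, "utility": lambda r: 1.0 * r},
    {"name": "urgent", "budget": 1.0, "utility": lambda r: 3.0 * r},
]
winner = pick_job(jobs, 1.0)  # "urgent": 1.0 * 3.0 beats 2.0 * 1.0
```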
1312.4853
2952443768
Bid-centric service descriptions have the potential to offer a new cloud service provisioning model that promotes portability, diversity of choice and differentiation between providers. A bid matching model based on requirements and capabilities is presented that provides the basis for such an approach. In order to facilitate the bidding process, tenders should be specified as abstractly as possible so that the solution space is not needlessly restricted. To this end, we describe how partial TOSCA service descriptions allow for a range of diverse solutions to be proposed by multiple providers in response to tenders. Rather than adopting a lowest common denominator approach, true portability should allow for the relative strengths and differentiating features of cloud service providers to be applied to bids. With this in mind, we describe how TOSCA service descriptions could be augmented with additional information in order to facilitate heterogeneity in proposed solutions, such as the use of coprocessors and provider-specific services.
Shi et al. present an electronic auction platform for cloud resources based on a continuous double auction mechanism @cite_21. The platform uses trading rounds to match bids from consumers with asks from cloud service providers. A two-stage game bidding strategy is also presented. Song et al. present another market model based on combinatorial auctions @cite_15. This model allows for collaboration between service providers when creating bids. Service providers can autonomously find partners and form groups that increase their competitive power and hence improve their chances of submitting a winning bid.
{ "cite_N": [ "@cite_15", "@cite_21" ], "mid": [ "2156296696", "1738519518" ], "abstract": [ "In this paper, we present a novel combinatorial auction (CA) based trading infrastructure to enable the supply and demand of Cloud services from different Cloud providers (CP). We propose a new auction policy that considers the relationship among CPs (mutual business relationship) in bidding mechanism. In our new auction-based market model, we allow the group of service providers to publish their bids collaboratively as a single bid to the auctioneer. It gives service providers a chance to autonomously find partners and make groups. Then they can use group strategy to increase their competitive power and compete for winning the bid (s). This will reduce conflicts, as well as collaboration cost and negotiation time, among participants as compare to existing CA-based market model. We implement our proposed market model of trading service in a simulated environment and study its economic efficiency with existing model.", "Cloud computing has been an emerging model which aims at allowing customers to utilize computing resources hosted by Cloud Service Providers (CSPs). More and more consumers rely on CSPs to supply computing and storage service on the one hand, and CSPs try to attract consumers on favorable terms on the other. In such competitive cloud computing markets, pricing policies are critical to market efficiency. While CSPs often publish their prices and charge users according to the amount of resources they consume, auction mechanism is rarely applied. In fact a feasible auction mechanism is the most effective method for allocation of resources, especially double auction is more efficient and flexible for it enables buyers and sellers to enter bids and offers simultaneously. 
In this paper we bring up an electronic auction platform for cloud, and a cloud Continuous Double Auction (CDA) mechanism is formulated to match orders and facilitate trading based on the platform. Some evaluating criteria are defined to analyze the efficiency of markets and strategies. Furthermore, the selection of bidding strategies for the auction plays a very important role for each player to maximize its own profit, so we developed a novel bidding strategy for cloud CDA, BH-strategy, which is a two-stage game bidding strategy. At last we designed three simulation scenarios to compare the performance of our strategy with other dominating bidding strategies and proved that BH-strategy has better performance on surpluses, successful transactions and market efficiency. In addition, we discussed that our cloud CDA mechanism is feasible for cloud computing resource allocation." ] }
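The matching step of a continuous double auction like the one in @cite_21 can be sketched in a few lines: each round, sort bids descending and asks ascending, then trade while the best bid meets the best ask. The midpoint clearing price below is one common rule, assumed here rather than taken from the cited paper:

```python
def match_round(bids, asks):
    """One CDA trading round. bids: (buyer, price) offers to buy;
    asks: (seller, price) offers to sell. Returns executed trades."""
    bids = sorted(bids, key=lambda b: -b[1])   # highest bid first
    asks = sorted(asks, key=lambda a: a[1])    # lowest ask first
    trades, i, j = [], 0, 0
    while i < len(bids) and j < len(asks) and bids[i][1] >= asks[j][1]:
        # Midpoint pricing: split the bid-ask surplus evenly.
        trades.append((bids[i][0], asks[j][0], (bids[i][1] + asks[j][1]) / 2))
        i += 1
        j += 1
    return trades
```

With bids [("b1", 10), ("b2", 6)] and asks [("s1", 5), ("s2", 8)], a single trade clears: b1 buys from s1 at 7.5, and the remaining bid of 6 cannot meet the ask of 8.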
1312.4853
2952443768
Bid-centric service descriptions have the potential to offer a new cloud service provisioning model that promotes portability, diversity of choice and differentiation between providers. A bid matching model based on requirements and capabilities is presented that provides the basis for such an approach. In order to facilitate the bidding process, tenders should be specified as abstractly as possible so that the solution space is not needlessly restricted. To this end, we describe how partial TOSCA service descriptions allow for a range of diverse solutions to be proposed by multiple providers in response to tenders. Rather than adopting a lowest common denominator approach, true portability should allow for the relative strengths and differentiating features of cloud service providers to be applied to bids. With this in mind, we describe how TOSCA service descriptions could be augmented with additional information in order to facilitate heterogeneity in proposed solutions, such as the use of coprocessors and provider-specific services.
The MODAClouds project @cite_23 seeks to develop a model-driven approach for the design and execution of applications across multiple clouds. Under this approach, applications are developed at a high level that abstracts away the capabilities of the clouds that may be targeted during deployment. These high-level specifications are then semi-automatically translated to run on multiple cloud platforms, allowing for flexibility in terms of cost, risk and quality of service.
{ "cite_N": [ "@cite_23" ], "mid": [ "2022280117" ], "abstract": [ "Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on considering the perspective of the Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected. We argue that Model-Driven Development can be helpful in this context as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques into the process of instantiating the system into specific, possibly, multiple Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds on the long term." ] }
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into algorithms for correlation clustering. In we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to algorithms, which are particularly useful when the similarity between items is costly to compute, as it is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Sublinear clustering algorithms. Sublinear clustering algorithms for geometric data sets are known @cite_5 @cite_2 @cite_20 @cite_9 @cite_36. Many of these find implicit representations of the clustering they output. There is a natural implicit representation for most of these problems, e.g., the set of @math cluster centers. By contrast, in correlation clustering there may be no clear way to define a clustering for the whole graph based on a small set of vertices. The only sublinear-time algorithm known for correlation clustering is the aforementioned result of @cite_19; it runs in time @math, but the multiplicative constant hidden in the notation has an exponential dependence on the approximation parameter.
{ "cite_N": [ "@cite_36", "@cite_9", "@cite_19", "@cite_2", "@cite_5", "@cite_20" ], "mid": [ "1992598798", "2137691837", "", "2050516752", "2075328466", "2171394996" ], "abstract": [ "The min-sum k -clustering problem is to partition a metric space (P,d) into k clusters C 1,…,C k ⊆P such that @math is minimized. We show the first efficient construction of a coreset for this problem. Our coreset construction is based on a new adaptive sampling algorithm. With our construction of coresets we obtain two main algorithmic results. The first result is a sublinear-time (4+e)-approximation algorithm for the min-sum k-clustering problem in metric spaces. The running time of this algorithm is @math for any constant k and e, and it is o(n 2) for all k=o(log n log log n). Since the full description size of the input is Θ(n 2), this is sublinear in the input size. The fastest previously known o(log n)-factor approximation algorithm for k>2 achieved a running time of Ω(n k ), and no non-trivial o(n 2)-time algorithm was known before. Our second result is the first pass-efficient data streaming algorithm for min-sum k-clustering in the distance oracle model, i.e., an algorithm that uses poly(log n,k) space and makes 2 passes over the input point set, which arrives in form of a data stream in arbitrary order. It computes an implicit representation of a clustering of (P,d) with cost at most a constant factor larger than that of an optimal partition. Using one further pass, we can assign each point to its corresponding cluster. To develop the coresets, we introduce the concept of α -preserving metric embeddings. Such an embedding satisfies properties that the distance between any pair of points does not decrease and the cost of an optimal solution for the considered problem on input (P,d′) is within a constant factor of the optimal solution on input (P,d). 
In other words, the goal is to find a metric embedding into a (structurally simpler) metric space that approximates the original metric up to a factor of α with respect to a given problem. We believe that this concept is an interesting generalization of coresets.", "We present a novel analysis of a random sampling approach for four clustering problems in metric spaces: k-median, k-means, min-sum k-clustering, and balanced k-median. For all these problems, we consider the following simple sampling scheme: select a small sample set of input points uniformly at random and then run some approximation algorithm on this sample set to compute an approximation of the best possible clustering of this set. Our main technical contribution is a significantly strengthened analysis of the approximation guarantee by this scheme for the clustering problems.The main motivation behind our analyses was to design sublinear-time algorithms for clustering problems. Our second contribution is the development of new approximation algorithms for the aforementioned clustering problems. Using our random sampling approach, we obtain for these problems the first time approximation algorithms that have running time independent of the input size, and depending on k and the diameter of the metric space only. © 2006 Wiley Periodicals, Inc. Random Struct. Alg., 2006A preliminary extended abstract of this work appeared in Proceedings of the 31st Annual International Colloquium on Automata, Languages and Programming (ICALP), pp. 396-407, 2004.", "", "Clustering is of central importance in a number of disciplines including Machine Learning, Statistics, and Data Mining. This paper has two foci: (1) It describes how existing algorithms for clustering can benefit from simple sampling techniques arising from work in statistics [Pol84]. 
(2) It motivates and introduces a new model of clustering that is in the spirit of the “PAC (probably approximately correct)” learning model, and gives examples of efficient PAC-clustering algorithms.", "A set X of points in @math is (k,b)-clusterable if X can be partitioned into k subsets (clusters) so that the diameter (alternatively, the radius) of each cluster is at most b. We present algorithms that, by sampling from a set X, distinguish between the case that X is (k,b)-clusterable and the case that X is @math -far from being (k,b')-clusterable for any given @math and for @math . By @math -far from being (k,b')-clusterable we mean that more than @math points should be removed from X so that it becomes (k,b')-clusterable. We give algorithms for a variety of cost measures that use a sample of size independent of |X| and polynomial in k and @math . Our algorithms can also be used to find approximately good clusterings. Namely, these are clusterings of all but an @math -fraction of the points in X that have optimal (or close to optimal) cost. The benefit of our algorithms is that they construct an implicit representation of such clusterings in time independent of |X|. That is, without actually having to partition all points in X, the implicit representation can be used to answer queries concerning the cluster to which any given point belongs.", "Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic; the ground truth is really the unknown correct clustering of the data points and the real goal is to achieve low error on the data. In this work, we develop a theoretical approach to clustering from this perspective. 
In particular, motivated by recent work in learning theory that asks \"what natural properties of a similarity (or kernel) function are sufficient to be able to learn well?\" we ask \"what natural properties of a similarity function are sufficient to be able to cluster well?\" To study this question we develop a theoretical framework that can be viewed as an analog of the PAC learning model for clustering, where the object of study, rather than being a concept class, is a class of (concept, similarity function) pairs, or equivalently, a property the similarity function should satisfy with respect to the ground truth clustering. We then analyze both algorithmic and information theoretic issues in our model. While quite strong properties are needed if the goal is to produce a single approximately-correct clustering, we find that a number of reasonable properties are sufficient under two natural relaxations: (a) list clustering: analogous to the notion of list-decoding, the algorithm can produce a small list of clusterings (which a user can select from) and (b) hierarchical clustering: the algorithm's goal is to produce a hierarchy such that desired clustering is some pruning of this tree (which a user could navigate). We develop a notion of the clustering complexity of a given property (analogous to notions of capacity in learning theory), that characterizes its information-theoretic usefulness for clustering. We analyze this quantity for several natural game-theoretic and learning-theoretic properties, as well as design new efficient algorithms that are able to take advantage of them. Our algorithms for hierarchical clustering combine recent learning-theoretic approaches with linkage-style methods. We also show how our algorithms can be extended to the inductive case, i.e., by using just a constant-sized sample, as in property testing. The analysis here uses regularity-type results of [FK] and [AFKK]." ] }
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into algorithms for correlation clustering. In we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to algorithms, which are particularly useful when the similarity between items is costly to compute, as it is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
The literature on also contains algorithms with sublinear query complexity (see, e.g., @cite_42); many of them are heuristic or do not apply to correlation clustering. Ailon et al. @cite_3 obtain algorithms for @math with sublinear query complexity, but the running time of their solutions is exponential in @math.
{ "cite_N": [ "@cite_42", "@cite_3" ], "mid": [ "2160066096", "2950197242" ], "abstract": [ "Active data clustering is a novel technique for clustering of proximity data which utilizes principles from sequential experiment design in order to interleave data generation and data analysis. The proposed active data sampling strategy is based on the expected value of information, a concept rooting in statistical decision theory. This is considered to be an important step towards the analysis of large-scale data sets, because it offers a way to overcome the inherent data sparseness of proximity data. We present applications to unsupervised texture segmentation in computer vision and information retrieval in document databases.", "The disagreement coefficient of Hanneke has become a central data independent invariant in proving active learning rates. It has been shown in various ways that a concept class with low complexity together with a bound on the disagreement coefficient at an optimal solution allows active learning rates that are superior to passive learning ones. We present a different tool for pool based active learning which follows from the existence of a certain uniform version of low disagreement coefficient, but is not equivalent to it. In fact, we present two fundamental active learning problems of significant interest for which our approach allows nontrivial active learning bounds. However, any general purpose method relying on the disagreement coefficient bounds only fails to guarantee any useful bounds for these problems. The tool we use is based on the learner's ability to compute an estimator of the difference between the loss of any hypotheses and some fixed \"pivotal\" hypothesis to within an absolute error of at most @math times the" ] }
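The pivot scheme underlying much of this line of work is easy to state globally: the classic KwikCluster algorithm of Ailon, Charikar and Newman repeatedly picks a random unclustered vertex and clusters it with all similar unclustered vertices, giving an expected 3-approximation on complete instances. A minimal sketch (the local variant discussed in these records answers the same membership question per vertex without building the whole partition):

```python
import random

def kwik_cluster(vertices, similar, seed=0):
    """KwikCluster: repeatedly take a random unclustered pivot and group it
    with every unclustered vertex the similarity oracle marks as similar."""
    order = list(vertices)
    random.Random(seed).shuffle(order)
    clusters, done = [], set()
    for pivot in order:
        if pivot in done:
            continue
        cluster = {pivot} | {u for u in order
                             if u not in done and u != pivot and similar(pivot, u)}
        done |= cluster
        clusters.append(cluster)
    return clusters
```

On a similarity graph that is a disjoint union of cliques (a perfectly clusterable instance), every random permutation recovers the cliques exactly.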
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into algorithms for correlation clustering. In we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to algorithms, which are particularly useful when the similarity between items is costly to compute, as it is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Local algorithms. The following notion of locality is used in the distributed computing literature. Each vertex of a sparse graph is assigned a processor, and each processor can compute a certain function in a constant number of rounds by passing messages to its neighbours (see Suomela's survey @cite_35 ). Our algorithms are also local in this sense.
{ "cite_N": [ "@cite_35" ], "mid": [ "2138623498" ], "abstract": [ "A local algorithm is a distributed algorithm that runs in constant time, independently of the size of the network. Being highly scalable and fault tolerant, such algorithms are ideal in the operation of large-scale distributed systems. Furthermore, even though the model of local algorithms is very limited, in recent years we have seen many positive results for nontrivial problems. This work surveys the state-of-the-art in the field, covering impossibility results, deterministic local algorithms, randomized local algorithms, and local algorithms for geometric graphs." ] }
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into algorithms for correlation clustering. In we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to algorithms, which are particularly useful when the similarity between items is costly to compute, as it is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Recently, Rubinfeld et al. @cite_24 introduced a model that encompasses notions from several algorithmic subfields, such as locally decodable codes, local reconstruction and local distributed computation. Our definition fits into their framework: it corresponds to algorithms that compute a cluster label function in constant time.
{ "cite_N": [ "@cite_24" ], "mid": [ "2949195414" ], "abstract": [ "For input @math , let @math denote the set of outputs that are the \"legal\" answers for a computational problem @math . Suppose @math and members of @math are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which for a given input @math , support queries by a user to values of specified locations @math in a legal output @math . When more than one legal output @math exists for a given @math , the local computation algorithm should output in a way that is consistent with at least one such @math . Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of @math -wise independent random variables and Beck's analysis in his algorithmic approach to the Lov ' a sz Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in polylogarithmic time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying @math -SAT formulas." ] }
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into algorithms for correlation clustering. In we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to algorithms, which are particularly useful when the similarity between items is costly to compute, as it is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Finally, we point out the work of Spielman and Teng @cite_28 pertaining to local clustering algorithms. In their papers an algorithm is "local" if it can, given a vertex @math , output @math 's cluster in time nearly linear in the cluster's size. Our local clustering algorithms also have this ability (assuming, as they do, that for each vertex we are given a list of its neighbours), although the results are not comparable because @cite_28 attempt to minimize the cluster's conductance.
{ "cite_N": [ "@cite_28" ], "mid": [ "2135512436" ], "abstract": [ "We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster---a subset of vertices whose internal connections are significantly richer than its external connections---near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time of our algorithm also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and web-graphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number of edges of the graph...." ] }
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into local algorithms for correlation clustering. In local correlation clustering we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to sublinear algorithms, which are particularly useful when the similarity between items is costly to compute, as is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Testing and estimating clusterability. Our methods can also be used for quickly testing clusterability of a given input graph @math , which is related to the task of estimating the cluster edit distance, i.e., the minimum number of edge label swaps (from "+" to "-" and vice versa) needed to transform @math into a cluster graph. Note that this corresponds to the optimal cost of correlation clustering for the given input @math . Clusterability is a hereditary graph property (closed under removal and renaming of vertices), hence it can be tested with one-sided error using a constant number of queries by the powerful result of Alon and Shapira @cite_14 . Combined with the work of Fischer and Newman @cite_7 , this also yields estimators for cluster edit distance that run in time independent of the graph size. Unfortunately, the query complexity of the algorithm given by these results would be a tower exponential of height @math , where @math is the approximation parameter.
{ "cite_N": [ "@cite_14", "@cite_7" ], "mid": [ "1985538939", "2007704435" ], "abstract": [ "The problem of characterizing all the testable graph properties is considered by many to be the most important open problem in the area of property testing. Our main result in this paper is a solution of an important special case of this general problem: Call a property tester oblivious if its decisions are independent of the size of the input graph. We show that a graph property @math has an oblivious one-sided error tester if and only if @math is semihereditary. We stress that any “natural” property that can be tested (either with one-sided or with two-sided error) can be tested by an oblivious tester. In particular, all the testers studied thus far in the literature were oblivious. Our main result can thus be considered as a precise characterization of the natural graph properties, which are testable with one-sided error. One of the main technical contributions of this paper is in showing that any hereditary graph property can be tested with one-sided error. This general result contains as a special case all the previous results about testing graph properties with one-sided error. More importantly, as a special case of our main result, we infer that some of the most well-studied graph properties, both in graph theory and computer science, are testable with one-sided error. Some of these properties are the well-known graph properties of being perfect, chordal, interval, comparability, permutation, and more. None of these properties was previously known to be testable.", "Tolerant testing is an emerging topic in the field of property testing, which was defined in [M. Parnas, D. Ron, and R. Rubinfeld, J. Comput. System Sci., 72 (2006), pp. 1012-1042] and has recently become a very active topic of research. In the general setting, there exist properties that are testable but are not tolerantly testable [E. Fischer and L. 
Fortnow, Proceedings of the @math th IEEE Conference on Computational Complexity, 2005, pp. 135-140]. On the other hand, we show here that in the setting of the dense graph model, all testable properties are not only tolerantly testable (which was already implicitly proved in [N. Alon, E. Fischer, M. Krivelevich, and M. Szegedy, Combinatorica, 20 (2000), pp. 451-476] and [O. Goldreich and L. Trevisan, Random Structures Algorithms, 23 (2003), pp. 23-57]), but also admit a constant query size algorithm that estimates the distance from the property up to any fixed additive constant. In the course of the proof we develop a framework for extending Szemerédi's regularity lemma, both as a prerequisite for formulating what kind of information about the input graph will provide us with the correct estimation, and as the means for efficiently gathering this information. In particular, we construct a probabilistic algorithm that finds the parameters of a regular partition of an input graph using a constant number of queries, and an algorithm to find a regular partition of a graph using a @math circuit. This, in some ways, strengthens the results of [N. Alon, R. A. Duke, H. Lefmann, V. Rödl, and R. Yuster, J. Algorithms, 16 (1994), pp. 80-109]." ] }
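The cluster edit distance discussed in this record, i.e., the minimum number of edge-label flips turning the input into a cluster graph, equals the optimal correlation-clustering cost. A brute-force sketch of that objective (all names are illustrative; this is only feasible for tiny graphs, unlike the sublinear estimators of the cited works): count the disagreements of a candidate clustering, then minimize over all cluster assignments.

```python
from itertools import product

def correlation_cost(labels, clustering):
    """Disagreements: '+' edges split across clusters plus '-' edges kept inside one."""
    cost = 0
    for (u, v), sign in labels.items():
        same = clustering[u] == clustering[v]
        if (sign == '+') != same:  # '+' pairs should be together, '-' pairs apart
            cost += 1
    return cost

def cluster_edit_distance(vertices, labels):
    """Minimum correlation-clustering cost, by brute force over all assignments."""
    best = float('inf')
    for assignment in product(range(len(vertices)), repeat=len(vertices)):
        clustering = dict(zip(vertices, assignment))
        best = min(best, correlation_cost(labels, clustering))
    return best

# A '+' triangle with one '-' edge cannot be clustered with zero disagreements:
labels = {('a', 'b'): '+', ('b', 'c'): '+', ('a', 'c'): '-'}
print(cluster_edit_distance(['a', 'b', 'c'], labels))  # 1
```

The exponential enumeration is exactly what the tower-exponential testers and the local algorithms above avoid; the sketch only pins down the quantity being estimated.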
1312.5105
1794340831
Correlation clustering is perhaps the most natural formulation of clustering. Given @math objects and a pairwise similarity measure, the goal is to cluster the objects so that, to the best possible extent, similar objects are put in the same cluster and dissimilar objects are put in different clusters. Despite its theoretical appeal, the practical relevance of correlation clustering still remains largely unexplored, mainly due to the fact that correlation clustering requires the @math pairwise similarities as input. In this paper we initiate the investigation into local algorithms for correlation clustering. In local correlation clustering we are given the identifier of a single object and we want to return the cluster to which it belongs in some globally consistent near-optimal clustering, using a small number of similarity queries. Local algorithms for correlation clustering open the door to sublinear algorithms, which are particularly useful when the similarity between items is costly to compute, as is often the case in many practical application domains. They also imply @math distributed and streaming clustering algorithms, @math constant-time estimators and testers for cluster edit distance, and @math property-preserving parallel reconstruction algorithms for clusterability. Specifically, we devise a local clustering algorithm attaining a @math -approximation in time @math independently of the dataset size. An explicit approximate clustering for all objects can be produced in time @math (which is provably optimal). We also provide a fully additive @math -approximation with local query complexity @math and time complexity @math . The latter yields the fastest polynomial-time approximation scheme for correlation clustering known to date.
Approximation algorithms for MIN-2-CSP problems @cite_38 also give estimators for cluster edit distance. However, they provide no way of computing each variable assignment in constant time. Moreover, they use time @math to calculate all assignments, and hence do not lend themselves to sublinear-time clustering algorithms.
{ "cite_N": [ "@cite_38" ], "mid": [ "1973652906" ], "abstract": [ "In a maximum-r-constraint satisfaction problem with variables x1, x2, ... ,xn , we are given Boolean functions f1, f2, ..., fm each involving r of the n variables and are to find the maximum number of these functions that can be made true by a truth assignment to the variables. We show that for r fixed, there is an integer q ∈ O(log(1/ε)/ε^4) such that if we choose q variables (uniformly) at random, the answer to the subproblem induced on the chosen variables is, with high probability, within an additive error of εq^r of (q^r/n^r) times the answer to the original n-variable problem. The previous best result for the case of r = 2 (which includes many graph problems) was that there is an algorithm which, given the induced sub-problem on q = O(1/ε^5) variables, can find an approximation to the answer to the whole problem within additive error εn^2. For r ≥ 3, the conference version of this paper (in: Proceedings of the 34th ACM STOC, ACM, New York, 2002, pp. 232-239) and independently Andersson and Engebretsen give the first results with sample complexity q dependent only polynomially upon 1/ε. Their algorithm has a sample complexity q of O(1/ε^7). They (as also the earlier papers) however do not directly prove any relation between the answer to the sub-problem and the whole problem as we do here. Our method also differs from other results in that it is linear algebraic, rather than combinatorial in nature." ] }
1312.5138
2953204537
Ranging by Time of Arrival (TOA) of narrow-band ultrasound (NBU) has been widely used by many locating systems for its low cost and high accuracy. However, because it is hard to support code division multiple access in a narrowband signal, existing NBU-based locating systems generally need to assign an exclusive time slot to each target to avoid signal conflicts when tracking multiple targets. Because the propagation speed of ultrasound in air is slow, dividing exclusive time slots on a single channel keeps the location updating rate for each target rather low, leading to unsatisfactory tracking performance as the number of targets increases. In this paper, we investigated a new multiple-target locating method using NBU, called UltraChorus, which locates multiple targets while allowing them to send NBU signals simultaneously, i.e., in chorus mode. This can dramatically increase the location updating rate. In particular, we investigated, by both experiments and theoretical analysis, the necessary and sufficient conditions for resolving the conflicts of multiple NBU signals on a single channel, which are referred to as the conditions for chorus ranging and chorus locating. To tackle the difficulty caused by the anonymity of the measured distances, we further developed a consistent position generation algorithm and a probabilistic particle filter algorithm to label the distances by source, to generate reasonable location estimates, and to disambiguate the motion trajectories of the multiple concurrent targets based on the anonymous distance measurements. Extensive evaluations by both simulation and testbed were carried out, which verified the effectiveness of our proposed theories and algorithms.
Ranging by TOA of NBU is a very attractive technique for fine-grained indoor locating due to its high accuracy, low cost, safety to users and imperceptibility to users. It can provide positioning accuracy at the centimeter level even in 3D space, which makes it very appealing for many indoor applications. Popular ultrasound TOA-based indoor locating systems include Bat @cite_8 , Cricket @cite_3 , AUITS @cite_1 , LOSNUS @cite_9 , etc. Popular application scenarios include location-based access control @cite_7 , location-based advertising delivery @cite_2 , healthcare, etc.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_3", "@cite_2" ], "mid": [ "2155197295", "", "2017102396", "", "2112737587", "2110814029" ], "abstract": [ "With proliferation of ubiquitous computing, digital access is facing an increasing risk since unauthorized client located at any place may intrude a local server. Location Based Access Control (LBAC) is a promising solution that tries to protect the client's access within some user-defined secure zones. Although a lot of prior work has focused on LBAC, most of them suffer from coarse judgment resolution problem or considerable manual setting-up efforts. This paper proposes LOCK, a highly accurate, easy-to-use LBAC system, which uses autonomous ultrasound positioning devices and an access control engine to precisely characterize the secure zones and accurately judge the online access authority. Particularly, the ultrasound positioning device provides relative 3D coordinate of the mobile clients. Measurement-Free Calibration (MFC) is proposed to easily calibrate these positioning devices to transform their relative positioning results into an absolute coordinate system. In this coordinate system, secure zones are characterized by a Coherent Secure Zone Fitting (CSZF) method to compensate the disparity between manually measured secure zone and the secure zone seen by the positioning devices. Furthermore, a Round-Trip Judgment (RTJ) algorithm is designed to fast online determine the geographical relationship between the client's position and such secure zones. A prototype of LOCK system was implemented by defining a meeting table as secure zone to control the client's access to a FTP server. Experiment results show that the system can be easily set up and can control the client's access with centimeter level judgment resolution.", "", "This paper presents an indoor positioning system called LOSNUS (LOcalization of Sensor Nodes by Ultra-Sound). 
It offers high accuracy of ∼10 mm, a locating rate up to ∼10 cycles/s and is applicable for both tracking mobile and locating static devices. LOSNUS is mainly designed to localize static devices especially in a wireless sensor network (WSN) with numerously deployed sensor actuator devices which enables substantially improving a lot of aspects of applications, e.g. network integration of nodes, supplying node locations to application programs, supervising locations with respect to accidentally dislocating, automatic setup and detecting faking of node locations. In order to deal with the demand of locating static devices, the system is optimized for cheap implementation and on the other hand for a high resolution of locations. Concept and basic operation, realization of system components and low-cost receiver principles, improved system performance and setup of a test system will be discussed in this paper.", "", "This paper presents the design, implementation, and evaluation of Cricket , a location-support system for in-building, mobile, location-dependent applications. It allows applications running on mobile and static nodes to learn their physical location by using listeners that hear and analyze information from beacons spread throughout the building. Cricket is the result of several design goals, including user privacy, decentralized administration, network heterogeneity, and low cost. Rather than explicitly tracking user location, Cricket helps devices learn where they are and lets them decide whom to advertise this information to; it does not rely on any centralized management or control and there is no explicit coordination between beacons; it provides information to devices regardless of their type of network connectivity; and each Cricket device is made from off-the-shelf components and costs less than U.S. $10.
We describe the randomized algorithm used by beacons to transmit information, the use of concurrent radio and ultrasonic signals to infer distance, the listener inference algorithms to overcome multipath and interference, and practical beacon configuration and positioning techniques that improve accuracy. Our experience with Cricket shows that several location-dependent applications such as in-building active maps and device control can be developed with little effort or manual configuration.", "This paper proposes PosPush, a highly accurate location based information delivery system, which utilizes the high resolution 3D locations obtained from ultrasonic positioning devices to efficiently deliver the location based information to users. This system is designed especially for applications where a 3D space is partitioned into a set of closely neighboring small zones, and as a user moves into one of the zones, the corresponding information will be timely transferred to the user. Although a lot of prior work has been focused on Location based Information Delivery (LIDS), most of them are based on very coarse location data to provide proximity-based information delivery. They cannot be exploited for the above-mentioned applications due to the lack of mechanisms to identify the precise zone and to determine the appropriate delivery time. In order to identify precise zones, PosPush defines a zone model by a set of key location points extracted by a location clustering algorithm, and the zone model is used for online zone identification based on hierarchical searching. In order to determine the appropriate delivery time, an Adaptive Window Change Detection (AWCD) method is proposed to detect the fast change along the location stream. Finally, we describe a prototypical application which deliveries information of commodities on a shelf based on PosPush, and verify the feasibility and effectiveness of our proposed system." ] }
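The TOA-based locating described in this record reduces to multilateration: measured times of arrival are converted to ranges, and the position is solved from the anchors' known coordinates. A least-squares sketch under idealized assumptions (noise-free ranges, known anchor positions; all names are illustrative and not taken from the cited systems):

```python
import numpy as np

def toa_position(anchors, distances):
    """Least-squares position from anchor coordinates and TOA-derived ranges.

    Linearizes ||x - a_i||^2 = d_i^2 by subtracting the first anchor's
    equation, which leaves the linear system 2 (a_i - a_0) . x = b_i.
    """
    a0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three anchors in a plane and ideal (noise-free) ranges to a target at (1, 1)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
target = np.array([1.0, 1.0])
ranges = np.linalg.norm(anchors - target, axis=1)
print(toa_position(anchors, ranges))  # ~ [1. 1.]
```

With noisy ranges or extra anchors the same `lstsq` call returns the best-fit position; real systems additionally convert TOA to range via the (temperature-dependent) speed of sound.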
1312.5138
2953204537
Ranging by Time of Arrival (TOA) of narrow-band ultrasound (NBU) has been widely used by many locating systems for its low cost and high accuracy. However, because it is hard to support code division multiple access in a narrowband signal, existing NBU-based locating systems generally need to assign an exclusive time slot to each target to avoid signal conflicts when tracking multiple targets. Because the propagation speed of ultrasound in air is slow, dividing exclusive time slots on a single channel keeps the location updating rate for each target rather low, leading to unsatisfactory tracking performance as the number of targets increases. In this paper, we investigated a new multiple-target locating method using NBU, called UltraChorus, which locates multiple targets while allowing them to send NBU signals simultaneously, i.e., in chorus mode. This can dramatically increase the location updating rate. In particular, we investigated, by both experiments and theoretical analysis, the necessary and sufficient conditions for resolving the conflicts of multiple NBU signals on a single channel, which are referred to as the conditions for chorus ranging and chorus locating. To tackle the difficulty caused by the anonymity of the measured distances, we further developed a consistent position generation algorithm and a probabilistic particle filter algorithm to label the distances by source, to generate reasonable location estimates, and to disambiguate the motion trajectories of the multiple concurrent targets based on the anonymous distance measurements. Extensive evaluations by both simulation and testbed were carried out, which verified the effectiveness of our proposed theories and algorithms.
Another approach is to explore broadband ultrasound. Compared to the narrowband version, broadband ultrasound requires the transducer @cite_6 to have a better frequency response. A broadband ultrasound wave can carry the identity of the target and thus support multiple targets. Furthermore, if the wave is encoded with an orthogonal code @cite_0 , two waves can be decoded separately even when they overlap. But broadband locating needs high-cost transducers, and the signal is more sensitive to Doppler effects. To the best of our knowledge, very few results have been reported for locating in chorus mode, because the collision problem of NBUs is generally hard to tackle. In this paper, we investigate conditions and algorithms to resolve this challenge.
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2001715459", "2171785275" ], "abstract": [ "We present an efficient CDMA detection core suited for multiuser indoor acoustic positioning. An ultrasonic multi-code despreader is proposed, allowing simultaneous broadband acoustic ranging signals to be processed in real time by embedded sensors. The ranging performance is characterised using a dataset gathered from a real deployment of ultrasonic devices and is shown to be favourable. The proposed core can be used as a basis for more sophisticated receivers, such as those capable of detecting heavily Doppler-shifted signals.", "Ultrasonic location systems are a popular solution for the provision of fine-grained indoor positioning data. Applications include enhanced routing for wireless networks, computer-aided navigation, and location-sensitive device behavior. However, current ultrasonic location systems suffer from limitations due to their use of narrowband transducers, This paper investigates the use of broadband ultrasound for indoor positioning systems. Broadband ultrasonic transmitter and receiver units have been developed and characterized. The utilization of these units to construct two positioning systems with different architectures serves to highlight and affirm the concrete, practical benefits of broadband ultrasound for locating people and devices indoors." ] }
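The orthogonal-code decoding of overlapped broadband waves mentioned above can be illustrated with a matched filter: correlating the received signal against each known spreading code produces a sharp peak at that code's arrival time, even when two transmissions overlap. A sketch in which random ±1 codes stand in for properly designed orthogonal (e.g., Gold-type) codes; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random +/-1 spreading codes stand in for designed orthogonal codes
N = 127
code_a = rng.choice([-1.0, 1.0], size=N)
code_b = rng.choice([-1.0, 1.0], size=N)

# Overlapped reception: code_a arrives at sample 40, code_b at sample 55, plus noise
rx = np.zeros(300)
rx[40:40 + N] += code_a
rx[55:55 + N] += code_b
rx += 0.2 * rng.standard_normal(rx.size)

def arrival(rx, code):
    """Matched filter: the sliding correlation peaks where the code begins."""
    return int(np.argmax(np.correlate(rx, code, mode='valid')))

print(arrival(rx, code_a), arrival(rx, code_b))
```

The autocorrelation peak (height N) dominates both the cross-correlation with the other code and the noise, which is precisely why broadband CDMA ranging tolerates collisions that narrowband TOA cannot.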
1312.5111
2949180239
In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. This approach uses the frequency and recency of previous tag assignments to estimate the probability of reusing a particular tag. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how adding a time-dependent component outperforms conventional "most popular tags" approaches and another existing and very effective but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We show how effective principles for information retrieval can be designed and implemented if human memory processes are taken into account.
Recent years have shown that tagging is an important feature of the Social Web, supporting users with a simple mechanism to collaboratively organize and find content @cite_12 . Although tagging has been shown to significantly improve search @cite_30 (in particular via tags provided by the individual), it is also known that users are typically lazy in providing tags, for instance, for their bookmarked resources. It is therefore not surprising that recent research has taken up this challenge to support the individual in her tag application process in the form of personalized tag recommenders. To date, two kinds of approaches have been established -- graph-based and content-based tag recommender systems @cite_32 . In our work we focus on graph-based approaches.
{ "cite_N": [ "@cite_30", "@cite_32", "@cite_12" ], "mid": [ "", "2274024856", "2099608293" ], "abstract": [ "", "", "Recent research provides evidence for the presence of emergent semantics in collaborative tagging systems. While several methods have been proposed, little is known about the factors that influence the evolution of semantic structures in these systems. A natural hypothesis is that the quality of the emergent semantics depends on the pragmatics of tagging: Users with certain usage patterns might contribute more to the resulting semantics than others. In this work, we propose several measures which enable a pragmatic differentiation of taggers by their degree of contribution to emerging semantic structures. We distinguish between categorizers, who typically use a small set of tags as a replacement for hierarchical classification schemes, and describers, who are annotating resources with a wealth of freely associated, descriptive keywords. To study our hypothesis, we apply semantic similarity measures to 64 different partitions of a real-world and large-scale folksonomy containing different ratios of categorizers and describers. Our results not only show that \"verbose\" taggers are most useful for the emergence of tag semantics, but also that a subset containing only 40 of the most 'verbose' taggers can produce results that match and even outperform the semantic precision obtained from the whole dataset. Moreover, the results suggest that there exists a causal link between the pragmatics of tagging and resulting emergent semantics. This work is relevant for designers and analysts of tagging systems interested (i) in fostering the semantic development of their platforms, (ii) in identifying users introducing \"semantic noise\", and (iii) in learning ontologies." ] }
1312.5111
2949180239
In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. This approach uses the frequency and recency of previous tag assignments to estimate the probability of reusing a particular tag. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how adding a time-dependent component outperforms conventional "most popular tags" approaches and another existing and very effective but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We show how effective principles for information retrieval can be designed and implemented if human memory processes are taken into account.
Although the latter approaches perform reasonably well, they are computationally expensive compared to simple "most popular tags" approaches. Furthermore, they ignore recent observations made in social tagging systems, such as the variation of individual tagging behavior over time @cite_21 . To that end, recent research has made first promising steps towards more accurate graph-based models that also account for the variable of time @cite_11 @cite_24 . These approaches have been shown to outperform some of the current state-of-the-art tag recommender algorithms.
{ "cite_N": [ "@cite_24", "@cite_21", "@cite_11" ], "mid": [ "2152674497", "1556257772", "2028373970" ], "abstract": [ "The emergence of social tagging systems enables users to organize and share their interested resources. In order to ease the human-computer interaction with such systems, extensive researches have been done on how to recommend personalized tags for rescources. These researches mainly consider user profile, resource content, or the graph structure of users, resources and tags. Users' preferences towards different tags are usually regarded as invariable against time, neglecting the switch of users' short-term interests. In this paper, we examine the temporal factor in users' tagging behaviors by investigating the occurrence patterns of tags and then incorporate this into a novel method for ranking tags. To assess a tag for a user-resource pair, we first consider the user's general interest in it, then we calculate its recurrence probability based on the temporal usage pattern, and at last we consider its tag relevance to the content of the post. Experiments conducted on real datasets from Bibsonomy and Delicious demonstrate that our method outperforms other temporal models and state-of-the-art tag prediction methods.", "Collaborative tagging systems are now deployed extensively to help users share and organize resources. Tag prediction and recommendation systems generally model user behavior as research has shown that accuracy can be significantly improved by modeling users' preferences. However, these preferences are usually treated as constant over time, neglecting the temporal factor within users' interests. On the other hand, little is known about how this factor may influence prediction in social bookmarking systems. In this paper, we investigate the temporal dynamics of user interests in tagging systems and propose a user-tag-specific temporal interests model for tracking users' interests over time. 
Additionally, we analyze the phenomenon of topic switches in social bookmarking systems, showing that a temporal interests model can benefit from the integration of topic switch detection and that temporal characteristics of social tagging systems are different from traditional concept drift problems. We conduct experiments on three public datasets, demonstrating the importance of personalization and user-tag specialization in tagging systems. Experimental results show that our method can outperform state-of-the-art tag prediction algorithms. We also incorporate our model within existing content-based methods yielding significant improvements in performance.", "In social bookmarking systems, existing methods in tag prediction have shown that the performance of prediction can be significantly improved by modeling users' preferences. However, these preferences are usually treated as constant over time, neglecting the temporal factor within users' behaviors. In this paper, we study the problem of session-like behavior in social tagging systems and demonstrate that the predictive performance can be improved by considering sessions. Experiments, conducted on three public datasets, show that our session-based method can outperform baselines and two state-of-the-art algorithms significantly." ] }
1312.5111
2949180239
In this paper, we introduce a tag recommendation algorithm that mimics the way humans draw on items in their long-term memory. This approach uses the frequency and recency of previous tag assignments to estimate the probability of reusing a particular tag. Using three real-world folksonomies gathered from bookmarks in BibSonomy, CiteULike and Flickr, we show how adding a time-dependent component outperforms conventional "most popular tags" approaches and another existing and very effective but less theory-driven, time-dependent recommendation mechanism. By combining our approach with a simple resource-specific frequency analysis, our algorithm outperforms other well-established algorithms, such as FolkRank, Pairwise Interaction Tensor Factorization and Collaborative Filtering. We conclude that our approach provides an accurate and computationally efficient model of a user's temporal tagging behavior. We show how effective principles for information retrieval can be designed and implemented if human memory processes are taken into account.
Related to the latter strand of research, we present in this paper a novel graph-based tag recommender mechanism that uses the BLL equation, which is based on the principles of a popular model of human cognition called ACT-R (e.g., @cite_10 ). We show that the approach is not only extremely simple but also that it outperforms current state-of-the-art graph-based (e.g., @cite_2 @cite_4 @cite_15 ) and the leading time-based @cite_24 tag recommender approaches.
{ "cite_N": [ "@cite_4", "@cite_24", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "", "2152674497", "2095419287", "1549874165", "2136518234" ], "abstract": [ "", "The emergence of social tagging systems enables users to organize and share their interested resources. In order to ease the human-computer interaction with such systems, extensive researches have been done on how to recommend personalized tags for rescources. These researches mainly consider user profile, resource content, or the graph structure of users, resources and tags. Users' preferences towards different tags are usually regarded as invariable against time, neglecting the switch of users' short-term interests. In this paper, we examine the temporal factor in users' tagging behaviors by investigating the occurrence patterns of tags and then incorporate this into a novel method for ranking tags. To assess a tag for a user-resource pair, we first consider the user's general interest in it, then we calculate its recurrence probability based on the temporal usage pattern, and at last we consider its tag relevance to the content of the post. Experiments conducted on real datasets from Bibsonomy and Delicious demonstrate that our method outperforms other temporal models and state-of-the-art tag prediction methods.", "Collaborative tagging services (folksonomies) have been among the stars of the Web 2.0 era. They allow their users to label diverse resources with freely chosen keywords (tags). Our studies of two real-world folksonomies unveil that individual users develop highly personalized vocabularies of tags. While these meet individual needs and preferences, the considerable differences between personal tag vocabularies (personomies) impede services such as social search or customized tag recommendation. In this paper, we introduce a novel user-centric tag model that allows us to derive mappings between personal tag vocabularies and the corresponding folksonomies. 
Using these mappings, we can infer the meaning of user-assigned tags and can predict choices of tags a user may want to assign to new items. Furthermore, our translational approach helps in reducing common problems related to tag ambiguity, synonymous tags, or multilingualism. We evaluate the applicability of our method in tag recommendation and tag-based social search. Extensive experiments show that our translational model improves the prediction accuracy in both scenarios.", "Collaborative tagging systems allow users to assign keywords--so called \"tags\"--to resources. Tags are used for navigation, finding resources and serendipitous browsing and thus provide an immediate benefit for users. These systems usually include tag recommendation mechanisms easing the process of finding good tags for a resource, but also consolidating the tag vocabulary across users. In practice, however, only very basic recommendation strategies are applied. In this paper we evaluate and compare two recommendation algorithms on large-scale real life datasets: an adaptation of user-based collaborative filtering and a graph-based recommender built on top of FolkRank. We show that both provide better results than non-personalized baseline methods. Especially the graph-based recommender outperforms existing methods considerably.", "Adaptive control of thought–rational (ACT–R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT–R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. 
Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert." ] }
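The base-level learning (BLL) equation referenced above estimates a tag's activation from the frequency and recency of its past uses: the activation is the log of a power-law-decayed sum over previous tag assignments. A minimal sketch follows; the decay parameter d = 0.5 is the conventional ACT-R default, and the timestamps are toy data, not taken from the cited papers:

```python
import math

def bll_activation(use_timestamps, now, d=0.5):
    """Base-level activation of a tag: ln(sum over past uses of (now - t)^-d).

    Recent uses contribute more than old ones, and frequent tags
    accumulate activation across uses (ACT-R base-level learning).
    """
    return math.log(sum((now - t) ** -d for t in use_timestamps))

def rank_tags(tag_history, now, d=0.5):
    """Rank a user's tags by BLL activation (higher = more likely reused)."""
    scores = {tag: bll_activation(ts, now, d) for tag, ts in tag_history.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: "python" is used often and recently, "misc" once long ago.
history = {"python": [1.0, 5.0, 9.0], "misc": [2.0]}
print(rank_tags(history, now=10.0))  # "python" ranks first
```

In a full recommender this recency/frequency score would be combined with a resource-specific component, as the cited work does with a frequency analysis of tags on the target resource.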
1312.4967
2034643056
Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model.
The problem of estimating the body shape and posture of humans occurs in many applications and has been researched extensively in computer vision and computer graphics. Many methods focus on estimating the posture of a subject in an image or a 3D scan aiming to predict the body shape (e.g. @cite_3 @cite_2 @cite_15 ). Other methods aim to track a human shape that may include detailed clothing across a sequence of images or 3D scans in order to capture the acquired motion without using markers (e.g. @cite_4 @cite_5 @cite_21 @cite_28 @cite_14 ).
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_28", "@cite_21", "@cite_3", "@cite_2", "@cite_5", "@cite_15" ], "mid": [ "2110434318", "2165258384", "2109752307", "2122578066", "2154750607", "2168415715", "2062192902", "2092146246" ], "abstract": [ "This paper proposes a method for capturing the performance of a human or an animal from a multi-view video sequence. Given an articulated template model and silhouettes from a multi-view image sequence, our approach recovers not only the movement of the skeleton, but also the possibly non-rigid temporal deformation of the 3D surface. While large scale deformations or fast movements are captured by the skeleton pose and approximate surface skinning, true small scale deformations or non-rigid garment motion are captured by fitting the surface to the silhouette. We further propose a novel optimization scheme for skeleton-based pose estimation that exploits the skeleton's tree structure to split the optimization problem into a local one and a lower dimensional global one. We show on various sequences that our approach can capture the 3D motion of animals and humans accurately even in the case of rapid movements and wide apparel like skirts.", "Human motion capture is frequently used to study musculoskeletal biomechanics and clinical problems, as well as to provide realistic animation for the entertainment industry. The most popular technique for human motion capture uses markers placed on the skin, despite some important drawbacks including the impediment to the motion by the presence of skin markers and relative movement between the skin where the markers are placed and the underlying bone. The latter makes it difficult to estimate the motion of the underlying bone, which is the variable of interest for biomechanical and clinical applications. A model-based markerless motion capture system is presented in this study, which does not require the placement of any markers on the subject's body.
The described method is based on visual hull reconstruction and an a priori model of the subject. A custom version of adapted fast simulated annealing has been developed to match the model to the visual hull. The tracking capability and a quantitative validation of the method were evaluated in a virtual environment for a complete gait cycle. The obtained mean errors, for an entire gait cycle, for knee and hip flexion are respectively 1.5° (±3.9°) and 2.0° (±3.0°), while for knee and hip adduction they are respectively 2.0° (±2.3°) and 1.1° (±1.7°). Results for the ankle and shoulder joints are also presented. Experimental results captured in a gait laboratory with a real subject are also shown to demonstrate the effectiveness and potential of the presented method in a clinical environment.", "This paper proposes a new marker-less approach to capturing human performances from multi-view video. Our algorithm can jointly reconstruct spatio-temporally coherent geometry, motion and textural surface appearance of actors that perform complex and rapid moves. Furthermore, since our algorithm is purely mesh-based and makes as few as possible prior assumptions about the type of subject being tracked, it can even capture performances of people wearing wide apparel, such as a dancer wearing a skirt. To serve this purpose our method efficiently and effectively combines the power of surface- and volume-based shape deformation techniques with a new mesh-based analysis-through-synthesis framework. This framework extracts motion constraints from video and makes the laser-scan of the tracked subject mimic the recorded performance. Also small-scale time-varying shape detail is recovered by applying model-guided multi-view stereo to refine the model surface.
Our method delivers captured performance data at high level of detail, is highly versatile, and is applicable to many complex types of scenes that could not be handled by alternative marker-based or marker-free recording techniques.", "Details in mesh animations are difficult to generate but they have great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints and an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. Captured meshes are in full correspondence making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning.", "We address the problem of human motion tracking by registering a surface to 3-D data. We propose a method that iteratively computes two things: Maximum likelihood estimates for both the kinematic and free-motion parameters of an articulated object, as well as probabilities that the data are assigned either to an object part, or to an outlier cluster. We introduce a new metric between observed points and normals on one side, and a parameterized surface on the other side, the latter being defined as a blending over a set of ellipsoids. We claim that this metric is well suited when one deals with either visual-hull or visual-shape observations. 
We illustrate the method by tracking human motions using sparse visual-shape data (3-D surface points and normals) gathered from imperfect silhouettes.", "In recent years, depth cameras have become a widely available sensor type that captures depth images at real-time frame rates. Even though recent approaches have shown that 3D pose estimation from monocular 2.5D depth images has become feasible, there are still challenging problems due to strong noise in the depth data and self-occlusions in the motions being captured. In this paper, we present an efficient and robust pose estimation framework for tracking full-body motions from a single depth image stream. Following a data-driven hybrid strategy that combines local optimization with global retrieval techniques, we contribute several technical improvements that lead to speed-ups of an order of magnitude compared to previous approaches. In particular, we introduce a variant of Dijkstra's algorithm to efficiently extract pose features from the depth data and describe a novel late-fusion scheme based on an efficiently computable sparse Hausdorff distance to combine local and global pose estimates. Our experiments show that the combination of these techniques facilitates real-time tracking with stable results even for fast and complex motions, making it applicable to a wide range of inter-active scenarios.", "We present a novel algorithm to jointly capture the motion and the dynamic shape of humans from multiple video streams without using optical markers. Instead of relying on kinematic skeletons, as traditional motion capture methods, our approach uses a deformable high-quality mesh of a human as scene representation. It jointly uses an image-based 3D correspondence estimation algorithm and a fast Laplacian mesh deformation scheme to capture both motion and surface deformation of the actor from the input video footage. 
As opposed to many related methods, our algorithm can track people wearing wide apparel, it can straightforwardly be applied to any type of subject, e.g. animals, and it preserves the connectivity of the mesh over time. We demonstrate the performance of our approach using synthetic and captured real-world video sequences and validate its accuracy by comparison to the ground truth.", "We present an approach for modeling the human body by Sums of spatial Gaussians (SoG), allowing us to perform fast and high-quality markerless motion capture from multi-view video sequences. The SoG model is equipped with a color model to represent the shape and appearance of the human and can be reconstructed from a sparse set of images. Similar to the human body, we also represent the image domain as SoG that models color consistent image blobs. Based on the SoG models of the image and the human body, we introduce a novel continuous and differentiable model-to-image similarity measure that can be used to estimate the skeletal motion of a human at 5–15 frames per second even for many camera views. In our experiments, we show that our method, which does not rely on silhouettes or training data, offers a good balance between accuracy and computational cost." ] }
1312.4967
2034643056
Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model.
Statistical shape models learn a probability distribution from a database of 3D shapes. To perform statistics on the shapes, the shapes need to be in full correspondence. @cite_1 proposed a method to compute correspondences between human bodies in a standard posture and to learn a shape model using principal component analysis (PCA). This technique has the drawback that small variations in posture are not separated from shape variations. To remedy this, multiple follow-up methods have been proposed. @cite_8 analyze body shape and posture jointly by performing PCA on a rotation-invariant encoding of the model's triangles. While this method models different postures, it cannot directly be constrained to keep a constant body shape across different poses of the same subject captured in multiple postures. With the goal of analyzing body shape independently of posture, @cite_18 propose to perform PCA on a shape representation based on localized Laplace coordinates of the mesh. In this work, we combine this shape space with a skeleton-based deformation model that allows the body posture to be varied.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_8" ], "mid": [ "2267768293", "2099011563", "1993846356" ], "abstract": [ "Statistical shape analysis is a tool that allows to quantify the shape variability of a population of shapes. Traditional tools to perform statistical shape analysis compute variations that reflect both shape and posture changes simultaneously. In many applications, such as ergonomic design applications, we are only interested in shape variations. With traditional tools, it is not straightforward to separate shape and posture variations. To overcome this problem, we propose an approach to perform statistical shape analysis in a posture-invariant way. The approach is based on a local representation that is obtained using the Laplace operator.", "We develop a novel method for fitting high-resolution template meshes to detailed human body range scans with sparse 3D markers. We formulate an optimization problem in which the degrees of freedom are an affine transformation at each template vertex. The objective function is a weighted combination of three measures: proximity of transformed vertices to the range data, similarity between neighboring transformations, and proximity of sparse markers at corresponding locations on the template and target surface. We solve for the transformations with a non-linear optimizer, run at two resolutions to speed convergence. We demonstrate reconstruction and consistent parameterization of 250 human body models. 
With this parameterized set, we explore a variety of applications for human body modeling, including: morphing, texture transfer, statistical analysis of shape, model fitting from sparse markers, feature analysis to modify multiple correlated parameters (such as the weight and height of an individual), and transfer of surface detail and animation controls from a template to fitted models.", "A circuit for controlling a display panel identifying malfunctions in an engine generator receives a plurality of electrical signals from the engine generator, each of which identifies a particular trouble. The electrical signal may be produced by closing a switch. It is caused to operate a latch that lights a light associated with the particular malfunction. Indications of other malfunctions are suppressed until the circuit is reset. A manual reset tests all lights and then leaves them off ready to respond. A power-up reset does not test lights but leaves all lights off ready to respond. The circuit is rendered especially appropriate for military use by hardening against radiation and against pulses of electromagnetic interference." ] }
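The PCA shape-space construction described above can be sketched as follows: each corresponded mesh is flattened into a vector of vertex coordinates, PCA yields a low-dimensional basis, and new body shapes are synthesized as the mean plus a weighted sum of principal components. The random meshes here are stand-ins for a database of registered scans:

```python
import numpy as np

def build_pca_shape_space(shapes, n_components):
    """shapes: (n_subjects, n_vertices, 3) meshes in full correspondence.
    Returns the mean shape vector and the top principal components."""
    X = shapes.reshape(len(shapes), -1)   # flatten each mesh to one vector
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]        # components as orthonormal rows

def synthesize(mean, components, coeffs):
    """Reconstruct a mesh from low-dimensional PCA coefficients."""
    vec = mean + coeffs @ components
    return vec.reshape(-1, 3)

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 100, 3))    # 20 toy "subjects", 100 vertices each
mean, comps = build_pca_shape_space(shapes, n_components=5)
new_body = synthesize(mean, comps, coeffs=rng.normal(size=5))
print(new_body.shape)  # (100, 3)
```

The drawback noted in the text follows directly from this construction: if the training scans mix posture and shape changes, the principal components entangle both, which is what the posture-invariant representations aim to avoid.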
1312.4967
2034643056
Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model.
Several methods have been proposed to decorrelate the variations due to body shape and posture changes, which allow body shape and posture to be varied independently. The most popular of these models is the SCAPE model @cite_20 , which combines a body shape model computed by performing PCA on a population of 3D models captured in a standard posture with a posture model computed by analyzing near-rigid body parts (corresponding to bones) of a single body shape in multiple postures. @cite_27 recently proposed to improve this model by adding multi-linear shape models for each part of the SCAPE model, thereby enabling more realistic deformation behaviour near joints of the body. Neophytou and Hilton @cite_29 proposed an alternative statistical model that consists of a shape space learned as a PCA space on normalized postures and a pose space that is learned from different subjects in different postures.
{ "cite_N": [ "@cite_27", "@cite_29", "@cite_20" ], "mid": [ "2026861142", "2080666679", "1989191365" ], "abstract": [ "In this paper, we present a novel approach to model 3D human body with variations on both human shape and pose, by exploring a tensor decomposition technique. 3D human body modeling is important for 3D reconstruction and animation of realistic human body, which can be widely used in Tele-presence and video game applications. It is challenging due to a wide range of shape variations over different people and poses. The existing SCAPE model is popular in computer vision for modeling 3D human body. However, it considers shape and pose deformations separately, which is not accurate since pose deformation is person-dependent. Our tensor-based model addresses this issue by jointly modeling shape and pose deformations. Experimental results demonstrate that our tensor-based model outperforms the SCAPE model quite significantly. We also apply our model to capture human body using Microsoft Kinect sensors with excellent results.", "In this paper we present a framework for generating arbitrary human models and animating them realistically given a few intuitive parameters. Shape and pose space deformation (SPSD) is introduced as a technique for modeling subject specific pose induced deformations from whole-body registered 3D scans. By exploiting examples of different people in multiple poses we are able to realistically animate a novel subject by interpolating and extrapolating in a joint shape and pose parameter space. Our results show that we can produce plausible animations of new people and that greater detail is achieved by incorporating subject specific pose deformations. 
We demonstrate the application of SPSD to produce subject specific animation sequences driven by RGB-Z performance capture.", "We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appear in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person." ] }
1312.4967
2034643056
Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model.
A notable exception to using the SCAPE model is the approach by @cite_25 , which uses a rotation-invariant shape space @cite_8 to estimate body shapes under clothing. Recently, @cite_22 proposed an approach based on localized manifold learning that was shown to lead to accurate body shape estimates. While these methods have been shown to perform well on static scans, they are less suitable for predicting body shape and posture from motion sequences, as the body shape cannot be controlled independently of posture in these shape spaces.
{ "cite_N": [ "@cite_22", "@cite_25", "@cite_8" ], "mid": [ "2048491268", "2074407824", "1993846356" ], "abstract": [ "This paper proposes a method for estimating the 3D body shape of a person with robustness to clothing. We formulate the problem as optimization over the manifold of valid depth maps of body shapes learned from synthetic training data. The manifold itself is represented using a novel data structure, a Multi-Resolution Manifold Forest (MRMF), which contains vertical edges between tree nodes as well as horizontal edges between nodes across trees that correspond to overlapping partitions. We show that this data structure allows both efficient localization and navigation on the manifold for on-the-fly building of local linear models (manifold charting). We demonstrate shape estimation of clothed users, showing significant improvement in accuracy over global shape models and models using pre-computed clusters. We further compare the MRMF with alternative manifold charting methods on a public dataset for estimating 3D motion from noisy 2D marker observations, obtaining state-of-the-art results.", "The paper presents a method to estimate the detailed 3D body shape of a person even if heavy or loose clothing is worn. The approach is based on a space of human shapes, learned from a large database of registered body scans. Together with this database we use as input a 3D scan or model of the person wearing clothes and apply a fitting method, based on ICP (iterated closest point) registration and Laplacian mesh deformation. The statistical model of human body shapes enforces that the model stays within the space of human shapes. The method therefore allows us to compute the most likely shape and pose of the subject, even if it is heavily occluded or body parts are not visible. 
Several experiments demonstrate the applicability and accuracy of our approach to recover occluded or missing body parts from 3D laser scans.", "A circuit for controlling a display panel identifying malfunctions in an engine generator receives a plurality of electrical signals from the engine generator, each of which identifies a particular trouble. The electrical signal may be produced by closing a switch. It is caused to operate a latch that lights a light associated with the particular malfunction. Indications of other malfunctions are suppressed until the circuit is reset. A manual reset tests all lights and then leaves them off ready to respond. A power-up reset does not test lights but leaves all lights off ready to respond. The circuit is rendered especially appropriate for military use by hardening against radiation and against pulses of electromagnetic interference." ] }
1312.4967
2034643056
Estimating the body shape and posture of a dressed human subject in motion represented as a sequence of (possibly incomplete) 3D meshes is important for virtual change rooms and security. To solve this problem, statistical shape spaces encoding human body shape and posture variations are commonly used to constrain the search space for the shape estimate. In this work, we propose a novel method that uses a posture-invariant shape space to model body shape variation combined with a skeleton-based deformation to model posture variation. Our method can estimate the body shape and posture of both static scans and motion sequences of human body scans with clothing that fits relatively closely to the body. In case of motion sequences, our method takes advantage of motion cues to solve for a single body shape estimate along with a sequence of posture estimates. We apply our approach to both static scans and motion sequences and demonstrate that using our method, higher fitting accuracy is achieved than when using a variant of the popular SCAPE model [2,18] as statistical model.
In this work, we are interested in fitting a single body shape estimate and multiple body posture estimates to a given sequence of scans, which requires a shape space that models variations of body shape and posture independently. The variant of the SCAPE model proposed by @cite_9 is a commonly used state-of-the-art method that has been shown to lead to accurate body shape and posture estimates and that models shape and posture variations independently. We propose a new shape space that combines a posture-invariant statistical shape model with a skeleton-based deformation, and show that this model can fit more accurately to 3D input meshes than this popular variant of the SCAPE model.
{ "cite_N": [ "@cite_9" ], "mid": [ "2088230067" ], "abstract": [ "We present a system for quick and easy manipulation of the body shape and proportions of a human actor in arbitrary video footage. The approach is based on a morphable model of 3D human shape and pose that was learned from laser scans of real people. The algorithm commences by spatio-temporally fitting the pose and shape of this model to the actor in either single-view or multi-view video footage. Once the model has been fitted, semantically meaningful attributes of body shape, such as height, weight or waist girth, can be interactively modified by the user. The changed proportions of the virtual human model are then applied to the actor in all video frames by performing an image-based warping. By this means, we can now conveniently perform spatio-temporal reshaping of human actors in video footage which we show on a variety of video sequences." ] }
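A skeleton-based deformation of the kind combined with the posture-invariant shape space above is commonly realized as linear blend skinning, where each vertex is moved by a weighted combination of per-bone rigid transforms. This is a generic sketch of that standard technique, not the authors' exact formulation:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """vertices: (n, 3); weights: (n, n_bones), rows summing to 1;
    bone_transforms: (n_bones, 4, 4) homogeneous rigid transforms.
    Each vertex is deformed by the weighted sum of its bones' transforms."""
    n = len(vertices)
    homo = np.hstack([vertices, np.ones((n, 1))])           # (n, 4)
    # Per-vertex blended transform: (n, 4, 4)
    blended = np.einsum("nb,bij->nij", weights, bone_transforms)
    deformed = np.einsum("nij,nj->ni", blended, homo)
    return deformed[:, :3]

# Toy posture change: bone 0 stays fixed, bone 1 translates along x.
T = np.stack([np.eye(4), np.eye(4)])
T[1, 0, 3] = 2.0
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],   # vertex 0 follows bone 0 only
              [0.5, 0.5]])  # vertex 1 blends both bones
print(linear_blend_skinning(verts, w, T))  # [[0. 0. 0.] [2. 0. 0.]]
```

Because posture is expressed entirely through the bone transforms, the shape-space coefficients can be held fixed across a motion sequence, which is what enables a single body shape estimate with per-frame posture estimates.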
1312.5297
2953046629
While Twitter provides an unprecedented opportunity to learn about breaking news and current events as they happen, it often produces skepticism among users, as not only is some of the information inaccurate, but hoaxes are also sometimes spread. While avoiding the diffusion of hoaxes is a major concern during fast-paced events such as natural disasters, the study of how users trust and verify information from tweets in these contexts has received little attention so far. We survey users on credibility perceptions regarding witness pictures posted on Twitter related to Hurricane Sandy. By examining credibility perceptions on features suggested for information verification in the field of Epistemology, we evaluate their accuracy in determining whether pictures were real or fake compared to professional evaluations performed by experts. Our study unveils insights about tweet presentation, as well as features that users should look at when assessing the veracity of tweets in the context of fast-paced events. Some of our main findings include that, while author details not readily available on Twitter feeds should be emphasized in order to facilitate verification of tweets, showing multiple tweets corroborating a fact misleads users into trusting what is actually a hoax. We contrast some of the behavioral patterns found on tweets with literature in Psychology research.
Most of the research dealing with credibility on Twitter has focused on the development of automatic techniques to assess the credibility of tweets. @cite_3 trained a supervised classifier to categorize tweets as credible or non-credible by using a set of predefined features they grouped into four types: message, user, topic, and propagation. They found the classifier to be highly accurate when compared to credibility assessments provided by AMT workers. Similarly, others have presented research on automated classifiers or ranking systems using graph-based methods @cite_25 @cite_29 @cite_30 , using external sources such as Wikipedia @cite_8 , using content features @cite_18 , or comparing some of the previous methods @cite_21 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_8", "@cite_29", "@cite_21", "@cite_3", "@cite_25" ], "mid": [ "2172553180", "1983849556", "1973368057", "2084591134", "", "", "2398287226" ], "abstract": [ "Ranking tweets is a fundamental task to make it easier to distill the vast amounts of information shared by users. In this paper, we explore the novel idea of ranking tweets on a topic using heterogeneous networks. We construct heterogeneous networks by harnessing cross-genre linkages between tweets and semantically-related web documents from formal genres, and inferring implicit links between tweets and users. To rank tweets effectively by capturing the semantics and importance of different linkages, we introduce Tri-HITS, a model to iteratively propagate ranking scores across heterogeneous networks. We show that integrating both formal genre and inferred social networks with tweet networks produces a higher-quality ranking than the tweet networks alone.", "Twitter is a major forum for rapid dissemination of user-provided content in real time. As such, a large proportion of the information it contains is not particularly relevant to many users and in fact is perceived as unwanted 'noise' by many. There has been increased research interest in predicting whether tweets are relevant, newsworthy or credible, using a variety of models and methods. In this paper, we focus on an analysis that highlights the utility of the individual features in Twitter such as hash tags, retweets and mentions for predicting credibility. We first describe a context-based evaluation of the utility of a set of features for predicting manually provided credibility assessments on a corpus of microblog tweets. This is followed by an evaluation of the distribution presence of each feature across 8 diverse crawls of tweet data. Last, an analysis of feature distribution across dyadic pairs of tweets and retweet chains of various lengths is described. Our results show that the best indicators of credibility include URLs, mentions, retweets and tweet length and that features occur more prominently in data describing emergency and unrest situations.", "We propose methods for calculating credibility values of messages in Social Network Services (SNSs), such as LinkedIn and Facebook. Many users post messages on SNSs, however, not all of these messages are credible. Our method is based on two assumptions: an SNS message is credible (1) if the SNS message is similar to information from other resources and (2) if the information is confirmed as credible. For assumption (1), we developed a method to retrieve similar descriptions from Wikipedia articles. For assumption (2), we developed a method for assessing Wikipedia articles using the edit history. Using these two methods, we can calculate accurate credibility values for SNS messages. In an experiment, we confirmed that our method can calculate appropriate credibility values for SNS messages if Wikipedia has credible articles related to the SNS messages.", "We analyze the information credibility of news propagated through Twitter, a popular microblogging service. Previous research has shown that most of the messages posted on Twitter are truthful, but the service is also used to spread misinformation and false rumors, often unintentionally. On this paper we focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, we analyze microblog postings related to \"trending\" topics, and classify them as credible or not credible, based on features extracted from them. We use features from users' posting and re-posting (\"re-tweeting\") behavior, from the text of the posts, and from citations to external sources. We evaluate our methods using a significant number of human assessments about the credibility of items on a recent sample of Twitter postings. Our results show that there are measurable differences in the way messages propagate, that can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.", "", "", "Though Twitter acts as a realtime news source with people acting as sensors and sending event updates from all over the world, rumors spread via Twitter have been noted to cause considerable damage. Given a set of popular Twitter events along with related users and tweets, we study the problem of automatically assessing the credibility of such events. We propose a credibility analysis approach enhanced with event graph-based optimization to solve the problem. First we experiment by performing PageRank-like credibility propagation on a multi-typed network consisting of events, tweets, and users. Further, within each iteration, we enhance the basic trust analysis by updating event credibility scores using regularization on a new graph of events. Our experiments using events extracted from two tweet feed datasets, each with millions of tweets, show that our event graph optimization approach outperforms the basic credibility analysis approach. Also, our methods are significantly more accurate (∼86%) than the decision tree classifier approach (∼72%)." ] }
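The feature-based credibility classification described in this record (message, user, and propagation feature groups feeding a supervised model) can be sketched as a toy scoring function. Every feature name, weight, and threshold below is an illustrative assumption, not the published classifier, and the topic-feature group is omitted for brevity:

```python
import math

def extract_features(tweet):
    """Toy versions of three feature groups: message (URL presence, length),
    user (capped follower/following ratio), propagation (retweet count)."""
    return {
        "has_url": 1.0 if "http" in tweet["text"] else 0.0,
        "length": min(len(tweet["text"]) / 140.0, 1.0),
        "follower_ratio": min(tweet["followers"] / max(tweet["following"], 1), 10.0),
        "retweets": min(tweet["retweets"] / 100.0, 1.0),
    }

# Hand-picked weights standing in for a trained model (illustrative only).
WEIGHTS = {"has_url": 1.2, "length": 0.5, "follower_ratio": 0.8, "retweets": 1.0}
BIAS = -1.5

def credibility_score(tweet):
    """Logistic score in (0, 1): higher means the tweet 'looks' more credible."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in extract_features(tweet).items())
    return 1.0 / (1.0 + math.exp(-z))

def is_credible(tweet, threshold=0.5):
    return credibility_score(tweet) >= threshold
```

In a real pipeline the weights would be fit on labeled assessments (e.g. from AMT workers, as in the cited work) rather than set by hand.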
1312.5297
2953046629
While Twitter provides an unprecedented opportunity to learn about breaking news and current events as they happen, it often produces skepticism among users, as not only is some of the information inaccurate, but hoaxes are also sometimes spread. While avoiding the diffusion of hoaxes is a major concern during fast-paced events such as natural disasters, the study of how users trust and verify information from tweets in these contexts has received little attention so far. We survey users on credibility perceptions regarding witness pictures posted on Twitter related to Hurricane Sandy. By examining credibility perceptions on features suggested for information verification in the field of Epistemology, we evaluate their accuracy in determining whether pictures were real or fake compared to professional evaluations performed by experts. Our study unveils insights about tweet presentation, as well as features that users should look at when assessing the veracity of tweets in the context of fast-paced events. Some of our main findings include that, while author details not readily available on Twitter feeds should be emphasized in order to facilitate verification of tweets, showing multiple tweets corroborating a fact misleads users into trusting what is actually a hoax. We contrast some of the behavioral patterns found on tweets with literature in Psychology research.
When it comes to the study of users' credibility perceptions, the emergence of the Internet sparked an increase of interest in information credibility @cite_34 . Researchers have suggested Web-specific features like visual design or incoming links @cite_23 @cite_17 @cite_36 , and have given advice for assessing the credibility of online content to avoid hoaxes @cite_26 @cite_14 . Researchers have also studied credibility issues in blogs and microblogs. @cite_9 surveyed blog users, asking them to rate the credibility of blogs as compared to traditional media. They found that most users find blogs highly credible, and believe they provide more depth and more thoughtful analysis than traditional media. Regarding the credibility of tweets, @cite_5 examines the challenges that social media present for tweet verification by journalists. He views Twitter as a medium where information arrives as a real-time mix of news and other content with no established order, different from journalism's traditional individualistic and top-down ideology.
{ "cite_N": [ "@cite_26", "@cite_14", "@cite_36", "@cite_9", "@cite_23", "@cite_5", "@cite_34", "@cite_17" ], "mid": [ "1556256689", "151128577", "2083118562", "2064730034", "2113770130", "2034532433", "2137194788", "1971842139" ], "abstract": [ "", "", "As more of our communication, commerce, and personal data goes online, credibility becomes an increasingly important issue. How do we determine if our e-commerce sites, our healthcare sites, or our online communication partners are credible? This paper examines the research literature in the area of web credibility. This review starts by examining the cognitive foundations of credibility. Other sections of the paper examine not only the general credibility of web sites, but also online communication, such as e-mail, instant messaging, and online communities. Training and education, as well as future issues (such as CAPTCHAs and phishing), will be addressed. The implications for multiple populations (users, web developers, browser designers, and librarians) will be discussed.", "This study surveyed Weblog users online to investigate how credible they view blogs as compared to traditional media as well as other online sources. This study also explores the degree to which reliance on Weblogs as well as traditional and online media sources predicts credibility of Weblogs after controlling for demographic and political factors. Weblog users judged blogs as highly credible—more credible than traditional sources. They did, however, rate traditional sources as moderately credible. Weblog users rated blogs higher on depth of information than they did on fairness.", "Little of the work on online credibility assessment has considered how the information-seeking process figures into the final evaluation of content people encounter. Using unique data about how a diverse group of young adults looks for and evaluates Web content, our paper makes contributions to existing literature by highlighting factors beyond site features in how users assess credibility. We find that the process by which users arrive at a site is an important component of how they judge the final destination. In particular, search context, branding and routines, and a reliance on those in one’s networks play important roles in online information-seeking and evaluation. We also discuss that users differ considerably in their skills when it comes to judging online content credibility.", "Subjects rated how certain they were that each of 60 statements was true or false. The statements were sampled from areas of knowledge including politics, sports, and the arts, and were plausible but unlikely to be specifically known by most college students. Subjects gave ratings on three successive occasions at 2-week intervals. Embedded in the list were a critical set of statements that were either repeated across the sessions or were not repeated. For both true and false statements, there was a significant increase in the validity judgments for the repeated statements and no change in the validity judgments for the non-repeated statements. Frequency of occurrence is apparently a criterion used to establish the referential validity of plausible statements.", "People increasingly rely on Internet and web-based information despite evidence that it is potentially inaccurate and biased. Therefore, this study sought to assess people's perceptions of the credibility of various categories of Internet information compared to similar information provided by other media. The 1,041 respondents also were asked about whether they verified Internet information. Overall, respondents reported they considered Internet information to be as credible as that obtained from television, radio, and magazines, but not as credible as newspaper information. Credibility among the types of information sought, such as news and entertainment, varied across media channels. Respondents said they rarely verified web-based information, although this too varied by the type of information sought. Levels of experience and how respondents perceived the credibility of information were related to whether they verified information. This study explores the social relevance of the findings and discusses...", "In this study 2,684 people evaluated the credibility of two live Web sites on a similar topic (such as health sites). We gathered the comments people wrote about each site's credibility and analyzed the comments to find out what features of a Web site get noticed when people evaluate credibility. We found that the \"design look\" of the site was mentioned most frequently, being present in 46.1% of the comments. Next most common were comments about information structure and information focus. In this paper we share sample participant comments in the top 18 areas that people noticed when evaluating Web site credibility. We discuss reasons for the prominence of design look, point out how future studies can build on what we have learned in this new line of research, and outline six design implications for human-computer interaction professionals." ] }
1312.5297
2953046629
While Twitter provides an unprecedented opportunity to learn about breaking news and current events as they happen, it often produces skepticism among users, as not only is some of the information inaccurate, but hoaxes are also sometimes spread. While avoiding the diffusion of hoaxes is a major concern during fast-paced events such as natural disasters, the study of how users trust and verify information from tweets in these contexts has received little attention so far. We survey users on credibility perceptions regarding witness pictures posted on Twitter related to Hurricane Sandy. By examining credibility perceptions on features suggested for information verification in the field of Epistemology, we evaluate their accuracy in determining whether pictures were real or fake compared to professional evaluations performed by experts. Our study unveils insights about tweet presentation, as well as features that users should look at when assessing the veracity of tweets in the context of fast-paced events. Some of our main findings include that, while author details not readily available on Twitter feeds should be emphasized in order to facilitate verification of tweets, showing multiple tweets corroborating a fact misleads users into trusting what is actually a hoax. We contrast some of the behavioral patterns found on tweets with literature in Psychology research.
The first user study on credibility perceptions of tweets is that by @cite_20 . Users were shown tweets with alterations from one another, such as a different profile picture or different tweet content, and the authors studied how the credibility ratings perceived by users varied according to these alterations. They found that the basic information shown on major tweet interfaces is not sufficient for assessing the credibility of a tweet, and that showing more details about the author of a tweet would help to that end. In a related study, @cite_28 conducted a survey to compare the credibility perceptions of U.S. users on Twitter and Chinese users on Weibo. They found cultural differences between the two kinds of users, such as Chinese users being much more context-sensitive, and U.S. users perceiving microblogs as generally less credible.
{ "cite_N": [ "@cite_28", "@cite_20" ], "mid": [ "2186629417", "2167024389" ], "abstract": [ "Microblogs have become an increasingly important source of information, both in the U.S. (Twitter) and in China (Weibo). However, the brevity of microblog updates, combined with increasing access of microblog content through search rather than through direct network connections, makes it challenging to assess the credibility of news relayed in this manner [34]. This paper reports on experimental and survey data that compare the impact of several features of microblog updates (author’s gender, name style, profile image, location, and degree of network overlap with the reader) on credibility perceptions among U.S. and Chinese audiences. We reveal the complex mechanism of credibility perceptions, identify several key differences in how users from each country critically consume microblog content, and discuss how to incorporate these findings into the design of improved user interfaces for accessing microblogs in different cultural settings.", "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility." ] }
1312.5297
2953046629
While Twitter provides an unprecedented opportunity to learn about breaking news and current events as they happen, it often produces skepticism among users, as not only is some of the information inaccurate, but hoaxes are also sometimes spread. While avoiding the diffusion of hoaxes is a major concern during fast-paced events such as natural disasters, the study of how users trust and verify information from tweets in these contexts has received little attention so far. We survey users on credibility perceptions regarding witness pictures posted on Twitter related to Hurricane Sandy. By examining credibility perceptions on features suggested for information verification in the field of Epistemology, we evaluate their accuracy in determining whether pictures were real or fake compared to professional evaluations performed by experts. Our study unveils insights about tweet presentation, as well as features that users should look at when assessing the veracity of tweets in the context of fast-paced events. Some of our main findings include that, while author details not readily available on Twitter feeds should be emphasized in order to facilitate verification of tweets, showing multiple tweets corroborating a fact misleads users into trusting what is actually a hoax. We contrast some of the behavioral patterns found on tweets with literature in Psychology research.
Taking up a goal complementary to that of @cite_20 , who explored the role of different features in credibility perceptions of tweets, our work studies how perceptions of tweets lead users to make a correct decision in a verification process that aims to identify truthful tweets and discard hoaxes.
{ "cite_N": [ "@cite_20" ], "mid": [ "2167024389" ], "abstract": [ "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility." ] }
1312.4522
1658678621
Given a finite, connected graph @math , the lamplighter chain on @math is the lazy random walk @math on the associated lamplighter graph @math . The mixing time of the lamplighter chain on the torus @math is known to have a cutoff at a time asymptotic to the cover time of @math if @math , and to half the cover time if @math . We show that the mixing time of the lamplighter chain on @math has a cutoff at @math times the cover time of @math as @math , where @math is an explicit weakly decreasing map from @math onto @math . In particular, as @math varies, the threshold continuously interpolates between the known thresholds for @math and @math . Perhaps surprisingly, we find a phase transition (non-smoothness of @math ) at the point @math , where high dimensional behavior ( @math for all @math ) commences. Here @math is the effective resistance from @math to @math in @math .
The mixing time of @math was first studied by Häggström and Jonasson in @cite_12 in the case of the complete graph @math and the one-dimensional cycle @math . Their work implies a total variation cutoff with threshold @math in the former case and that there is no cutoff in the latter. The connection between @math and @math is explored further in @cite_4 (see also the account given in [Chapter 19] LPW), in addition to developing the relationship between @math and the relaxation time (i.e. inverse spectral gap) of @math , and the relationship between exponential moments of the size of the uncovered set @math of @math at time @math and the uniform, i.e. @math -mixing time of @math . In particular, it is shown in [Theorem 1.3] PR that if @math is a sequence of graphs with @math and @math then @math . Related bounds on the order of magnitude of the uniform mixing time and the relaxation time with generalized lamps were obtained respectively in @cite_8 and @cite_2 .
{ "cite_N": [ "@cite_2", "@cite_4", "@cite_12", "@cite_8" ], "mid": [ "2055134178", "2021912525", "2046434141", "2117320315" ], "abstract": [ "Suppose that G and H are finite, connected graphs, G regular, X is a lazy random walk on G and Z is a reversible ergodic Markov chain on H. The generalized lamplighter chain X* associated with X and Z is the random walk on the wreath product H ≀ G, the graph whose vertices consist of pairs (f,x) where f = (f_v)_{v ∈ V(G)} is a labeling of the vertices of G by elements of H and x is a vertex in G. In each step, X* moves from a configuration (f,x) by updating x to y using the transition rule of X and then independently updating both f_x and f_y according to the transition probabilities on H; f_z for z different from x,y remains unchanged. We estimate the mixing time of X* in terms of the parameters of H and G. Further, we show that the relaxation time of X* is of the same order as the maximal expected hitting time of G plus |G| times the relaxation time of the chain on H.", "Given a finite graph @math , a vertex of the lamplighter graph @math consists of a zero-one labeling of the vertices of @math , and a marked vertex of @math . For transitive @math we show that, up to constants, the relaxation time for simple random walk in @math is the maximal hitting time for simple random walk in @math , while the mixing time in total variation on @math is the expected cover time on @math . The mixing time in the uniform metric on @math admits a sharp threshold, and equals @math multiplied by the relaxation time on @math , up to a factor of @math . For @math , the lamplighter group over the discrete two dimensional torus, the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . For @math when @math , the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . In particular, these three quantities are of different orders of magnitude.", "Consider a graph, G, for which the vertices can have two modes, 0 or 1. Suppose that a particle moves around on G according to a discrete time Markov chain with the following rules. With (strictly positive) probabilities p_m, p_c and p_r it moves to a randomly chosen neighbour, changes the mode of the vertex it is at, or just stands still, respectively. We call such a random process a (p_m, p_c, p_r)-lamplighter process on G. Assume that the process starts with the particle in a fixed position and with all vertices having mode 0. The convergence rate to stationarity in terms of the total variation norm is studied for the special cases with G = K_N, the complete graph with N vertices, and with G = Z_N, the cycle with N vertices. In the former case we prove that as N → ∞, ((2p_c + p_m)/(4p_c p_m)) N log N is a threshold for the convergence rate. In the latter case we show that the convergence rate is asymptotically determined by the cover time C_N, in that the total variation norm after aN^2 steps is given by P(C_N > aN^2). The limit of this probability can in turn be calculated by considering a Brownian motion with two absorbing barriers. In particular, this means that there is no threshold for this case.", "Suppose that G is a finite, connected graph and X is a lazy random walk on G. The lamplighter chain X^⋄ associated with X is the random walk on the wreath product G^⋄ = Z_2 ≀ G, the graph whose vertices consist of pairs (f,x) where f is a labeling of the vertices of G by elements of Z_2 = {0,1} and x is a vertex in G. There is an edge between (f,x) and (g,y) in G^⋄ if and only if x is adjacent to y in G and f_z = g_z for all z ≠ x,y. In each step, X^⋄ moves from a configuration (f,x) by updating x to y using the transition rule of X and then sampling both f_x and f_y according to the uniform distribution on Z_2; f_z for z ≠ x,y remains unchanged. We give matching upper and lower bounds on the uniform mixing time of X^⋄ provided G satisfies mild hypotheses. In particular, when G is the hypercube Z_2^d, we show that the uniform mixing time of X^⋄ is Θ(d 2^d). More generally, we show that when G is a torus Z_n^d for d ≥ 3, the uniform mixing time of X^⋄ is Θ(d n^d) uniformly in n and d. A critical ingredient for our proof is a concentration estimate for the local time of the random walk in a subset of vertices." ] }
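Several of the abstracts above tie the lamplighter chain's mixing behavior to the cover time of the base walk. A small Monte-Carlo sketch (a toy setup on the cycle Z_n, with the "refresh the lamp at both endpoints of the traversed edge" rule paraphrased from the abstracts; not any of the cited constructions) illustrates both objects:

```python
import random

def lamplighter_step(lamps, x, n, rng):
    """One step of a Z_2 wr Z_n lamplighter chain: randomize the lamp at the
    current vertex, move the walker one step on the cycle, then randomize the
    lamp at the new vertex."""
    lamps[x] = rng.randint(0, 1)
    x = (x + rng.choice((-1, 1))) % n
    lamps[x] = rng.randint(0, 1)
    return lamps, x

def cover_time(n, rng):
    """Number of steps for simple random walk on Z_n to visit every vertex."""
    x, seen, t = 0, {0}, 0
    while len(seen) < n:
        x = (x + rng.choice((-1, 1))) % n
        seen.add(x)
        t += 1
    return t

def mean_cover_time(n, trials, rng):
    return sum(cover_time(n, rng) for _ in range(trials)) / trials
```

The empirical mean cover time of the n-cycle should concentrate near its classical expectation n(n-1)/2, the quantity that (per the abstracts) governs the total variation mixing threshold of the lamplighter chain over the cycle.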
1312.4522
1658678621
Given a finite, connected graph @math , the lamplighter chain on @math is the lazy random walk @math on the associated lamplighter graph @math . The mixing time of the lamplighter chain on the torus @math is known to have a cutoff at a time asymptotic to the cover time of @math if @math , and to half the cover time if @math . We show that the mixing time of the lamplighter chain on @math has a cutoff at @math times the cover time of @math as @math , where @math is an explicit weakly decreasing map from @math onto @math . In particular, as @math varies, the threshold continuously interpolates between the known thresholds for @math and @math . Perhaps surprisingly, we find a phase transition (non-smoothness of @math ) at the point @math , where high dimensional behavior ( @math for all @math ) commences. Here @math is the effective resistance from @math to @math in @math .
By combining the results of @cite_7 and @cite_5 , it is observed in @cite_4 that @math has a threshold at @math . Thus, gives the best bounds, since @math attains the lower bound and @math attains the upper bound. In @cite_3 , it is shown that @math when @math and more generally that @math whenever @math is a sequence of graphs with @math satisfying certain uniform local transience assumptions. This prompted the question [Section 7] Miller-Peres of whether for each @math there exists a (natural) family of graphs @math such that @math as @math . In this work we give an affirmative answer to this question by analyzing the lamplighter chain on a thin @math D torus.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_4", "@cite_7" ], "mid": [ "2028343069", "2033953301", "2021912525", "2031558625" ], "abstract": [ "Under a natural hypothesis, the cover time for a finite Markov chain can be approximated by its expectation, as the size of the state space tends to infinity. This result is deduced from an abstract result concerning covering an unstructured set by i.i.d. arbitrarily distributed random subsets.", "We show that the measure on markings of Z_n^d, d ≥ 3, with elements of {0,1} given by i.i.d. fair coin flips on the range @math of a random walk X run until time T and 0 otherwise becomes indistinguishable from the uniform measure on such markings at the threshold T = ½T_cov(Z_n^d). As a consequence of our methods, we show that the total variation mixing time of the random walk on the lamplighter graph Z_2 ≀ Z_n^d, d ≥ 3, has a cutoff with threshold ½T_cov(Z_n^d). We give a general criterion under which both of these results hold; other examples for which this applies include bounded degree expander families, the intersection of an infinite supercritical percolation cluster with an increasing family of balls, the hypercube and the Cayley graph of the symmetric group generated by transpositions. The proof also yields precise asymptotics for the decay of correlation in the uncovered set.", "Given a finite graph @math , a vertex of the lamplighter graph @math consists of a zero-one labeling of the vertices of @math , and a marked vertex of @math . For transitive @math we show that, up to constants, the relaxation time for simple random walk in @math is the maximal hitting time for simple random walk in @math , while the mixing time in total variation on @math is the expected cover time on @math . The mixing time in the uniform metric on @math admits a sharp threshold, and equals @math multiplied by the relaxation time on @math , up to a factor of @math . For @math , the lamplighter group over the discrete two dimensional torus, the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . For @math when @math , the relaxation time is of order @math , the total variation mixing time is of order @math , and the uniform mixing time is of order @math . In particular, these three quantities are of different orders of magnitude.", "Let τ_n(x) denote the time of first visit of a point x on the lattice torus Z_n^2 = Z^2/nZ^2 by the simple random walk. The size of the set of α, n-late points L_n(α) = {x ∈ Z_n^2 : τ_n(x) ≥ α (4/π)(n log n)^2} is approximately n^{2(1-α)}, for α ∈ (0,1) [L_n(α) is empty if α > 1 and n is large enough]. These sets have interesting clustering and fractal properties: we show that for β ∈ (0,1), a disc of radius n^β centered at a nonrandom x typically contains about n^{2β(1-α/β^2)} points from L_n(α) (and is empty if β < √α), whereas choosing the center x of the disc uniformly in L_n(α) boosts the typical number of α, n-late points in it to n^{2β(1-α)}. We also estimate the typical number of pairs of α, n-late points within distance n^β of each other; this typical number can be significantly smaller than the expected number of such pairs, calculated by Brummelhuis and Hilhorst [Phys. A 176 (1991) 387-408]. On the other hand, our results show that the number of ordered pairs of late points within distance n^β of each other is larger than what one might predict by multiplying the total number of late points by the number of late points in a disc of radius n^β centered at a typical late point." ] }
1312.4477
1996909986
Recent research on pattern discovery has progressed from mining frequent patterns and sequences to mining structured patterns, such as trees and graphs. Graphs, as a general data structure, can model complex relations among data, with wide applications in web exploration and social networks. However, mining large graph patterns is challenging due to the existence of a large number of subgraphs. In this paper, we aim to mine only frequent complete graph patterns. A graph g in a database is complete if every pair of distinct vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining algorithm developed to explore interesting pruning techniques to extract maximal complete graphs from the large spatial dataset in the Sloan Digital Sky Survey (SDSS) data. Using a divide-and-conquer strategy, GCG shows high efficiency, especially in the presence of a large number of patterns. In this paper, we describe how GCG can mine not only simple co-location spatial patterns but also complex ones. To the best of our knowledge, this is the first algorithm to exploit the extraction of maximal complete graphs in the process of mining complex co-location patterns in large spatial datasets.
@cite_9 defined the co-location pattern as the presence of a spatial feature in the neighborhood of instances of other spatial features. They developed an algorithm for mining valid rules in spatial databases using an Apriori-based approach. Unlike our approach, their algorithm does not separate the co-location mining and interesting-pattern mining steps. They also did not consider complex relationships or patterns.
{ "cite_N": [ "@cite_9" ], "mid": [ "2012886963" ], "abstract": [ "Mining co-location patterns from spatial databases may reveal types of spatial features likely located as neighbors in space. In this paper, we address the problem of mining confident co-location rules without a support threshold. First, we propose a novel measure called the maximal participation index. We show that every confident co-location rule corresponds to a co-location pattern with a high maximal participation index value. Second, we show that the maximal participation index is non-monotonic, and thus the conventional Apriori-like pruning does not work directly. We identify an interesting weak monotonic property for the index and develop efficient algorithms to mine confident co-location rules. An extensive performance study shows that our method is both effective and efficient for large spatial databases." ] }
1312.4477
1996909986
Recent research on pattern discovery has progressed from mining frequent patterns and sequences to mining structured patterns, such as trees and graphs. Graphs, as a general data structure, can model complex relations among data, with wide applications in web exploration and social networks. However, mining large graph patterns is challenging due to the existence of a large number of subgraphs. In this paper, we aim to mine only frequent complete graph patterns. A graph g in a database is complete if every pair of distinct vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining algorithm developed to explore interesting pruning techniques to extract maximal complete graphs from the large spatial dataset in the Sloan Digital Sky Survey (SDSS) data. Using a divide-and-conquer strategy, GCG shows high efficiency, especially in the presence of a large number of patterns. In this paper, we describe how GCG can mine not only simple co-location spatial patterns but also complex ones. To the best of our knowledge, this is the first algorithm to exploit the extraction of maximal complete graphs in the process of mining complex co-location patterns in large spatial datasets.
@cite_0 used cliques as a co-location pattern (subgraphs), whereas in our research we use complete graphs instead. Similar to our approach, they separated the clique mining from the pattern mining stage. However, they did not use maximal complete graphs. They treated each clique as a transaction and used an Apriori-based technique for mining association rules. Since they used cliques (rather than maximal complete graphs) as their transactions, the counting of pattern instances is very different. They considered complex relationships within the pattern mining stage. However, their definition of negative patterns is very different -- they used infrequent types, while we base our definition on the concept of absence in . They also used a different measure, namely, maxPI.
{ "cite_N": [ "@cite_0" ], "mid": [ "2123656392" ], "abstract": [ "We describe the need for mining complex relationships in spatial data. Complex relationships are defined as those involving two or more of: multifeature colocation, self-colocation, one-to-many relationships, self-exclusion and multifeature exclusion. We demonstrate that even in the mining of simple relationships, knowledge of complex relationships is necessary to accurately calculate the significance of results. We implement a representation of spatial data such that it contains known 'weak-monotonic' properties, which are exploited for the efficient mining of complex relationships, and discuss the strengths and limitations of this representation." ] }
1312.4477
1996909986
Recent research on pattern discovery has progressed from mining frequent patterns and sequences to mining structured patterns, such as trees and graphs. Graphs, as a general data structure, can model complex relations among data, with wide applications in web exploration and social networks. However, mining large graph patterns is challenging due to the existence of a large number of subgraphs. In this paper, we aim to mine only frequent complete graph patterns. A graph g in a database is complete if every pair of distinct vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining algorithm developed to explore interesting pruning techniques to extract maximal complete graphs from the large spatial dataset in the Sloan Digital Sky Survey (SDSS) data. Using a divide-and-conquer strategy, GCG shows high efficiency, especially in the presence of a large number of patterns. In this paper, we describe how GCG can mine not only simple co-location spatial patterns but also complex ones. To the best of our knowledge, this is the first algorithm to exploit the extraction of maximal complete graphs in the process of mining complex co-location patterns in large spatial datasets.
@cite_15 enhanced the algorithm proposed in @cite_9 and used it to mine special types of co-location relationships in addition to cliques, namely, the , and patterns. This means they did not use maximal complete graphs.
{ "cite_N": [ "@cite_9", "@cite_15" ], "mid": [ "2012886963", "2097604114" ], "abstract": [ "Mining co-location patterns from spatial databases may reveal types of spatial features likely located as neighbors in space. In this paper, we address the problem of mining confident co-location rules without a support threshold. First, we propose a novel measure called the maximal participation index. We show that every confident co-location rule corresponds to a co-location pattern with a high maximal participation index value. Second, we show that the maximal participation index is non-monotonic, and thus the conventional Apriori-like pruning does not work directly. We identify an interesting weak monotonic property for the index and develop efficient algorithms to mine confident co-location rules. An extensive performance study shows that our method is both effective and efficient for large spatial databases.", "Spatial collocation patterns associate the co-existence of non-spatial features in a spatial neighborhood. An example of such a pattern can associate contaminated water reservoirs with certain diseases in their spatial neighborhood. Previous work on discovering collocation patterns converts neighborhoods of feature instances to itemsets and applies mining techniques for transactional data to discover the patterns. We propose a method that combines the discovery of spatial neighborhoods with the mining process. Our technique is an extension of a spatial join algorithm that operates on multiple inputs and counts long pattern instances. As demonstrated by experimentation, it yields significant performance improvements compared to previous approaches." ] }
1312.4477
1996909986
Recent research on pattern discovery has progressed from mining frequent patterns and sequences to mining structured patterns, such as trees and graphs. Graphs, as a general data structure, can model complex relations among data, with wide applications in web exploration and social networks. However, mining large graph patterns is challenging due to the existence of a large number of subgraphs. In this paper, we aim to mine only frequent complete graph patterns. A graph g in a database is complete if every pair of distinct vertices is connected by a unique edge. Grid Complete Graph (GCG) is a mining algorithm developed to explore interesting pruning techniques to extract maximal complete graphs from the large spatial dataset in the Sloan Digital Sky Survey (SDSS) data. Using a divide-and-conquer strategy, GCG shows high efficiency, especially in the presence of a large number of patterns. In this paper, we describe how GCG can mine not only simple co-location spatial patterns but also complex ones. To the best of our knowledge, this is the first algorithm to exploit the extraction of maximal complete graphs in the process of mining complex co-location patterns in large spatial datasets.
To the best of our knowledge, all previous work has used Apriori-type algorithms for mining interesting co-location patterns. In contrast, we embedded GLIMIT @cite_14 as the underlying pattern mining algorithm, as already discussed in . To the best of our knowledge, no previous work has used the concept of .
{ "cite_N": [ "@cite_14" ], "mid": [ "2108545961" ], "abstract": [ "In our geometric view, an itemset is a vector (itemvector) in the space of transactions. Linear and potentially non-linear transformations can be applied to the itemvectors before mining patterns. Aggregation functions and interestingness measures can be applied to the transformed vectors and pushed inside the mining process. We show that interesting itemset mining can be carried out by instantiating four abstract functions: a transformation (g), an algebraic aggregation operator (o) and measures (f and F). For frequent itemset mining (FIM), g and F are identity transformations, o is intersection and f is the cardinality. Based on this geometric view we present a novel algorithm that uses space linear in the number of 1-itemsets to mine all interesting itemsets in a single pass over the data, with no candidate generation. It scales (roughly) linearly in running time with the number of interesting itemsets. FIM experiments show that it outperforms FP-growth on realistic datasets above a small support threshold (0.29 and 1.2 in our experiments)." ] }
1312.4509
1841890616
The growing need for continuous processing capabilities has led to the development of multicore systems with a complex cache hierarchy. Such multicore systems are generally designed to improve performance in the average case, while hard real-time systems must consider worst-case scenarios. An open challenge is therefore to efficiently schedule hard real-time tasks on a multicore architecture. In this work, we propose a mathematical formulation for computing a static schedule that minimizes L1 data cache misses between hard real-time tasks on a multicore architecture using communication affinities.
@cite_1 focuses on the memory-to- @math traffic in the cache hierarchy of soft real-time systems. They propose a two-step method to discourage the co-scheduling of the tasks generating such traffic. First, the tasks that may induce significant memory-to- @math traffic are gathered into groups. Then, at runtime, a scheduling policy that reduces concurrency within groups is used. @cite_2 also proposes several global multi-core scheduling strategies for soft real-time systems to minimize @math cache thrashing. Co-scheduling of the tasks of the same group is used to optimize the efficient use of the shared @math cache. Task promotion is another example of a studied scheduling policy. When considering hard real-time systems, we are only aware of @cite_6 . Cache-partitioning techniques are used to avoid interferences, at the @math cache level, between the tasks that are running simultaneously. In addition to the regular temporal constraints used within a schedulability test, cache constraints due to cache-partitioning are added and steer the computation of the schedule. They propose a linear programming formulation to solve this problem and an approximation of this formulation for larger task sets.
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_2" ], "mid": [ "1943109230", "2141973605", "2122154156" ], "abstract": [ "Multicore architectures, which have multiple processing units on a single chip, are widely viewed as a way to achieve higher processor performance, given that thermal and power problems impose limits on the performance of single-core designs. Accordingly, several chip manufacturers have already released, or will soon release, chips with dual cores, and it is predicted that chips with up to 32 cores will be available within a decade. To effectively use the available processing resources on multicore platforms, software designs should avoid co-executing applications or threads that can worsen the performance of shared caches, if not thrash them. While cache-aware scheduling techniques for such platforms have been proposed for throughput-oriented applications, to the best of our knowledge, no such work has targeted real-time applications. In this paper, we propose and evaluate a cache-aware Pfair-based scheduling scheme for real-time tasks on multicore platforms", "The major obstacle to use multicores for real-time applications is that we may not predict and provide any guarantee on real-time properties of embedded software on such platforms; the way of handling the on-chip shared resources such as L2 cache may have a significant impact on the timing predictability. In this paper, we propose to use cache space isolation techniques to avoid cache contention for hard real-time tasks running on multicores with shared caches. We present a scheduling strategy for real-time tasks with both timing and cache space constraints, which allows each task to use a fixed number of cache partitions, and makes sure that at any time a cache partition is occupied by at most one running task. In this way, the cache spaces of tasks are isolated at run-time. As technical contributions, we have developed a sufficient schedulability test for non-preemptive fixed-priority scheduling for multicores with shared L2 cache, encoded as a linear programming problem. To improve the scalability of the test, we then present our second schedulability test of quadratic complexity, which is an over-approximation of the first test. To evaluate the performance and scalability of our techniques, we use randomly generated task sets. Our experiments show that the first test which employs an LP solver can easily handle task sets with thousands of tasks in minutes using a desktop computer. It is also shown that the second test is comparable with the first one in terms of precision, but scales much better due to its low complexity, and is therefore a good candidate for efficient schedulability tests in the design loop for embedded systems or as an on-line test for admission control.", "Multicore architectures, which have multiple processing units on a single chip, have been adopted by most chip manufacturers. Most such chips contain on-chip caches that are shared by some or all of the cores on the chip. To effectively use the available processing resources on such platforms,scheduling methods must be aware of these caches. In this paper, we explore various heuristics that attempt to improve cache performance when scheduling real-time workloads. Such heuristics are applicable when multiple multithreaded applications exist with large working sets. In addition, we present a case study that shows how our best-performing heuristics can improve the end-user performance of video encoding applications." ] }
1312.4182
1963163055
How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of "robust" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for Interactive Communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to @math . When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to @math . Hence, adaptivity circumvents an impossibility result of @math on the fraction of tolerable noise (Braverman and Rao, 2014).
Over the last few years, there has been great interest in interactive protocols, considering various properties such as their efficiency @cite_13 @cite_7 (stochastic noise), @cite_4 @cite_20 @cite_24 @cite_33 (adversarial noise), their noise resilience under different assumptions and models @cite_15 @cite_18 @cite_30 @cite_6 , their information rate @cite_1 @cite_27 @cite_25 @cite_12 , and other properties, such as privacy @cite_5 @cite_31 or list-decoding @cite_22 @cite_24 @cite_32 . We stress that all the works prior to this work (and to the independent work @cite_22 @cite_24 ) assume the robust, non-adaptive setting.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_31", "@cite_4", "@cite_33", "@cite_7", "@cite_22", "@cite_1", "@cite_32", "@cite_6", "@cite_24", "@cite_27", "@cite_5", "@cite_12", "@cite_15", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "", "", "2043868537", "", "", "", "2952019917", "", "", "", "1978882850", "", "", "", "2216201412", "", "2026591308", "" ], "abstract": [ "", "", "Consider two parties Alice and Bob, who hold private inputs x and y, and wish to compute a function f(x, y) privately in the information theoretic sense; that is, each party should learn nothing beyond f(x, y). However, the communication channel available to them is noisy. This means that the channel can introduce errors in the transmission between the two parties. Moreover, the channel is adversarial in the sense that it knows the protocol that Alice and Bob are running, and maliciously introduces errors to disrupt the communication, subject to some bound on the total number of errors. A fundamental question in this setting is to design a protocol that remains private in the presence of a large number of errors. If Alice and Bob are only interested in computing f(x, y) correctly, and not privately, then quite robust protocols are known that can tolerate a constant fraction of errors. However, none of these solutions is applicable in the setting of privacy, as they inherently leak information about the parties' inputs. This leads to the question whether we can simultaneously achieve privacy and error-resilience against a constant fraction of errors. We show that privacy and error-resilience are contradictory goals. In particular, we show that for every constant c > 0, there exists a function f which is privately computable in the error-less setting, but for which no private and correct protocol is resilient against a c-fraction of errors. The same impossibility holds also for sub-constant noise rate, e.g., when c is exponentially small (as a function of the input size).", "", "", "", "We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error-rates in a number of different settings. Most significantly, we explore adaptive interactive communication where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): Our adaptive coding scheme tolerates any error rate below 2/7 and we show that tolerating a higher error rate is impossible. We also show that in the setting of [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computation are polynomially bounded. Most prior work considered coding schemes with linear amount of communication, while allowing unbounded computations. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.", "", "", "", "We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any @math -round interactive protocol using @math rounds over an adversarial channel that corrupts up to @math transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate @math , communication complexity @math , and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: Our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate @math with a linear @math communication complexity. This improves over prior results which each performed well in two of these measures. We also give results for other settings of interest, namely, the first computationally and communication efficient schemes that tolerate @math adaptively, @math if only one party is required to decode, and @math if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near linear computational and communication complexity. These results are obtained via two techniques: We give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near linear.", "", "", "", "Error correction and message authentication are well studied in the literature, and various efficient solutions have been suggested and analyzed. This is however not the case for data streams in which the message is very long, possibly infinite, and not known in advance to the sender. Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy.", "", "We provide the first capacity approaching coding schemes that robustly simulate any interactive protocol over an adversarial channel that corrupts any @math fraction of the transmitted symbols. Our coding schemes achieve a communication rate of @math over any adversarial channel. This can be improved to @math for random, oblivious, and computationally bounded channels, or if parties have shared randomness unknown to the channel. Surprisingly, these rates exceed the @math interactive channel capacity bound which [Kol and Raz; STOC'13] recently proved for random errors. We conjecture @math and @math to be the optimal rates for their respective settings and therefore to capture the interactive channel capacity for random and adversarial errors. In addition to being very communication efficient, our randomized coding schemes have multiple other advantages. They are computationally efficient, extremely natural, and significantly simpler than prior (non-capacity approaching) schemes. In particular, our protocols do not employ any coding but allow the original protocol to be performed as-is, interspersed only by short exchanges of hash values. When hash values do not match, the parties backtrack. Our approach is, as we feel, by far the simplest and most natural explanation for why and how robust interactive communication in a noisy environment is possible.", "" ] }
1312.4182
1963163055
How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of "robust" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for Interactive Communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to @math . When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to @math . Hence, adaptivity circumvents an impossibility result of @math on the fraction of tolerable noise (Braverman and Rao, 2014).
The only other work that studies adaptive protocols is the abovementioned work of Ghaffari, Haeupler, and Sudan @cite_22 , which makes different modeling decisions than ours. They show that in their adaptive model, @math is a tight bound on the fraction of permissible noise. The length of the protocol obtained in @cite_22 is quadratic in the length of the noiseless protocol; thus, its rate is vanishing. However, Ghaffari and Haeupler @cite_24 later improved the length to be linear while still tolerating the optimal @math noise of that model. Allowing the parties to pre-share randomness increases the admissible noise to @math . We stress again that the setting of @cite_22 and ours are incomparable. Indeed, the tight @math bound of @cite_22 does not hold in our model, and we can resist relative noise rates of up to @math or @math in the @math and @math models, respectively. Similarly, while @math is the bound on noise when parties are allowed to share randomness in @cite_22 , in our model, the relative noise resilience for this setting is @math .
{ "cite_N": [ "@cite_24", "@cite_22" ], "mid": [ "1978882850", "2952019917" ], "abstract": [ "We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any @math -round interactive protocol using @math rounds over an adversarial channel that corrupts up to @math transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate @math , communication complexity @math , and computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: Our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate @math with a linear @math communication complexity. This improves over prior results which each performed well in two of these measures. We also give results for other settings of interest, namely, the first computationally and communication efficient schemes that tolerate @math adaptively, @math if only one party is required to decode, and @math if list decoding is allowed. These are the optimal tolerable error rates for the respective settings. These coding schemes also have near linear computational and communication complexity. These results are obtained via two techniques: We give a general black-box reduction which reduces unique decoding, in various settings, to list decoding. We also show how to boost the computational and communication efficiency of any list decoder to become near linear.", "We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error-rates in a number of different settings. Most significantly, we explore adaptive interactive communication where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): Our adaptive coding scheme tolerates any error rate below 2/7 and we show that tolerating a higher error rate is impossible. We also show that in the setting of [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computation are polynomially bounded. Most prior work considered coding schemes with linear amount of communication, while allowing unbounded computations. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications." ] }
1312.4664
2950043328
This paper addresses the problem of filtering with a state-space model. Standard approaches for filtering assume that a probabilistic model for observations (i.e. the observation model) is given explicitly or at least parametrically. We consider a setting where this assumption is not satisfied; we assume that the knowledge of the observation model is only provided by examples of state-observation pairs. This setting is important and appears when state variables are defined as quantities that are very different from the observations. We propose Kernel Monte Carlo Filter, a novel filtering method that is focused on this setting. Our approach is based on the framework of kernel mean embeddings, which enables nonparametric posterior inference using the state-observation examples. The proposed method represents state distributions as weighted samples, propagates these samples by sampling, estimates the state posteriors by Kernel Bayes' Rule, and resamples by Kernel Herding. In particular, the sampling and resampling procedures are novel in being expressed using kernel mean embeddings, so we theoretically analyze their behaviors. We reveal the following properties, which are similar to those of corresponding procedures in particle methods: (1) the performance of sampling can degrade if the effective sample size of a weighted sample is small; (2) resampling improves the sampling performance by increasing the effective sample size. We first demonstrate these theoretical findings by synthetic experiments. Then we show the effectiveness of the proposed filter by artificial and real data experiments, which include vision-based mobile robot localization.
As far as we know, there exist a few methods that can be applied to this setting directly. These methods learn the observation model nonparametrically from state-observation examples, and then use it to run a particle filter with a transition model. @cite_21 proposed to apply conditional density estimation based on the @math -nearest neighbors approach for learning the observation model. A problem here is that conditional density estimation suffers from the curse of dimensionality if observations are high-dimensional. @cite_21 avoided this problem by estimating the conditional density function of the state given the observation, and used it as an alternative for the observation model. This heuristic may introduce bias into the estimation, however. @cite_18 proposed to use Gaussian Process regression for learning the observation model. This method will perform well if the Gaussian noise assumption is satisfied, but it cannot be applied to structured observations.
{ "cite_N": [ "@cite_18", "@cite_21" ], "mid": [ "1790231888", "2112734135" ], "abstract": [ "Estimating the location of a mobile device or a robot from wireless signal strength has become an area of highly active research. The key problem in this context stems from the complexity of how signals propagate through space, especially in the presence of obstacles such as buildings, walls or people. In this paper we show how Gaussian processes can be used to generate a likelihood model for signal strength measurements. We also show how parameters of the model, such as signal noise and spatial correlation between measurements, can be learned from data via hyperparameter estimation. Experiments using WiFi indoor data and GSM cellphone connectivity demonstrate the superior performance of our approach.", "To navigate reliably in indoor environments, a mobile robot must know where it is. This includes both the ability of globally localizing the robot from scratch, as well as tracking the robot's position once its location is known. Vision has long been advertised as providing a solution to these problems, but we still lack efficient solutions in unmodified environments. Many existing approaches require modification of the environment to function properly, and those that work within unmodified environments seldomly address the problem of global localization. In this paper we present a novel, vision-based localization method based on the CONDENSATION algorithm, a Bayesian filtering method that uses a sampling-based density representation. We show how the CONDENSATION algorithm can be used in a novel way to track the position of the camera platform rather than tracking an object in the scene. In addition, it can also be used to globally localize the camera platform, given a visual map of the environment. Based on these two observations, we present a vision-based robot localization method that provides a solution to a difficult and open problem in the mobile robotics community.
As evidence for the viability of our approach, we show both global localization and tracking results in the context of a state of the art robotics application." ] }
1312.4664
2950043328
This paper addresses the problem of filtering with a state-space model. Standard approaches for filtering assume that a probabilistic model for observations (i.e. the observation model) is given explicitly or at least parametrically. We consider a setting where this assumption is not satisfied; we assume that the knowledge of the observation model is only provided by examples of state-observation pairs. This setting is important and appears when state variables are defined as quantities that are very different from the observations. We propose Kernel Monte Carlo Filter, a novel filtering method that is focused on this setting. Our approach is based on the framework of kernel mean embeddings, which enables nonparametric posterior inference using the state-observation examples. The proposed method represents state distributions as weighted samples, propagates these samples by sampling, estimates the state posteriors by Kernel Bayes' Rule, and resamples by Kernel Herding. In particular, the sampling and resampling procedures are novel in being expressed using kernel mean embeddings, so we theoretically analyze their behaviors. We reveal the following properties, which are similar to those of corresponding procedures in particle methods: (1) the performance of sampling can degrade if the effective sample size of a weighted sample is small; (2) resampling improves the sampling performance by increasing the effective sample size. We first demonstrate these theoretical findings by synthetic experiments. Then we show the effectiveness of the proposed filter by artificial and real data experiments, which include vision-based mobile robot localization.
There exist related but different problem settings from ours. One situation is that examples for state transitions are also given, and the transition model is to be learned nonparametrically from these examples. For this setting, there are methods based on kernel mean embeddings and Gaussian Processes. The filtering method by @cite_0 @cite_28 is particularly closely related to KMCF, as it also uses Kernel Bayes' Rule. A main difference from KMCF is that it computes forward probabilities by Kernel Sum Rule, which nonparametrically learns the transition model from the state transition examples. While the setting is different from ours, we compare KMCF with this method in our experiments as a baseline.
{ "cite_N": [ "@cite_0", "@cite_28" ], "mid": [ "2950268835", "2952680595" ], "abstract": [ "A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' rule are presented, including Baysian computation without likelihood and filtering with a nonparametric state-space model.", "State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and infer directly the joint smoothing distribution using specially tailored Particle Markov Chain Monte Carlo samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity." ] }
1312.4664
2950043328
This paper addresses the problem of filtering with a state-space model. Standard approaches for filtering assume that a probabilistic model for observations (i.e. the observation model) is given explicitly or at least parametrically. We consider a setting where this assumption is not satisfied; we assume that the knowledge of the observation model is only provided by examples of state-observation pairs. This setting is important and appears when state variables are defined as quantities that are very different from the observations. We propose Kernel Monte Carlo Filter, a novel filtering method that is focused on this setting. Our approach is based on the framework of kernel mean embeddings, which enables nonparametric posterior inference using the state-observation examples. The proposed method represents state distributions as weighted samples, propagates these samples by sampling, estimates the state posteriors by Kernel Bayes' Rule, and resamples by Kernel Herding. In particular, the sampling and resampling procedures are novel in being expressed using kernel mean embeddings, so we theoretically analyze their behaviors. We reveal the following properties, which are similar to those of corresponding procedures in particle methods: (1) the performance of sampling can degrade if the effective sample size of a weighted sample is small; (2) resampling improves the sampling performance by increasing the effective sample size. We first demonstrate these theoretical findings by synthetic experiments. Then we show the effectiveness of the proposed filter by artificial and real data experiments, which include vision-based mobile robot localization.
Another related setting is one in which the observation model itself is given and sampling from it is possible, but computation of its values is expensive or even impossible. Therefore ordinary Bayes' rule cannot be used for filtering. To overcome this limitation, @cite_14 and @cite_10 proposed to apply approximate Bayesian computation (ABC) methods. For each iteration of filtering, these methods generate state-observation pairs from the observation model. They then pick the pairs whose observations are close to the test observation, and regard the states in these pairs as samples from a posterior. Note that these methods are not applicable to our setting, since we do not assume that the observation model is provided. That said, our method may be applied to their setting by generating state-observation examples from the observation model. While such a comparison would be interesting, this paper focuses on comparison among the methods applicable to our setting.
{ "cite_N": [ "@cite_14", "@cite_10" ], "mid": [ "2128306087", "2067100182" ], "abstract": [ "Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown to, empirically, be more accurate w.r.t. the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.", "Measures of genomic similarity are often the basis of flexible statistical analyses, and when based on kernel methods, they provide a powerful platform to take advantage of a broad and deep statistical theory, and a wide range of existing software; see the companion paper for a review of this material [1]. The kernel method converts information – perhaps complex or high-dimensional information – for a pair of subjects to a quantitative value representing either similarity or dissimilarity, with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This approach provides enormous opportunities to enhance genetic analyses by including a wide range of publically-available data as structured kernel ‘prior’ information. Kernel methods are appealing for their generality, yet this generality can make it challenging to formulate measures of similarity that directly address a specific scientific aim, or that are most powerful to detect a specific genetic mechanism. 
Although it is difficult to create a cook book of kernels for genetic studies, useful guidelines can be gleaned from a variety of novel published approaches. We review some novel developments of kernels for specific analyses and speculate on how to build kernels for complex genomic attributes based on publically available data. The creativity of analysts, with rigorous evaluations by applications to real and simulated data, will ultimately provide a much stronger array of kernel ‘tools’ for genetic analyses." ] }
1312.4494
1555111922
We determine the asymptotic behavior of the maximum subgraph density of large random graphs with a prescribed degree sequence. The result applies in particular to the Erdős–Rényi model, where it settles a conjecture of Hajek [IEEE Trans. Inform. Theory 36 (1990) 1398–1414]. Our proof consists in extending the notion of balanced loads from finite graphs to their local weak limits, using unimodularity. This is a new illustration of the objective method described by Aldous and Steele [In Probability on Discrete Structures (2004) 1–72 Springer].
This work is a new illustration of the general principles of the objective method exposed by Aldous and Steele @cite_2 . The latter provides a powerful framework for the unified study of sparse random graphs and has already led to several remarkable results. Two prototypical examples are the celebrated @math limit in the random assignment problem due to Aldous @cite_14 , and the asymptotic enumeration of spanning trees in large graphs by Lyons @cite_21 . Since then, the method has been successfully applied to various other combinatorial enumeration and optimization problems on graphs, including (but not limited to) @cite_17 @cite_11 @cite_15 @cite_25 @cite_18 @cite_5 @cite_26 @cite_0 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_11", "@cite_26", "@cite_21", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2952096314", "1504317671", "2963332076", "1757513105", "2061309095", "2022363238", "1872905819", "2013024470", "2141499077", "", "1597809183" ], "abstract": [ "A h-uniform hypergraph H=(V,E) is called (l,k)-orientable if there exists an assignment of each hyperedge e to exactly l of its vertices such that no vertex is assigned more than k hyperedges. Let H_{n,m,h} be a hypergraph, drawn uniformly at random from the set of all h-uniform hypergraphs with n vertices and m edges. In this paper, we determine the threshold of the existence of a (l,k)-orientation of H_{n,m,h} for k>=1 and h>l>=1, extending recent results motivated by applications such as cuckoo hashing or load balancing with guaranteed maximum load. Our proof combines the local weak convergence of sparse graphs and a careful analysis of a Gibbs measure on spanning subgraphs with degree constraints. It allows us to deal with a much broader class than the uniform hypergraphs.", "Author(s): Aldous, DJ | Abstract: The random assignment (or bipartite matching) problem asks about A_n = min_π ∑_{i=1}^n c(i, π(i)) where (c(i, j)) is a n × n matrix with i.i.d. entries, say with exponential(1) distribution, and the minimum is over permutations π. Mezard and Parisi (1987) used the replica method from statistical physics to argue nonrigorously that EA_n → ζ(2) = π^2/6. Aldous (1992) identified the limit in terms of a matching problem on a limit infinite tree. Here we construct the optimal matching on the infinite tree. This yields a rigorous proof of the ζ(2) limit and of the conjectured limit distribution of edge-costs and their rank-orders in the optimal matching. It also yields the asymptotic essential uniqueness property: every almost-optimal matching coincides with the optimal matching except on a small proportion of edges. © 2001 John Wiley & Sons, Inc. Random Struct.
Alg., 18, 381-418, 2001.", "", "This paper is motivated by two applications, namely i) generalizations of cuckoo hashing, a computationally simple approach to assigning keys to objects, and ii) load balancing in content distribution networks, where one is interested in determining the impact of content replication on performance. These two problems admit a common abstraction: in both scenarios, performance is characterized by the maximum weight of a generalization of a matching in a bipartite graph, featuring node and edge capacities. Our main result is a law of large numbers characterizing the asymptotic maximum weight matching in the limit of large bipartite random graphs, when the graphs admit a local weak limit that is a tree. This result specializes to the two application scenarios, yielding new results in both contexts. In contrast with previous results, the key novelty is the ability to handle edge capacities with arbitrary integer values. An analysis of belief propagation algorithms (BP) with multivariate belief vectors underlies the proof. In particular, we show convergence of the corresponding BP by exploiting monotonicity of the belief vectors with respect to the so-called upshifted likelihood ratio stochastic order. This auxiliary result can be of independent interest, providing a new set of structural conditions which ensure convergence of BP.", "We give new formulas for the asymptotics of the number of spanning trees of a large graph. A special case answers a question of McKay [Europ. J. Combin. 4 149–160] for regular graphs. The general answer involves a quantity for infinite graphs that we call ‘tree entropy’, which we show is a logarithm of a normalized determinant of the graph Laplacian for infinite graphs. Tree entropy is also expressed using random walks. We relate tree entropy to the metric entropy of the uniform spanning forest process on quasi-transitive amenable graphs, extending a result of Burton and Pemantle [Ann. Probab. 
21 1329–1371].", "We apply the objective method of Aldous to the problem of finding the minimum-cost edge cover of the complete graph with random independent and identically distributed edge costs. The limit, as the number of vertices goes to infinity, of the expected minimum cost for this problem is known via a combinatorial approach of Hessler and Wastlund. We provide a proof of this result using the machinery of the objective method and local weak convergence, which was used to prove the (2) limit of the random assignment problem. A proof via the objective method is useful because it provides us with more information on the nature of the edge's incident on a typical root in the minimum-cost edge cover. We further show that a belief propagation algorithm converges asymptotically to the optimal solution. This can be applied in a computational linguistics problem of semantic projection. The belief propagation algorithm yields a near optimal solution with lesser complexity than the known best algorithms designed for optimality in worst-case settings.", "This survey describes a general approach to a class of problems that arise in combinatorial probability and combinatorial optimization. Formally, the method is part of weak convergence theory, but in concrete problems the method has a flavor of its own. A characteristic element of the method is that it often calls for one to introduce a new, infinite, probabilistic object whose local properties inform us about the limiting properties of a sequence of finite problems.", "Using the theory of negative association for measures and the notion of unimodularity for random weak limits of sparse graphs, we establish the validity of the cavity method for counting spanning subgraphs subject to local constraints in asymptotically tree-like graphs. 
Specifically, the normalized logarithm of the associated partition function (free energy) is shown to converge along any sequence of graphs whose random weak limit is a tree, and the limit is directly expressed in terms of the unique solution to a limiting cavity equation. On a Galton–Watson tree, the latter simplifies into a recursive distributional equation which can be solved explicitly. As an illustration, we provide a new asymptotic formula for the maximum size of a b-matching in the Erdős–Renyi random graph with fixed average degree and diverging size, for any . To the best of our knowledge, this is the first time that correlation inequalities and unimodularity are combined together to yield a general proof of uniqueness of Gibbs measures on infinite trees. We believe that a similar argument is applicable to other Gibbs measures than those over spanning subgraphs considered here. © 2012 Wiley Periodicals, Inc. Random Struct. Alg., 2013", "The random assignment problem asks for the minimum-cost perfect matching in the complete n × n bipartite graph Knn with i.i.d. edge weights, say uniform on [0, 1]. In a remarkable work by Aldous [Aldous, D. 2001. The ζ(2) limit in the random assignment problem. RSA18 381--418], the optimal cost was shown to converge to ζ(2) as n → ∞, as conjectured by Mezard and Parisi [Mezard, M., G. Parisi. 1987. On the solution of the random link matching problem. J. Phys.48 1451--1459] through the so-called cavity method. The latter also suggested a nonrigorous decentralized strategy for finding the optimum, which turned out to be an instance of the belief propagation (BP) heuristic discussed by Pearl [Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco]. In this paper we use the objective method to analyze the performance of BP as the size of the underlying graph becomes large. 
Specifically, we establish that the dynamic of BP on Knn converges in distribution as n → ∞ to an appropriately defined dynamic on the Poisson weighted infinite tree, and we then prove correlation decay for this limiting dynamic. As a consequence, we obtain that BP finds an asymptotically correct assignment in O(n^2) time only. This contrasts with both the worst-case upper bound for convergence of BP derived by [Bayati, M., D. Shah, M. Sharma. 2008. Max-product for maximum weight matching: Convergence, correctness, and LP duality. IEEE Trans. Inform. Theory54(3) 1241--1251.] and the best-known computational cost of Θ(n^3) achieved by Edmonds and Karp's algorithm [Edmonds, J., R. Karp. 1972. Theoretical improvements in algorithmic efficiency for network flow problems. J. ACM19 248--264].", "", "The theory of the minimal spanning tree (MST) of a connected graph whose edges are assigned lengths according to independent identically distributed random variables is developed from two directions. First, it is shown how the Tutte polynomial for a connected graph can be used to provide an exact formula for the length of the minimal spanning tree under the model of uniformly distributed edge lengths. Second, it is shown how the theory of local weak convergence provides a systematic approach to the asymptotic theory of the length of the MST and related power sums. Consequences of these investigations include (1) the exact rational determination of the expected length of the MST for the complete graph K_n for 2 ≤ n ≤ 9 and (2) refinements of the results of Penrose (1998) for the MST of the d-cube and results of Beveridge, Frieze, and McDiarmid (1998) and Frieze, Ruszinkó, and Thoma (2000) for graphs with modest expansion properties. In most cases, the results reviewed here have not reached their final form, and they should be viewed as part of work-in-progress." ] }
1312.4026
2950001689
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
There are several single-winner voting rules for which winner determination is known to be @math -hard. These rules include, for example, Dodgson's rule @cite_2 @cite_9 @cite_21 , Young's rule @cite_32 @cite_21 , and Kemeny's rule @cite_2 @cite_22 @cite_14 . For the single-transferable vote rule (STV), the winner determination problem becomes @math -hard if we use the so-called parallel-universes tie-breaking @cite_13 . Many of these hardness results hold even in the sense of parameterized complexity theory (however, there is also a number of fixed-parameter tractability results; see the references above for details).
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_9", "@cite_21", "@cite_32", "@cite_2", "@cite_13" ], "mid": [ "2095940121", "2057573268", "2066051963", "2004181309", "1646465584", "2013372570", "1650619589" ], "abstract": [ "The Kemeny Score problem is central to many applications in the context of rank aggregation. Given a set of permutations (votes) over a set of candidates, one searches for a \"consensus permutation\" that is \"closest\" to the given set of permutations. Computing an optimal consensus permutation is NP-hard. We provide first, encouraging fixed-parameter tractability results for computing optimal scores (that is, the overall distance of an optimal consensus permutation). Our fixed-parameter algorithms employ the parameters \"score of the consensus\", \"maximum distance between two input permutations\", and \"number of candidates\". We extend our results to votes with ties and incomplete votes, thus, in both cases having no longer permutations as input.", "Kemeny proposed a voting scheme which is distinguished by the fact that it is the unique voting scheme that is neutral, consistent, and Condorcet. Bartholdi, Tovey, and Trick showed that determining the winner in Kemeny's system is NP-hard. We provide a stronger lower bound and an upper bound matching the lower bound, namely, we show that determining the winner in Kemeny's system is complete for P||NP, the class of sets solvable via parallel access to NP.", "In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner—a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound—NP-hardness—on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. 
In particular, determining the winner in Carroll's system is complete for parallel access to NP, that is, it is complete for Theta_2^p, for which it becomes the most natural complete problem known. It follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.", "Abstract We show that the two NP-complete problems of Dodgson Score and Young Score have differing computational complexities when the winner is close to being a Condorcet winner. On the one hand, we present an efficient fixed-parameter algorithm for determining a Condorcet winner in Dodgson elections by a minimum number of switches in the votes. On the other hand, we prove that the corresponding problem for Young elections, where one has to delete votes instead of performing switches, is W[2]-complete. In addition, we study Dodgson elections that allow ties between the candidates and give fixed-parameter tractability as well as W[2]-completeness results depending on the cost model for switching ties.", "In 1977 Young proposed a voting scheme that extends the Condorcet Principle based on the fewest possible number of voters whose removal yields a Condorcet winner. We prove that both the winner and the ranking problem for Young elections is complete for P||NP, the class of problems solvable in polynomial time by parallel access to NP. Analogous results for Lewis Carroll's 1876 voting scheme were recently established by In contrast, we prove that the winner and ranking problems in Fishburn's homogeneous variant of Carroll's voting scheme can be solved efficiently by linear programming.", "We show that a voting scheme suggested by Lewis Carroll can be impractical in that it can be computationally prohibitive (specifically, NP-hard) to determine whether any particular candidate has won an election.
We also suggest a class of “impracticality theorems” which say that any fair voting scheme must, in the worst-case, require excessive computation to determine a winner.", "In social choice, a preference function (PF) takes a set of votes (linear orders over a set of alternatives) as input, and produces one or more rankings (also linear orders over the alternatives) as output. Such functions have many applications, for example, aggregating the preferences of multiple agents, or merging rankings (of, say, webpages) into a single ranking. The key issue is choosing a PF to use. One natural and previously studied approach is to assume that there is an unobserved \"correct\" ranking, and the votes are noisy estimates of this. Then, we can use the PF that always chooses the maximum likelihood estimate (MLE) of the correct ranking. In this paper, we define simple ranking scoring functions (SRSFs) and show that the class of neutral SRSFs is exactly the class of neutral PFs that are MLEs for some noise model. We also define composite ranking scoring functions (CRSFs) and show a condition under which these coincide with SRSFs. We study key properties such as consistency and continuity, and consider some example PFs. In particular, we study Single Transferable Vote (STV), a commonly used PF, showing that it is a CRSF but not an SRSF, thereby clarifying the extent to which it is an MLE function. This also gives a new perspective on how ties should be broken under STV. We leave some open questions." ] }
1312.4026
2950001689
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
These hardness results motivated the search for approximation algorithms. There are now very good approximation algorithms for Kemeny's rule @cite_27 @cite_35 @cite_23 and for Dodgson's rule @cite_41 @cite_29 @cite_1 @cite_17 @cite_4 . In both cases the results are, in essence, optimal. For Kemeny's rule there is a polynomial-time approximation scheme @cite_23 and for Dodgson's rule the achieved approximation ratio is optimal under standard complexity-theoretic assumptions @cite_1 (unfortunately, the approximation ratio is not constant but depends logarithmically on the number of candidates). On the other hand, for Young's rule it is known that no good approximation algorithms exist @cite_1 .
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_41", "@cite_29", "@cite_1", "@cite_27", "@cite_23", "@cite_17" ], "mid": [ "2016963000", "2017658482", "", "1516798378", "2024298276", "2091858563", "1986978101", "2123942784" ], "abstract": [ "We consider the following simple algorithm for feedback arc set problem in weighted tournaments --- order the vertices by their weighted indegrees. We show that this algorithm has an approximation guarantee of 5 if the weights satisfy probability constraints (for any pair of vertices u and v, w_{uv} + w_{vu} = 1). Special cases of feedback arc set problem in such weighted tournaments include feedback arc set problem in unweighted tournaments and rank aggregation. Finally, for any constant e > 0, we exhibit an infinite family of (unweighted) tournaments for which the above algorithm (irrespective of how ties are broken) has an approximation ratio of 5 - e.", "In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view - it is NP-hard even to approximate the Dodgson score within sublogarithmic factors - and from the social choice point of view - it fails basic social choice desiderata such as monotonicity and homogeneity. In a previous paper [, SODA 2009] we have asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound.
Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.", "", "In the year 1876 the mathematician Charles Dodgson, who wrote fiction under the now more famous name of Lewis Carroll, devised a beautiful voting system that has long fascinated political scientists. However, determining the winner of a Dodgson election is known to be complete for the Θ_2^p level of the polynomial hierarchy. This implies that unless P=NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates—although the number of voters may still be polynomial in the number of candidates—a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it “knows” that it has found them, and furthermore the algorithm never incorrectly declares a nonwinner to be a winner.", "The voting rules proposed by Dodgson and Young are both designed to find an alternative closest to being a Condorcet winner, according to two different notions of proximity; the score of a given alternative is known to be hard to compute under either rule. In this paper, we put forward two algorithms for approximating the Dodgson score: a combinatorial, greedy algorithm and an LP-based algorithm, both of which yield an approximation ratio of H_{m-1}, where m is the number of alternatives and H_{m-1} is the (m-1)st harmonic number. 
We also prove that our algorithms are optimal within a factor of 2, unless problems in NP have quasi-polynomial-time algorithms. Despite the intuitive appeal of the greedy algorithm, we argue that the LP-based algorithm has an advantage from a social choice point of view. Further, we demonstrate that computing any reasonable approximation of the ranking produced by Dodgson's rule is NP-hard. This result provides a complexity-theoretic explanation of sharp discrepancies that have been observed in the social choice theory literature when comparing Dodgson elections with simpler voting rules. Finally, we show that the problem of calculating the Young score is NP-hard to approximate by any factor. This leads to an inapproximability result for the Young ranking.", "We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the extent of disagreement with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long-standing conjecture of Bang-Jensen and Thomassen and show that unless NP⊆BPP, there is no polynomial time algorithm for the problem of minimum feedback arc set in tournaments.", "We present a polynomial time approximation scheme (PTAS) for the minimum feedback arc set problem on tournaments. A simple weighted generalization gives a PTAS for Kemeny-Young rank aggregation.", "In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections-- attempts to improve the election outcome by such actions as adding/deleting candidates or voters. 
That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks. By constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time." ] }
1312.4026
2950001689
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
The work of @cite_4 and of @cite_17 on approximate winner determination for Dodgson's rule is particularly interesting from our perspective. In the former, the authors advocate treating approximation algorithms for Dodgson's rule as voting rules in their own right and design them to have desirable properties. In the latter, the authors show that a well-established voting rule (so-called Maximin rule) is a reasonable (though not optimal) approximation of Dodgson's rule. This perspective is important for anyone interested in using approximation algorithms for winner determination in elections (as might be the case for our algorithms for the Monroe and Chamberlin--Courant rules).
{ "cite_N": [ "@cite_4", "@cite_17" ], "mid": [ "2017658482", "2123942784" ], "abstract": [ "In 1876 Charles Lutwidge Dodgson suggested the intriguing voting rule that today bears his name. Although Dodgson's rule is one of the most well-studied voting rules, it suffers from serious deficiencies, both from the computational point of view - it is NP-hard even to approximate the Dodgson score within sublogarithmic factors - and from the social choice point of view - it fails basic social choice desiderata such as monotonicity and homogeneity. In a previous paper [, SODA 2009] we have asked whether there are approximation algorithms for Dodgson's rule that are monotonic or homogeneous. In this paper we give definitive answers to these questions. We design a monotonic exponential-time algorithm that yields a 2-approximation to the Dodgson score, while matching this result with a tight lower bound. We also present a monotonic polynomial-time O(log m)-approximation algorithm (where m is the number of alternatives); this result is tight as well due to a complexity-theoretic lower bound. Furthermore, we show that a slight variation of a known voting rule yields a monotonic, homogeneous, polynomial-time O(m log m)-approximation algorithm, and establish that it is impossible to achieve a better approximation ratio even if one just asks for homogeneity. We complete the picture by studying several additional social choice properties; for these properties, we prove that algorithms with an approximation ratio that depends only on m do not exist.", "In 1992, Bartholdi, Tovey, and Trick opened the study of control attacks on elections-- attempts to improve the election outcome by such actions as adding deleting candidates or voters. That work has led to many results on how algorithms can be used to find attacks on elections and how complexity-theoretic hardness results can be used as shields against attacks. 
However, all the work in this line has assumed that the attacker employs just a single type of attack. In this paper, we model and study the case in which the attacker launches a multipronged (i.e., multimode) attack. We do so to more realistically capture the richness of real-life settings. For example, an attacker might simultaneously try to suppress some voters, attract new voters into the election, and introduce a spoiler candidate. Our model provides a unified framework for such varied attacks. By constructing polynomial-time multiprong attack algorithms we prove that for various election systems even such concerted, flexible attacks can be perfectly planned in deterministic polynomial time." ] }
1312.4026
2950001689
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
The hardness of the winner determination problem for the Monroe and Chamberlin--Courant rules has been considered in several papers. Procaccia, Rosenschein and Zohar @cite_18 were the first to show the hardness of these two rules for a particular approval-style dissatisfaction function. Their results were complemented by Lu and Boutilier @cite_25 , Betzler, Slinko and Uhlmann @cite_8 , Yu, Chan, and Elkind @cite_36 , @cite_0 , and Skowron and Faliszewski @cite_33 . These works show hardness for the Borda dissatisfaction function, establish parameterized hardness results for the two rules, and give hardness (or easiness) results for profiles that are single-peaked or single-crossing. Further, Lu and Boutilier @cite_25 initiated the study of approximability for the Chamberlin--Courant rule (and were the first to use the satisfaction-based framework). Specifically, they gave a @math -approximation algorithm for the Chamberlin--Courant rule. Lu and Boutilier's motivation came from recommendation systems and, in that sense, our view of the rules is quite similar to theirs.
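Lu and Boutilier's @cite_25 approximability result for the utilitarian, satisfaction-based Chamberlin--Courant rule rests on a natural greedy algorithm: since total satisfaction is monotone submodular in the committee, repeatedly adding the candidate with the best marginal gain yields a constant-factor guarantee. The sketch below is a hedged illustration only; the function names and the concrete Borda satisfaction function are our assumptions for this sketch, not the authors' code.

```python
def borda_satisfaction(ranking, committee):
    """A voter's satisfaction: the Borda score of their best committee member."""
    m = len(ranking)
    return max(m - 1 - ranking.index(c) for c in committee)


def greedy_chamberlin_courant(rankings, candidates, k):
    """Greedily build a size-k committee maximizing total Borda satisfaction.
    The objective is monotone submodular, so this greedy procedure carries
    the standard (1 - 1/e)-style approximation guarantee."""
    committee = []
    for _ in range(k):
        best_c, best_total = None, -1
        for c in candidates:
            if c in committee:
                continue
            total = sum(borda_satisfaction(r, committee + [c]) for r in rankings)
            if total > best_total:
                best_total, best_c = total, c
        committee.append(best_c)
    return committee
```

With two voters ranking a>b>c>d and one ranking d>c>b>a, the greedy first picks the consensus favorite `a` and then `d` to cover the dissenting voter, illustrating how the rule trades consensus against representation.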
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_8", "@cite_36", "@cite_0", "@cite_25" ], "mid": [ "2131891143", "1620816389", "2124837549", "6335828", "2110060186", "1238745702" ], "abstract": [ "We demonstrate that winner selection in two prominent proportional representation voting systems is a computationally intractable problem—implying that these systems are impractical when the assembly is large. On a different note, in settings where the size of the assembly is constant, we show that the problem can be solved in polynomial time.", "We study approximation algorithms for several variants of the MaxCover problem, with the focus on algorithms that run in FPT time. In the MaxCover problem we are given a set N of elements, a family S of subsets of N, and an integer K. The goal is to find up to K sets from S that jointly cover (i.e., include) as many elements as possible. This problem is well-known to be NP-hard and, under standard complexity-theoretic assumptions, the best possible polynomial-time approximation algorithm has approximation ratio (1 − 1/e). We first consider a variant of MaxCover with bounded element frequencies, i.e., a variant where there is a constant p such that each element belongs to at most p sets in S. For this case we show that there is an FPT approximation scheme (i.e., for each β there is a β-approximation algorithm running in FPT time) for the problem of maximizing the number of covered elements, and a randomized FPT approximation scheme for the problem of minimizing the number of elements left uncovered (we take K to be the parameter). Then, for the case where there is a constant p such that each element belongs to at least p sets from S, we show that the standard greedy approximation algorithm achieves approximation ratio exactly 1 − e^{−max(pK/‖S‖, 1)}. We conclude by considering an unrestricted variant of MaxCover, and show approximation algorithms that run in exponential time and combine an exact algorithm with a greedy approximation. 
Some of our results improve currently known results for MaxVertexCover.", "We investigate two systems of fully proportional representation suggested by Chamberlin & Courant and Monroe. Both systems assign a representative to each voter so that the \"sum of misrepresentations\" is minimized. The winner determination problem for both systems is known to be NP-hard, hence this work aims at investigating whether there are variants of the proposed rules and or specific electorates for which these problems can be solved efficiently. As a variation of these rules, instead of minimizing the sum of misrepresentations, we considered minimizing the maximal misrepresentation introducing effectively two new rules. In the general case these \"minimax\" versions of classical rules appeared to be still NP-hard. We investigated the parameterized complexity of winner determination of the two classical and two new rules with respect to several parameters. Here we have a mixture of positive and negative results: e.g., we proved fixed-parameter tractability for the parameter the number of candidates but fixed-parameter intractability for the number of winners. For single-peaked electorates our results are overwhelmingly positive: we provide polynomial-time algorithms for most of the considered problems. The only rule that remains NP-hard for single-peaked electorates is the classical Monroe rule.", "We study the complexity of electing a committee under several variants of the Chamberlin-Courant rule when the voters' preferences are single-peaked on a tree. We first show that this problem is easy for the egalitarian, or \"minimax\" version of this problem, for arbitrary trees and misrepresentation functions. For the standard (utilitarian) version of this problem we provide an algorithm for an arbitrary misrepresentation function whose running time is polynomial in the input size as long as the number of leaves of the underlying tree is bounded by a constant. 
On the other hand, we prove that our problem remains computationally hard on trees that have bounded degree, diameter, or pathwidth. Finally, we show how to modify Trick's [1989] algorithm to check whether an election is single-peaked on a tree whose number of leaves does not exceed a given parameter λ.", "We study the complexity of winner determination in single-crossing elections under two classic fully proportional representation rules-Chamberlin-Courant's rule and Monroe's rule. Winner determination for these rules is known to be NP-hard for unrestricted preferences. We show that for single-crossing preferences this problem admits a polynomial-time algorithm for Chamberlin-Courant's rule, but remains NP-hard for Monroe's rule. Our algorithm for Chamberlin-Courant's rule can be modified to work for elections with bounded single-crossing width. We then consider elections that are both single-peaked and single-crossing, and develop an efficient algorithm for the egalitarian variant of Monroe's rule for such elections. While [3] have recently presented a polynomial-time algorithm for this rule under single-peaked preferences, our algorithm has considerably better worst-case running time than that of", "We develop a general framework for social choice problems in which a limited number of alternatives can be recommended to an agent population. In our budgeted social choice model, this limit is determined by a budget, capturing problems that arise naturally in a variety of contexts, and spanning the continuum from pure consensus decision making (i.e., standard social choice) to fully personalized recommendation. Our approach applies a form of segmentation to social choice problems-- requiring the selection of diverse options tailored to different agent types--and generalizes certain multiwinner election schemes. 
We show that standard rank aggregation methods perform poorly, and that optimization in our model is NP-complete; but we develop fast greedy algorithms with some theoretical guarantees. Experiments on real-world datasets demonstrate the effectiveness of our algorithms." ] }
1312.4026
2950001689
We study the complexity of (approximate) winner determination under the Monroe and Chamberlin--Courant multiwinner voting rules, which determine the set of representatives by optimizing the total (dis)satisfaction of the voters with their representatives. The total (dis)satisfaction is calculated either as the sum of individual (dis)satisfactions (the utilitarian case) or as the (dis)satisfaction of the worst off voter (the egalitarian case). We provide good approximation algorithms for the satisfaction-based utilitarian versions of the Monroe and Chamberlin--Courant rules, and inapproximability results for the dissatisfaction-based utilitarian versions of them and also for all egalitarian cases. Our algorithms are applicable and particularly appealing when voters submit truncated ballots. We provide experimental evaluation of the algorithms both on real-life preference-aggregation data and on synthetic data. These experiments show that our simple and fast algorithms can in many cases find near-perfect solutions.
In this paper we take the view that the Monroe and Chamberlin--Courant rules are special cases of the following resource allocation problem. The alternatives are shareable resources, each with a certain capacity defined as the maximal number of agents that may share this resource. Each agent has preferences over the resources and is interested in getting exactly one. The goal is to select a predetermined number @math of resources and to find an optimal allocation of these resources (see for details). This provides a unified framework for the two rules and reveals the connection of the proportional representation problem to other resource allocation problems. In particular, it closely resembles multi-unit resource allocation with single-unit demand [Chapter 11] ley-sho:b:multiagent-systems (see also the work of @cite_38 for a survey of the most fundamental issues in multiagent resource allocation theory) and resource allocation with sharable indivisible goods @cite_38 @cite_42 . Below, we point out connections of the Monroe and Chamberlin--Courant rules to several other problems.
{ "cite_N": [ "@cite_38", "@cite_42" ], "mid": [ "1521758236", "1496970431" ], "abstract": [ "The allocation of resources within a system of autonomous agents, that not only havepreferences over alternative allocations of resources but also actively participate in com-puting an allocation, is an exciting area of research at the interface of Computer Scienceand Economics. This paper is a survey of some of the most salient issues in MultiagentResource Allocation. In particular, we review various languages to represent the pref-erences of agents over alternative allocations of resources as well as different measuresof social welfare to assess the overall quality of an allocation. We also discuss pertinentissues regarding allocation procedures and present important complexity results. Ourpresentation of theoretical issues is complemented by a discussion of software packagesfor the simulation of agent-based market places. We also introduce four major applica-tion areas for Multiagent Resource Allocation, namely industrial procurement, sharingof satellite resources, manufacturing control, and grid computing", "We study a particular multiagent resource allocation problem with indivisible, but sharable resources. In our model, the utility of an agent for using a bundle of resources is the difference between the valuation of that bundle and a congestion cost (or delay), a figure formed by adding up the individual congestion costs of each resource in the bundle. The valuation and the delay can be agent-dependent. When the agents that share a resource also share the resource's control, the current users of a resource will require some compensation when a new agent wants to use the resource. We study the existence of distributed protocols that lead to a social optimum. 
Depending on constraints on the valuation functions (mainly modularity), on the delay functions (e.g., convexity), and the structural complexity of the deals between agents, we prove either the existence of some sequences of deals or the convergence of all sequences of deals to a social optimum. When the agents do not have joint control over the resources (i.e., they can use any resource they want), we study the existence of pure Nash equilibria. We provide results for modular valuation functions and relate them to results from the literature on congestion games." ] }
1312.3399
1970140218
We present a scalable set-valued safety-preserving controller for constrained continuous-time linear time-invariant (LTI) systems subject to additive, unknown but bounded disturbance or uncertainty. The approach relies upon a conservative approximation of the discriminating kernel using robust maximal reachable sets---an extension of our earlier work on computation of the viability kernel for high-dimensional systems. Based on ellipsoidal techniques for reachability, a piecewise ellipsoidal algorithm with polynomial complexity is described that under-approximates the discriminating kernel under LTI dynamics. This precomputed piecewise ellipsoidal set is then used online to synthesize a permissive state-feedback safety-preserving controller. The controller is modeled as a hybrid automaton and can be formulated such that under certain conditions the resulting control signal is continuous across its transitions. We show the performance of the controller on a twelve-dimensional flight envelope protection problem for a quadrotor with actuation saturation and unknown wind disturbances.
A classification technique based on support vector machines (SVMs), presented in @cite_35 , approximates the viability kernel and yields an analytical expression of its boundary. A sequential minimal optimization algorithm computes the SVM, which in turn forms a barrier function on or close to the boundary of the viability kernel in the discretized state space. While the method successfully reduces the computational time for the synthesis of control laws when the dimension of the input space is high, its applicability to systems with high-dimensional state spaces is limited. Moreover, the method does not provide any guarantees that the synthesized control laws are safety-preserving.
{ "cite_N": [ "@cite_35" ], "mid": [ "1982613542" ], "abstract": [ "We propose an algorithm which performs a progressive approximation of a viability kernel, iteratively using a classification method. We establish the mathematical conditions that the classification method should fulfil to guarantee the convergence to the actual viability kernel. We study more particularly the use of support vector machines (SVMs) as classification techniques. We show that they make possible to use gradient optimisation techniques to find a viable control at each time step, and over several time steps. This allows us to avoid the exponential growth of the computing time with the dimension of the control space. It also provides simple and efficient control procedures. We illustrate the method with some examples inspired from ecology" ] }
1312.3399
1970140218
We present a scalable set-valued safety-preserving controller for constrained continuous-time linear time-invariant (LTI) systems subject to additive, unknown but bounded disturbance or uncertainty. The approach relies upon a conservative approximation of the discriminating kernel using robust maximal reachable sets---an extension of our earlier work on computation of the viability kernel for high-dimensional systems. Based on ellipsoidal techniques for reachability, a piecewise ellipsoidal algorithm with polynomial complexity is described that under-approximates the discriminating kernel under LTI dynamics. This precomputed piecewise ellipsoidal set is then used online to synthesize a permissive state-feedback safety-preserving controller. The controller is modeled as a hybrid automaton and can be formulated such that under certain conditions the resulting control signal is continuous across its transitions. We show the performance of the controller on a twelve-dimensional flight envelope protection problem for a quadrotor with actuation saturation and unknown wind disturbances.
The notion of approximate bisimulation @cite_21 can be used to construct a discrete abstraction of the continuous state space such that the observed behavior of the corresponding abstract system is ``close'' to that of the original continuous system. Girard, in a series of papers @cite_20 @cite_42 @cite_6 , uses this notion to construct safety-preserving controllers for approximately bisimilar discrete abstractions and proves that the resulting controllers are correct by design for the original systems. The technique is applied to incrementally stable switched systems (for which approximately bisimilar discrete abstractions of arbitrary precision can be constructed) with autonomous or affine dynamics, and safety-preserving switched controllers are synthesized. The abstraction, however, relies on sampling of time and space (i.e., gridding) and therefore its applicability appears to be limited to low-dimensional systems---even when multi-scale abstraction (adaptive gridding) techniques are employed.
{ "cite_N": [ "@cite_21", "@cite_42", "@cite_6", "@cite_20" ], "mid": [ "2074027094", "1980222041", "2962709723", "2122311520" ], "abstract": [ "In this paper, we define the notion of approximate bisimulation relation between two continuous systems. While exact bisimulation requires that the observations of two systems are and remain identical, approximate bisimulation allows the observations to be different provided the distance between them remains bounded by some parameter called precision. Approximate bisimulation relations are conveniently defined as level sets of a so-called bisimulation function which can be characterized using Lyapunov-like differential inequalities. For a class of constrained linear systems, we develop computationally effective characterizations of bisimulation functions that can be interpreted in terms of linear matrix inequalities and optimal values of static games. We derive a method to evaluate the precision of the approximate bisimulation relation between a constrained linear system and its projection. This method has been implemented in a Matlab toolbox: MATISSE. An example of use of the toolbox in the context of safety verification is shown.", "We propose a technique for the synthesis of safety controllers for switched systems using multi-scale abstractions. To this end we build on a recent notion of multi-scale discrete abstractions for incrementally stable switched systems. These abstractions are defined on a sequence of embedded lattices approximating the state-space, the finer ones being used only in a restricted area where fast switching is needed. This makes it possible to deal with fast switching while keeping the number of states in the abstraction at a reasonable level. We present a synthesis algorithm that exploits the specificities of multi-scale abstractions. The abstractions are computed on the fly during controller synthesis. 
The finest scales of the abstraction are effectively explored only when fast switching is needed, that is when the system approaches the unsafe set. We provide experimental results that show drastic improvements of the complexity of controller synthesis using multi-scale abstractions instead of uniform abstractions.", "In this paper, we consider the problem of controller design using approximately bisimilar abstractions with an emphasis on safety and reachability specifications. We propose abstraction-based approaches to controller synthesis for both types of specifications. We start by synthesizing a controller for an approximately bisimilar abstraction. Then, using a concretization procedure, we obtain a controller for our initial system that is proved ''correct by design''. We provide guarantees of performance by giving estimates of the distance of the synthesized controller to the maximal (i.e., the most permissive) safety controller or to the time-optimal reachability controller. Finally, we use these techniques, combined with discrete approximately bisimilar abstractions of switched systems developed recently, for switching controller synthesis.", "This paper deals with the synthesis of state-feedback controllers using approximately bisimilar abstractions with an emphasis on safety problems. Such problems consist in synthesizing a controller that restricts the behaviors of a system so that its outputs remain in some specified safe set. One is usually interested in designing a controller that is as permissive as possible since this enables to ensure, a posteriori, secondary control objectives. Using the natural refinement technique for approximately bisimilar abstractions, a controller for a system can be synthesized from a controller for an abstraction. However, these controllers have some limitations in terms of performances, implementation cost and robustness. 
The main contribution of this paper is a new procedure for the synthesis of controllers for safety specifications using approximately bisimilar abstractions. Given a controller for an abstraction, we describe an approach for synthesizing a controller for the system that does not suffer from the previous limitations. Moreover, we show that if the controller of the abstraction is the maximal controller (i.e. the most permissive) then we can evaluate the distance between the synthesized controller and the maximal controller of the system. This distance can be made arbitrarily small by choosing sufficiently precise abstractions. We apply our results to synthesis problems for switched systems." ] }
1312.2903
1604297979
Finite sample properties of random covariance-type matrices have been the subject of much research. In this paper we focus on the “lower tail” of such a matrix, and prove that it is subgaussian under a simple fourth moment assumption on the one-dimensional marginals of the random vectors. A similar result holds for more general sums of random positive semidefinite matrices, and the (relatively simple) proof uses a variant of the so-called PAC-Bayesian method for bounding empirical processes. We give two applications of the main result. In the first one we obtain a new finite-sample bound for the ordinary least squares estimator in linear regression with random design. Our result is model-free, requires fairly weak moment assumptions and is almost optimal. Our second application is to bounding restricted eigenvalue constants of certain random ensembles with “heavy tails”. These constants are important in the analysis of problems in Compressed Sensing and High Dimensional Statistics, where one recovers a sparse vector from a small number of linear measurements. Our result implies that heavy tails still allow for the fast recovery rates found in efficient methods such as the LASSO and the Dantzig selector. Along the way we strengthen, with a fairly short argument, a recent result of Rudelson and Zhou on the restricted eigenvalue property.
In Statistics and Machine Learning it is natural to assume that the vectors @math are generated randomly by a mechanism that is not under control of the experimenter. One may enquire whether such random ensembles will typically satisfy restricted eigenvalue properties. This problem has been addressed for Gaussian ensembles by @cite_1 and for subgaussian and bounded-coordinate ensembles by Rudelson and Zhou @cite_0 . In both cases it is shown that @math can be bounded in terms of @math for some @math , when @math . We prove here that finite moment assumptions suffice to bound restricted eigenvalues of chosen sets @math . Note that the bounded coordinate case neither implies nor is implied by our results.
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2023982864", "2162312215" ], "abstract": [ "Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. This paper discusses the recently introduced restricted eigenvalue (RE) condition, which is among the most general assumptions on the matrix, guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with sub-Gaussian rows and nontrivial covariance structure, as well as matrices with independent rows, and uniformly bounded entries.", "Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. 
This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs." ] }
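The lower-tail claim discussed in the record above (a fourth-moment assumption suffices for the smallest restricted eigenvalue of a sample covariance to stay bounded away from zero) can be checked numerically. This is a minimal sketch, not code from any of the cited papers; the t_6 design, support size, and sample size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, df = 500, 200, 6          # t_6 entries: heavy-tailed, but finite 4th moment
S = np.arange(10)               # a fixed sparse support of size k = 10

# Heavy-tailed design, scaled so each coordinate has unit variance
# (Var of t_df is df / (df - 2)).
X = rng.standard_t(df, size=(n, p)) / np.sqrt(df / (df - 2))

# Smallest eigenvalue of the sample covariance restricted to S;
# the population value is 1, and the lower tail concentrates near it.
Sigma_S = X[:, S].T @ X[:, S] / n
lam_min = np.linalg.eigvalsh(Sigma_S)[0]
print(round(lam_min, 3))
```

Despite the heavy-tailed coordinates, the restricted smallest eigenvalue stays well away from zero, in line with the theorem's conclusion.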
1312.2903
1604297979
Finite sample properties of random covariance-type matrices have been the subject of much research. In this paper we focus on the “lower tail” of such a matrix, and prove that it is subgaussian under a simple fourth moment assumption on the one-dimensional marginals of the random vectors. A similar result holds for more general sums of random positive semidefinite matrices, and the (relatively simple) proof uses a variant of the so-called PAC-Bayesian method for bounding empirical processes. We give two applications of the main result. In the first one we obtain a new finite-sample bound for the ordinary least squares estimator in linear regression with random design. Our result is model-free, requires fairly weak moment assumptions and is almost optimal. Our second application is to bounding restricted eigenvalue constants of certain random ensembles with “heavy tails”. These constants are important in the analysis of problems in Compressed Sensing and High Dimensional Statistics, where one recovers a sparse vector from a small number of linear measurements. Our result implies that heavy tails still allow for the fast recovery rates found in efficient methods such as the LASSO and the Dantzig selector. Along the way we strengthen, with a fairly short argument, a recent result of Rudelson and Zhou on the restricted eigenvalue property.
Let us note the main differences between this theorem and the results in @cite_1 @cite_0 : our theorem holds for a specific choice of @math -- i.e. it is not uniform over @math with @math -- and uses the "normalized" matrix @math instead of @math . Both differences are related to our moment assumptions, and both turn out not to be problematic in certain scenarios, such as "randomized, RIPless compressed sensing" @cite_26 and statistical regression problems, where one wants to solve one problem instance and uniform guarantees are unnecessary (cf. linearsparse below). We note that the normalization on @math is fairly natural, as it ensures the "unit diagonal" condition in LASSO . We also note that stronger moment assumptions allow for stronger conclusions via the same proof methods; we illustrate this with a simple example.
{ "cite_N": [ "@cite_0", "@cite_26", "@cite_1" ], "mid": [ "2023982864", "2137198385", "2162312215" ], "abstract": [ "Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. This paper discusses the recently introduced restricted eigenvalue (RE) condition, which is among the most general assumptions on the matrix, guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with sub-Gaussian rows and nontrivial covariance structure, as well as matrices with independent rows, and uniformly bounded entries.", "This paper introduces a simple and very general theory of compressive sensing. In this theory, the sensing mechanism simply selects sensing vectors independently at random from a probability distribution F; it includes all standard models-e.g., Gaussian, frequency measurements-discussed in the literature, but also provides a framework for new measurement strategies as well. We prove that if the probability distribution F obeys a simple incoherence property and an isotropy property, one can faithfully recover approximately sparse signals from a minimal number of noisy measurements. The novelty is that our recovery results do not require the restricted isometry property (RIP) to hold near the sparsity level in question, nor a random model for the signal. As an example, the paper shows that a signal with s nonzero entries can be faithfully recovered from about s logn Fourier coefficients that are contaminated with noise.", "Methods based on l1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. 
The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared l2-error of a Lasso estimate decays at the minimax optimal rate k log p n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on l1-relaxations to a much broader class of problems than the case of completely independent or unitary designs." ] }
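The "unit diagonal" normalization and heavy-tailed LASSO recovery discussed in the record above can be sketched with a plain proximal-gradient (ISTA) solver. All parameters here (design distribution, regularization weight, iteration count) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 200, 400, 5

# Heavy-tailed design, then column-normalized so diag(X^T X / n) = 1
# (the "unit diagonal" condition mentioned above).
X = rng.standard_t(6, size=(n, p))
X /= np.linalg.norm(X, axis=0) / np.sqrt(n)

beta = np.zeros(p)
beta[:k] = 1.0                                 # k-sparse ground truth
y = X @ beta + 0.1 * rng.standard_normal(n)

# LASSO via ISTA: minimize (1/2n)||y - Xb||^2 + lam * ||b||_1.
lam = 0.1 * np.sqrt(2 * np.log(p) / n)         # classical noise-level scaling
L = np.linalg.norm(X, 2) ** 2 / n              # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(3000):
    z = b - (X.T @ (X @ b - y) / n) / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

err = np.linalg.norm(b - beta)
print(round(err, 3))
```

Even with t_6 coordinates, the estimator recovers the support and achieves a small error, illustrating that heavy tails need not break the fast LASSO rates.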
1312.3240
2949833008
We propose a method for knowledge transfer between semantically related classes in ImageNet. By transferring knowledge from the images that have bounding-box annotations to the others, our method is capable of automatically populating ImageNet with many more bounding-boxes and even pixel-level segmentations. The underlying assumption that objects from semantically related classes look alike is formalized in our novel Associative Embedding (AE) representation. AE recovers the latent low-dimensional space of appearance variations among image windows. The dimensions of AE space tend to correspond to aspects of window appearance (e.g. side view, close up, background). We model the overlap of a window with an object using Gaussian Processes (GP) regression, which spreads annotation smoothly through AE space. The probabilistic nature of GP allows our method to perform self-assessment, i.e. assigning a quality estimate to its own output. It enables trading off the amount of returned annotations for their quality. A large scale experiment on 219 classes and 0.5 million images demonstrates that our method outperforms state-of-the-art methods and baselines for both object localization and segmentation. Using self-assessment we can automatically return bounding-box annotations for 30% of all images with high localization accuracy (i.e. 73% average overlap with ground-truth).
The work @cite_16 populates ImageNet with segmentations. It propagates ground-truth segmentations from PASCAL VOC @cite_15 onto ImageNet. They use a nearest neighbour technique @cite_27 to transfer segmentations from a given source set to a target image. We compare to this method experimentally (sec. ), by putting a bounding-box over their segmentations.
{ "cite_N": [ "@cite_27", "@cite_15", "@cite_16" ], "mid": [ "2060475276", "2031489346", "2117741877" ], "abstract": [ "We present a novel technique for figure-ground segmentation, where the goal is to separate all foreground objects in a test image from the background. We decompose the test image and all images in a supervised training set into overlapping windows likely to cover foreground objects. The key idea is to transfer segmentation masks from training windows that are visually similar to windows in the test image. These transferred masks are then used to derive the unary potentials of a binary, pairwise energy function defined over the pixels of the test image, which is minimized with standard graph-cuts. This results in a fully automatic segmentation scheme, as opposed to interactive techniques based on similar energy functions. Using windows as support regions for transfer efficiently exploits the training data, as the test image does not need to be globally similar to a training image for the method to work. This enables to compose novel scenes using local parts of training images. Our approach obtains very competitive results on three datasets (PASCAL VOC 2010 segmentation challenge, Weizmann horses, Graz-02).", "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. 
The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "ImageNet is a large-scale hierarchical database of object classes with millions of images.We propose to automatically populate it with pixelwise object-background segmentations, by leveraging existing manual annotations in the form of class labels and bounding-boxes. The key idea is to recursively exploit images segmented so far to guide the segmentation of new images. At each stage this propagation process expands into the images which are easiest to segment at that point in time, e.g. by moving to the semantically most related classes to those segmented so far. The propagation of segmentation occurs both (a) at the image level, by transferring existing segmentations to estimate the probability of a pixel to be foreground, and (b) at the class level, by jointly segmenting images of the same class and by importing the appearance models of classes that are already segmented. Through experiments on 577 classes and 500k images we show that our technique (i) annotates a wide range of classes with accurate segmentations; (ii) effectively exploits the hierarchical structure of ImageNet; (iii) scales efficiently, especially when implemented on superpixels; (iv) outperforms a baseline GrabCut ( 2004) initialized on the image center, as well as segmentation transfer from a fixed source pool and run independently on each target image (Kuettel and Ferrari 2012). Moreover, our method also delivers state-of-the-art results on the recent iCoseg dataset for co-segmentation." ] }
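The window-based segmentation transfer described in the record above (transfer masks from visually similar source windows to a target) can be sketched as a nearest-neighbour mask vote. The feature dimension, window size, and value of k below are hypothetical, chosen only to keep the sketch self-contained:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy source set: window descriptors with associated binary masks.
n_src, d, h, w = 50, 16, 8, 8
src_feats = rng.standard_normal((n_src, d))
src_masks = rng.random((n_src, h, w)) > 0.5

def transfer_mask(target_feat, k=5):
    """Vote the masks of the k nearest source windows (L2 distance in
    feature space) and threshold at 0.5 -- the core of window-level
    segmentation transfer."""
    dist = np.linalg.norm(src_feats - target_feat, axis=1)
    nn = np.argsort(dist)[:k]
    return src_masks[nn].mean(axis=0) >= 0.5

mask = transfer_mask(rng.standard_normal(d))
print(mask.shape)
```

A real system would use learned appearance features and aggregate transferred masks into a graph-cut energy, but the neighbour-lookup-and-vote structure is the same.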
1312.3240
2949833008
We propose a method for knowledge transfer between semantically related classes in ImageNet. By transferring knowledge from the images that have bounding-box annotations to the others, our method is capable of automatically populating ImageNet with many more bounding-boxes and even pixel-level segmentations. The underlying assumption that objects from semantically related classes look alike is formalized in our novel Associative Embedding (AE) representation. AE recovers the latent low-dimensional space of appearance variations among image windows. The dimensions of AE space tend to correspond to aspects of window appearance (e.g. side view, close up, background). We model the overlap of a window with an object using Gaussian Processes (GP) regression, which spreads annotation smoothly through AE space. The probabilistic nature of GP allows our method to perform self-assessment, i.e. assigning a quality estimate to its own output. It enables trading off the amount of returned annotations for their quality. A large scale experiment on 219 classes and 0.5 million images demonstrates that our method outperforms state-of-the-art methods and baselines for both object localization and segmentation. Using self-assessment we can automatically return bounding-box annotations for 30% of all images with high localization accuracy (i.e. 73% average overlap with ground-truth).
Transfer learning is used in computer vision to facilitate learning a new target class with the help of labelled examples from related source classes. Transfer is typically done through regularization of model parameters @cite_3 @cite_1 , an intermediate attribute layer @cite_12 (e.g. yellow, furry), or by sharing parts @cite_17 . In GP @cite_28 transfer learning is usually based on sharing hyper-parameters between tasks. In this work we not only share hyper-parameters, but the as well. Also, our GP kernel is defined over an augmented AE space @math , which is constructed specifically for a particular combination of source and target classes. In principle, one could view AE as a kernel learning method for GP, which exploits the specifics of visual data.
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_3", "@cite_12", "@cite_17" ], "mid": [ "2119595900", "2122156965", "2155904486", "2134270519", "1968933322" ], "abstract": [ "In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a \"free-form\" covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.", "Learning object categories from small samples is a challenging problem, where machine learning tools can in general provide very few guarantees. Exploiting prior knowledge may be useful to reproduce the human capability of recognizing objects even from only one single view. This paper presents an SVM-based model adaptation algorithm able to select and weight appropriately prior knowledge coming from different categories. The method relies on the solution of a convex optimization problem which ensures to have the minimal leave-one-out error on the training set. 
Experiments on a subset of the Caltech-256 database show that the proposed method produces better results than both choosing one single prior model, and transferring from all previous experience in a flat uninformative way.", "Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum-likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.", "We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. 
This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.", "The deformable part-based model (DPM) proposed by has demonstrated state-of-the-art results in object localization. The model offers a high degree of learnt invariance by utilizing viewpoint-dependent mixture components and movable parts in each mixture component. One might hope to increase the accuracy of the DPM by increasing the number of mixture components and parts to give a more faithful model, but limited training data prevents this from being effective. We propose an extension to the DPM which allows for sharing of object part models among multiple mixture components as well as object classes. 
This results in more compact models and allows training examples to be shared by multiple components, ameliorating the effect of a limited size training set. We (i) reformulate the DPM to incorporate part sharing, and (ii) propose a novel energy function allowing for coupled training of mixture components and object classes. We report state-of-the-art results on the PASCAL VOC dataset." ] }
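The GP-regression-with-self-assessment idea in the record above (predict window-object overlap in an embedding space and use the posterior variance as a quality estimate) can be sketched with a plain RBF-kernel GP. The 2-D toy embedding and synthetic overlap signal below stand in for the AE space and real annotations:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Toy stand-in for the AE space: 2-D embeddings of annotated windows,
# each with a ground-truth overlap value in [0, 1].
X = rng.uniform(-2, 2, size=(40, 2))
y = np.exp(-np.sum(X ** 2, axis=1))        # synthetic overlap signal

sigma2 = 1e-2                              # observation noise variance
K = rbf(X, X) + sigma2 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def predict(Xstar):
    """GP posterior mean and variance; the variance is the quality
    estimate that drives self-assessment."""
    Ks = rbf(Xstar, X)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# One query inside the annotated region, one far outside it.
m, v = predict(np.array([[0.0, 0.0], [6.0, 6.0]]))
print(np.round(m, 2), np.round(v, 2))
```

The far-away query gets a near-prior variance, so a system can decline to return annotations there, which is exactly the quality/quantity trade-off the abstract describes.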