Dataset schema: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict).
1702.00477
2587072014
One of the most common approaches for multiobjective optimization is to generate a solution set that well approximates the whole Pareto-optimal frontier to facilitate the later decision-making process. However, how to evaluate and compare the quality of different solution sets remains challenging. Existing measures typically require additional problem knowledge and information, such as a reference point or a substituted set of the Pareto-optimal frontier. In this paper, we propose a quality measure, called dominance move (DoM), to compare solution sets generated by multiobjective optimizers. Given two solution sets, DoM measures the minimum sum of move distances for one set to weakly Pareto dominate the other set. DoM can be seen as a natural reflection of the difference between two solution sets, capturing all aspects of solution sets' quality, complying with Pareto dominance, and requiring no additional problem knowledge or parameters. We present an exact method to calculate the DoM in the biobjective case. We show the necessary condition of constructing the optimal partition for a solution set's minimum move, and accordingly propose an efficient algorithm to recursively calculate the DoM. Finally, DoM is evaluated on several groups of artificial and real test cases as well as by a comparison with two well-established quality measures.
The relation between solutions can be naturally extended to solution sets @cite_21 . Let @math be two solution sets. Solution set @math is said to @math (denoted as @math ) if every solution @math is weakly dominated by at least one solution @math . If for every solution @math there exists at least one solution @math that dominates @math , we say that @math dominates @math (denoted as @math ). Note that the weak dominance relation between two sets does not rule out their equality, whereas the dominance relation rules it out completely. There thus exists a third situation, in which @math weakly dominates but does not equal @math : every solution in @math is weakly dominated by some solution in @math , but at least one solution in @math is not weakly dominated by any solution in @math (i.e., @math ). This relation represents the most general and weakest form of superiority between two solution sets, and was defined as @math being @math (denoted as @math ) in @cite_21 . Put simply, @math means that @math is at least as good as @math , while @math is not as good as @math .
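The set-level relations above can be made concrete in a few lines of code. The following sketch (function and variable names are illustrative, not from the paper) checks weak set dominance and the weakest-superiority relation for a minimization problem:

```python
def weakly_dominates(p, q):
    # Point p weakly dominates point q: p is no worse in every objective.
    return all(pi <= qi for pi, qi in zip(p, q))

def set_weakly_dominates(A, B):
    # Every solution in B is weakly dominated by at least one solution in A.
    return all(any(weakly_dominates(a, b) for a in A) for b in B)

def set_better(A, B):
    # A weakly dominates B, but B does not weakly dominate A:
    # the weakest form of superiority between two sets.
    return set_weakly_dominates(A, B) and not set_weakly_dominates(B, A)

A = [(1.0, 3.0), (2.0, 2.0)]
B = [(1.5, 3.0), (3.0, 3.0)]
print(set_better(A, B))  # True: A covers B, B does not cover A
print(set_better(A, A))  # False: weak dominance alone does not rule out equality
```

Note that `set_better(A, A)` is false precisely because weak dominance does not exclude equality, matching the distinction drawn above.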
{ "cite_N": [ "@cite_21" ], "mid": [ "2098907614" ], "abstract": [ "An important issue in multiobjective optimization is the quantitative comparison of the performance of different algorithms. In the case of multiobjective evolutionary algorithms, the outcome is usually an approximation of the Pareto-optimal set, which is denoted as an approximation set, and therefore the question arises of how to evaluate the quality of approximation sets. Most popular are methods that assign each approximation set a vector of real numbers that reflect different aspects of the quality. Sometimes, pairs of approximation sets are also considered. In this study, we provide a rigorous analysis of the limitations underlying this type of quality assessment. To this end, a mathematical framework is developed which allows one to classify and discuss existing techniques." ] }
@cite_32 introduced a quality measure based on the integrated preference functional (IPF), which does not require knowledge of the Pareto-optimal frontier. This measure uses partial information on the decision maker's value function and was designed for biobjective optimization problems. Later, @cite_36 extended the IPF measure to general @math -objective optimization problems. However, since the decision maker's value function was represented as a convex combination of objectives, only supported points in a solution set contribute to the IPF result. To address this issue, @cite_54 adopted the weighted Tchebycheff function as the value function in IPF, which makes the evaluation result reflect unsupported points as well. This modification introduces an additional parameter, the ideal point (needed for the calculation of the Tchebycheff function), which may affect the evaluation result to some extent.
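The difference between the two value-function choices can be illustrated with a small sketch (the points, weights, and ideal point below are hypothetical, chosen only for illustration). Under a convex combination of objectives an unsupported nondominated point can never be optimal, whereas the weighted Tchebycheff function can prefer it:

```python
def weighted_sum(point, weights):
    # Convex combination of objectives (minimization).
    return sum(w * f for w, f in zip(weights, point))

def tchebycheff(point, weights, ideal):
    # Weighted Tchebycheff distance to the ideal point (minimization).
    return max(w * abs(f - z) for w, f, z in zip(weights, point, ideal))

# C is nondominated but lies above the line through A and B, i.e. unsupported.
A, B, C = (1.0, 3.0), (3.0, 1.0), (2.0, 2.1)
w, ideal = (0.5, 0.5), (1.0, 1.0)

# A convex combination never selects C; the Tchebycheff function does.
print(min((weighted_sum(p, w), p) for p in (A, B, C))[1])        # A or B, never C
print(min((tchebycheff(p, w, ideal), p) for p in (A, B, C))[1])  # C
```

This is exactly why the Tchebycheff-based IPF lets unsupported points contribute to the evaluation result.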
{ "cite_N": [ "@cite_36", "@cite_54", "@cite_32" ], "mid": [ "", "2103640719", "2002895030" ], "abstract": [ "", "We consider the problem of evaluating the quality of solution sets generated by heuristics for multiple-objective combinatorial optimization problems. We extend previous research on the integrated preference functional (IPF), which assigns a scalar value to a given discrete set of nondominated points so that the weighted Tchebycheff function can be used as the underlying implicit value function. This extension is useful because modeling the decision maker's value function with the weighted Tchebycheff function reflects the impact of unsupported points when evaluating sets of nondominated points. We present an exact calculation method for the IPF measure in this case for an arbitrary number of criteria. We show that every nondominated point has its optimal weight interval for the weighted Tchebycheff function. Accordingly, all nondominated points, and not only the supported points in a set, contribute to the value of the IPF measure when using the weighted Tchebycheff function. Two-and three-criteria numerical examples illustrate the desirable properties of the weighted Tchebycheff function, providing a richer measure than the original IPF based on a convex combination of objectives.", "We present the Integrated Preference Functional (IPF) for comparing the quality of proposed sets of near-pareto-optimal solutions to bi-criteria optimization problems. Evaluating the quality of such solution sets is one of the key issues in developing and comparing heuristics for multiple objective combinatorial optimization problems. The IPF is a set functional that, given a weight density function provided by a decision maker and a discrete set of solutions for a particular problem, assigns a numerical value to that solution set. 
This value can be used to compare the quality of different sets of solutions, and therefore provides a robust, quantitative approach for comparing different heuristic, a posteriori solution procedures for difficult multiple objective optimization problems. We provide specific examples of decision maker preference functions and illustrate the calculation of the resulting IPF for specific solution sets and a simple family of combined objectives." ] }
As pointed out by @cite_21 , the @math indicator has some desirable features, such as requiring no reference set, complying with the Pareto dominance relation, and representing a natural extension of the evaluation of approximation schemes in operations research and theory. However, one weakness of the @math indicator is that its evaluation result is determined by only one particular solution in either solution set. This can lead to an inaccurate quality comparison between solution sets. Figure gives an example in which the @math indicator fails to distinguish between two solution sets ( @math and @math ). As can be seen from the figure, @math has more solutions and better coverage of the Pareto-optimal frontier than @math , yet the two sets obtain the same comparison result ( @math ).
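This weakness can be reproduced with the additive binary epsilon indicator (used here as a concrete instance of such an indicator; the sets below are hypothetical): adding solutions that improve coverage does not change the indicator value, because a single solution pair determines the result.

```python
def eps_indicator(A, B):
    # Additive binary epsilon indicator (minimization): the smallest eps
    # such that shifting every solution of A by eps makes A weakly dominate B.
    return max(min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
               for b in B)

B = [(2.0, 2.0)]
A1 = [(1.0, 1.0)]                          # a single solution
A2 = [(1.0, 1.0), (0.5, 2.0), (2.0, 0.5)]  # better coverage of the frontier

# Both sets obtain the same value: the extra solutions in A2 are invisible.
print(eps_indicator(A1, B), eps_indicator(A2, B))  # -1.0 -1.0
```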
{ "cite_N": [ "@cite_21" ], "mid": [ "2098907614" ], "abstract": [ "An important issue in multiobjective optimization is the quantitative comparison of the performance of different algorithms. In the case of multiobjective evolutionary algorithms, the outcome is usually an approximation of the Pareto-optimal set, which is denoted as an approximation set, and therefore the question arises of how to evaluate the quality of approximation sets. Most popular are methods that assign each approximation set a vector of real numbers that reflect different aspects of the quality. Sometimes, pairs of approximation sets are also considered. In this study, we provide a rigorous analysis of the limitations underlying this type of quality assessment. To this end, a mathematical framework is developed which allows one to classify and discuss existing techniques." ] }
The hypervolume (HV) metric @cite_20 is one of the most popular quality measures in multiobjective optimization. It calculates the volume of the objective space enclosed by a solution set and a reference point; a larger value is preferable. The HV of a solution set @math can be described as the Lebesgue measure @math of the union of hypercubes @math defined by @math and the reference point @math : where @math ( @math for all @math ).
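In the biobjective case this definition reduces to summing rectangle slices. A minimal sketch, assuming a minimization problem and a nondominated input set, might look like:

```python
def hv_2d(points, ref):
    # Hypervolume of a biobjective set w.r.t. reference point `ref`
    # (minimization). After sorting by the first objective, a nondominated
    # set has decreasing second objective, so each point contributes one
    # rectangular slice. Dominated points would be miscounted here.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    for i, (f1, f2) in enumerate(pts):
        next_f1 = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

print(hv_2d([(1.0, 2.0), (2.0, 1.0)], ref=(3.0, 3.0)))  # 3.0
```

The sweep runs in O(n log n) for n points; it is in higher dimensions that exact HV computation becomes expensive.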
{ "cite_N": [ "@cite_20" ], "mid": [ "2106334424" ], "abstract": [ "Evolutionary algorithms (EAs) are often well-suited for optimization problems involving several, often conflicting objectives. Since 1985, various evolutionary approaches to multiobjective optimization have been developed that are capable of searching for multiple solutions concurrently in a single run. However, the few comparative studies of different methods presented up to now remain mostly qualitative and are often restricted to a few approaches. In this paper, four multiobjective EAs are compared quantitatively where an extended 0/1 knapsack problem is taken as a basis. Furthermore, we introduce a new evolutionary approach to multicriteria optimization, the strength Pareto EA (SPEA), that combines several features of previous multiobjective EAs in a unique manner. It is characterized by (a) storing nondominated solutions externally in a second, continuously updated population, (b) evaluating an individual's fitness dependent on the number of external nondominated points that dominate it, (c) preserving population diversity using the Pareto dominance relationship, and (d) incorporating a clustering procedure in order to reduce the nondominated set without destroying its characteristics. The proof-of-principle results obtained on two artificial problems as well as a larger problem, the synthesis of a digital hardware-software multiprocessor system, suggest that SPEA can be very effective in sampling from along the entire Pareto-optimal front and distributing the generated solutions over the tradeoff surface. Moreover, SPEA clearly outperforms the other four multiobjective EAs on the 0/1 knapsack problem." ] }
The HV indicator has good theoretical properties @cite_21 and gives a comprehensive evaluation of a solution set in terms of both convergence and diversity. Although the computational cost of calculating HV increases exponentially with the number of objectives, Monte Carlo sampling can provide a good balance between accuracy and running time @cite_44 @cite_13 . However, the HV indicator is sensitive to the choice of the reference point. Choosing a proper reference point is not trivial, and different reference points can lead to inconsistent evaluation results @cite_38 . Take the two solution sets ( @math and @math ) in Figure as an example: when the reference point is set to @math ( Figure (a) ), @math ; when the reference point is @math ( Figure (b) ), @math .
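The Monte Carlo approach mentioned above can be sketched as follows (a simplified illustration, not the HypE or MO-CMA-ES implementation; the sampling box and point set are hypothetical): sample uniformly in the box spanned by an ideal point and the reference point, and count the fraction of samples dominated by the set.

```python
import random

def hv_monte_carlo(points, ref, ideal, n_samples=100_000, seed=0):
    # Estimate the hypervolume (minimization) by uniform sampling in the
    # box [ideal, ref]; the hit fraction times the box volume converges
    # to the exact value as n_samples grows.
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in zip(ideal, ref):
        box_vol *= hi - lo
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(lo, hi) for lo, hi in zip(ideal, ref)]
        if any(all(pi <= si for pi, si in zip(p, s)) for p in points):
            hits += 1
    return box_vol * hits / n_samples

# The exact HV of this set w.r.t. (3, 3) is 3.0; the estimate lands nearby.
print(hv_monte_carlo([(1.0, 2.0), (2.0, 1.0)], ref=(3.0, 3.0), ideal=(0.0, 0.0)))
```

The per-sample cost is linear in the set size rather than exponential in the number of objectives, which is exactly the trade-off the HypE work exploits.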
{ "cite_N": [ "@cite_44", "@cite_38", "@cite_21", "@cite_13" ], "mid": [ "2108968575", "1521316934", "2098907614", "2009691097" ], "abstract": [ "In the field of evolutionary multi-criterion optimization, the hypervolume indicator is the only single set quality measure that is known to be strictly monotonic with regard to Pareto dominance: whenever a Pareto set approximation entirely dominates another one, then the indicator value of the dominant set will also be better. This property is of high interest and relevance for problems involving a large number of objective functions. However, the high computational effort required for hypervolume calculation has so far prevented the full exploitation of this indicator's potential; current hypervolume-based search algorithms are limited to problems with only a few objectives. This paper addresses this issue and proposes a fast search algorithm that uses Monte Carlo simulation to approximate the exact hypervolume values. The main idea is not that the actual indicator values are important, but rather that the rankings of solutions induced by the hypervolume indicator. In detail, we present HypE, a hypervolume estimation algorithm for multi-objective optimization, by which the accuracy of the estimates and the available computing resources can be traded off; thereby, not only do many-objective problems become feasible with hypervolume-based search, but also the runtime can be flexibly adapted. Moreover, we show how the same principle can be used to statistically compare the outcomes of different multi-objective optimizers with respect to the hypervolume----so far, statistical testing has been restricted to scenarios with few objectives. The experimental results indicate that HypE is highly effective for many-objective problems in comparison to existing multi-objective evolutionary algorithms. 
HypE is available for download at http://www.tik.ee.ethz.ch/sop/download/supplementary/hype/ .", "Evolutionary multiobjective optimization (EMO) boasts a proliferation of algorithms and benchmark problems. We need principled ways to compare the performance of different EMO algorithms, but this is complicated by the fact that the result of an EMO run is not a single scalar value, but a collection of vectors forming a nondominated set. Various metrics for nondominated sets have been suggested. We compare several, using the framework of 'outperformance relations' (Hansen and Jaszkiewicz, 1998). This enables us to criticize and contrast a variety of published metrics, leading to some recommendations on which seem most useful in practice.", "An important issue in multiobjective optimization is the quantitative comparison of the performance of different algorithms. In the case of multiobjective evolutionary algorithms, the outcome is usually an approximation of the Pareto-optimal set, which is denoted as an approximation set, and therefore the question arises of how to evaluate the quality of approximation sets. Most popular are methods that assign each approximation set a vector of real numbers that reflect different aspects of the quality. Sometimes, pairs of approximation sets are also considered. In this study, we provide a rigorous analysis of the limitations underlying this type of quality assessment. To this end, a mathematical framework is developed which allows one to classify and discuss existing techniques.", "Many state-of-the-art evolutionary vector optimization algorithms compute the contributing hypervolume for ranking candidate solutions. However, with an increasing number of objectives, calculating the volumes becomes intractable. Therefore, although hypervolume-based algorithms are often the method of choice for bi-criteria optimization, they are regarded as not suitable for many-objective optimization.
Recently, Monte Carlo methods have been derived and analyzed for approximating the contributing hypervolume. Turning theory into practice, we employ these results in the ranking procedure of the multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) as an example of a state-of-the-art method for vector optimization. It is empirically shown that the approximation does not impair the quality of the obtained solutions given a budget of objective function evaluations, while considerably reducing the computation time in the case of multiple objectives. These results are obtained on common benchmark functions as well as on two design optimization tasks. Thus, employing Monte Carlo approximations makes hypervolume-based algorithms applicable to many-objective optimization." ] }
1702.00844
2062539278
A recent development in radio astronomy is to replace traditional dishes with many small antennas. The signals are combined to form one large, virtual telescope. The enormous data streams are cross-correlated to filter out noise. This is especially challenging, since the computational demands grow quadratically with the number of data streams. Moreover, the correlator is not only computationally intensive, but also very I/O intensive. The LOFAR telescope, for instance, will produce over 100 terabytes per day. The future SKA telescope will even require in the order of exaflops, and petabits/s of I/O. A recent trend is to correlate in software instead of dedicated hardware, to increase flexibility and to reduce development efforts. We evaluate the correlator algorithm on multi-core CPUs and many-core architectures, such as NVIDIA and ATI GPUs, and the Cell B.E. The correlator is a streaming, real-time application, and is much more I/O intensive than applications that are typically implemented on many-core hardware today. We compare with the LOFAR production correlator on an IBM Blue Gene/P supercomputer. We investigate performance, power efficiency, and programmability. We identify several important architectural problems which cause architectures to perform suboptimally. Our findings are applicable to data-intensive applications in general. The processing power and memory bandwidth of current GPUs are highly imbalanced for correlation purposes. While the production correlator on the Blue Gene/P achieves a superb 96% of the theoretical peak performance, this is only 16% on ATI GPUs, and 32% on NVIDIA GPUs. The Cell B.E. processor, in contrast, achieves an excellent 92%. We found that the Cell B.E. and NVIDIA GPUs are the most energy-efficient solutions; they run the correlator at least 4 times more energy efficiently than the Blue Gene/P. The research presented is an important pathfinder for next-generation telescopes.
Intel's 80-core Terascale Processor @cite_20 was the first generally programmable microprocessor to break the teraflops barrier. It has a good flops/Watt ratio, making it an interesting candidate for future correlators.
{ "cite_N": [ "@cite_20" ], "mid": [ "1964057018" ], "abstract": [ "Intel's 80-core terascale processor was the first generally programmable microprocessor to break the Teraflops barrier. The primary goal for the chip was to study power management and on-die communication technologies. When announced in 2007, it received a great deal of attention for running a stencil kernel at 1.0 single precision TFLOPS while using only 97 Watts. The literature about the chip, however, focused on the hardware, saying little about the software environment or the kernels used to evaluate the chip. This paper completes the literature on the 80-core terascale processor by fully defining the chip's software environment. We describe the instruction set, the programming environment, the kernels written for the chip, and our experiences programming this microprocessor. We close by discussing the lessons learned from this project and what it implies for future message passing, network-on-a-chip processors." ] }
Intel's Larrabee @cite_23 (to be released) is another promising architecture. Larrabee will be a hybrid between a GPU and a multi-core CPU. It will be compatible with the x86 architecture, but will add 4-way simultaneous multi-threading, 512-bit wide vector units, shuffle and multiply-add instructions, and special texturing hardware. Larrabee will use in-order execution and will have coherent caches. Unlike current GPUs, but similar to the Cell B.E., Larrabee will have a ring bus for communication between cores and for memory transactions.
{ "cite_N": [ "@cite_23" ], "mid": [ "2169150396" ], "abstract": [ "This paper presents a many-core visual computing architecture code named Larrabee, a new software rendering pipeline, a manycore programming model, and performance analysis for several applications. Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processor unit, as well as some fixed function logic blocks. This provides dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads. It also greatly increases the flexibility and programmability of the architecture as compared to standard GPUs. A coherent on-die 2nd level cache allows efficient inter-processor communication and high-bandwidth local data access by CPU cores. Task scheduling is performed entirely with software in Larrabee, rather than in fixed function logic. The customizable software graphics rendering pipeline for this architecture uses binning in order to reduce required memory bandwidth, minimize lock contention, and increase opportunities for parallelism relative to standard GPUs. The Larrabee native programming model supports a variety of highly parallel applications that use irregular data structures. Performance analysis on those applications demonstrates Larrabee's potential for a broad range of parallel computation." ] }
Another interesting architecture for implementing correlators is the FPGA @cite_21 . LOFAR's on-station correlators are also implemented with FPGAs. FPGA solutions combine good performance with flexibility. A disadvantage is that FPGAs are relatively difficult to program efficiently. Moreover, we want to run more than just the correlator on our hardware. LOFAR is the first of a new generation of software telescopes, and how the processing is best done is still a topic of research, both in astronomy and in computer science. We already perform the initial processing steps on FPGAs, but find that this solution is not flexible enough for the rest of the pipeline. For LOFAR, twelve different processing pipelines are currently planned. For example, we would like to perform instrument calibration and pulsar detection online on the same hardware, before storing the data to disk. We even need to support multiple different observations simultaneously. Together, these requirements demand enormous flexibility from the processing solution. We therefore restrict ourselves to many-cores, and leave application-specific instructions and FPGAs as future work. Once the processing pipelines are fully understood, future instruments, such as the SKA, will likely use ASICs.
{ "cite_N": [ "@cite_21" ], "mid": [ "2156412335" ], "abstract": [ "This paper describes a correlator that is optimized for the Xilinx Virtex-4 SX FPGA, and its application in the SKAMP radio telescope at the Molonglo Radio Observatory. The digital backend of the SKAMP telescope consists of more than 800 Virtex-4 FPGAs. Correlation is performed between each and every pairing of antenna inputs, so the SKAMP telescope, with its 384 inputs, has approximately 74,000 antenna correlations; with 100 MHz of input bandwidth from each antenna this requires real-time processing of more than 7 tera complex multiply-accumulates per second. The correlation cell described takes advantage of the hard IP blocks found within the Virtex-4 FPGA to perform one 4+4-bit complex correlation per cycle at a clock rate exceeding 256 MHz. At the core of each cell is an efficient 4-bit signed complex multiplier, implemented using the 18-bit signed multiplier of the Virtex-4 DSP slice, and a short term accumulator, implemented using the adjacent Block RAM. Nearly 30,000 correlation cells are instantiated across 192 Virtex-4 SX35 devices in order to process all the data from the SKAMP telescope." ] }
1702.00844
2062539278
A recent development in radio astronomy is to replace traditional dishes with many small antennas. The signals are combined to form one large, virtual telescope. The enormous data streams are cross-correlated to filter out noise. This is especially challenging, since the computational demands grow quadratically with the number of data streams. Moreover, the correlator is not only computationally intensive, but also very I/O intensive. The LOFAR telescope, for instance, will produce over 100 terabytes per day. The future SKA telescope will even require in the order of exaflops, and petabits/s of I/O. A recent trend is to correlate in software instead of dedicated hardware, to increase flexibility and to reduce development efforts. We evaluate the correlator algorithm on multi-core CPUs and many-core architectures, such as NVIDIA and ATI GPUs, and the Cell B.E. The correlator is a streaming, real-time application, and is much more I/O intensive than applications that are typically implemented on many-core hardware today. We compare with the LOFAR production correlator on an IBM Blue Gene/P supercomputer. We investigate performance, power efficiency, and programmability. We identify several important architectural problems which cause architectures to perform suboptimally. Our findings are applicable to data-intensive applications in general. The processing power and memory bandwidth of current GPUs are highly imbalanced for correlation purposes. While the production correlator on the Blue Gene/P achieves a superb 96% of the theoretical peak performance, this is only 16% on ATI GPUs, and 32% on NVIDIA GPUs. The Cell B.E. processor, in contrast, achieves an excellent 92%. We found that the Cell B.E. and NVIDIA GPUs are the most energy-efficient solutions, they run the correlator at least 4 times more energy efficiently than the Blue Gene/P. The research presented is an important pathfinder for next-generation telescopes.
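The quadratic growth mentioned above follows directly from the number of station pairs a correlator must process; a minimal sketch in plain Python, using the figures from the abstracts above (384 SKAMP inputs, roughly 74,000 correlations):

```python
def num_correlations(n_stations: int) -> int:
    """Number of station pairs to correlate, including each station
    paired with itself (autocorrelations): n * (n + 1) / 2."""
    return n_stations * (n_stations + 1) // 2

# SKAMP: 384 inputs -> approximately 74,000 correlations
print(num_correlations(384))   # 73920

# Doubling the number of stations roughly quadruples the work:
print(num_correlations(768))   # 295296
```

This is why the I/O and compute demands of next-generation telescopes outpace the linear growth in antenna count.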
@cite_2 describe an auto-tuning framework for multi-cores. The framework can automatically perform different low-level optimizations to increase performance. However, GPUs are not considered in this framework. We performed all optimizations manually, which is possible in our case, since the algorithm is relatively straightforward. More importantly, we found that in our case algorithmic changes are required to achieve good performance. Examples include the use of different tile sizes, and vectorizing over the different polarizations instead of the inner time loop.
{ "cite_N": [ "@cite_2" ], "mid": [ "1965447552" ], "abstract": [ "We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications." ] }
1702.00844
2062539278
A recent development in radio astronomy is to replace traditional dishes with many small antennas. The signals are combined to form one large, virtual telescope. The enormous data streams are cross-correlated to filter out noise. This is especially challenging, since the computational demands grow quadratically with the number of data streams. Moreover, the correlator is not only computationally intensive, but also very I/O intensive. The LOFAR telescope, for instance, will produce over 100 terabytes per day. The future SKA telescope will even require in the order of exaflops, and petabits/s of I/O. A recent trend is to correlate in software instead of dedicated hardware, to increase flexibility and to reduce development efforts. We evaluate the correlator algorithm on multi-core CPUs and many-core architectures, such as NVIDIA and ATI GPUs, and the Cell B.E. The correlator is a streaming, real-time application, and is much more I/O intensive than applications that are typically implemented on many-core hardware today. We compare with the LOFAR production correlator on an IBM Blue Gene/P supercomputer. We investigate performance, power efficiency, and programmability. We identify several important architectural problems which cause architectures to perform suboptimally. Our findings are applicable to data-intensive applications in general. The processing power and memory bandwidth of current GPUs are highly imbalanced for correlation purposes. While the production correlator on the Blue Gene/P achieves a superb 96% of the theoretical peak performance, this is only 16% on ATI GPUs, and 32% on NVIDIA GPUs. The Cell B.E. processor, in contrast, achieves an excellent 92%. We found that the Cell B.E. and NVIDIA GPUs are the most energy-efficient solutions, they run the correlator at least 4 times more energy efficiently than the Blue Gene/P. The research presented is an important pathfinder for next-generation telescopes.
A software-managed cache is used on the Cell B.E. processor. GPUs typically have a small amount of shared memory that can be used in a similar way @cite_8 . An important difference is that on the Cell B.E. the memory is private to a thread, while on GPUs all threads on a multiprocessor share the memory. The available memory per thread is also much smaller. We applied the technique described in @cite_8 , but found that it did not increase performance for our application.
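The idea of a software-managed cache — the program, not the hardware, decides what lives in fast local storage — can be illustrated with a toy direct-mapped cache in Python. This is only a sketch of the concept; real Cell B.E. code stages data with explicit DMA transfers, and all names here are illustrative:

```python
class SoftwareCache:
    """Toy direct-mapped software-managed cache. On a miss the value
    is fetched from the slow 'backing' store (global memory) and
    installed in the local line, possibly evicting a conflicting tag."""
    def __init__(self, num_lines, backing):
        self.num_lines = num_lines
        self.lines = {}          # slot -> (tag, value)
        self.backing = backing   # slow "global" memory (a dict here)
        self.hits = self.misses = 0

    def read(self, addr):
        slot = addr % self.num_lines
        entry = self.lines.get(slot)
        if entry is not None and entry[0] == addr:
            self.hits += 1
            return entry[1]
        self.misses += 1
        value = self.backing[addr]   # slow path: fetch from backing store
        self.lines[slot] = (addr, value)
        return value

backing = {a: a * 10 for a in range(16)}
cache = SoftwareCache(num_lines=4, backing=backing)
for addr in [0, 1, 0, 1, 4, 0]:   # addresses 0 and 4 map to the same slot
    cache.read(addr)
print(cache.hits, cache.misses)    # 2 4
```

The conflict between addresses 0 and 4 shows why the programmer's placement decisions matter: a different layout would avoid the evictions entirely.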
{ "cite_N": [ "@cite_8" ], "mid": [ "2117228145" ], "abstract": [ "We present a technique for designing memory-bound algorithms with high data reuse on Graphics Processing Units (GPUs) equipped with close-to-ALU software-managed memory. The approach is based on the efficient use of this memory through the implementation of a software-managed cache. We also present an analytical model for performance analysis of such algorithms. We apply this technique to the implementation of the GPU-based solver of the sum-product or marginalize a product of functions (MPF) problem, which arises in a wide variety of real-life applications in artificial intelligence, statistics, image processing, and digital communications. Our motivation to accelerate MPF originated in the context of the analysis of genetic diseases, which in some cases requires years to complete on modern CPUs. Computing MPF is similar to computing the chain matrix product of multi-dimensional matrices, but is more difficult due to a complex data-dependent access pattern, high data reuse, and a low compute-to-memory access ratio. Our GPU-based MPF solver achieves up to 2700-fold speedup on random data and 270-fold on real-life genetic analysis datasets on GeForce 8800GTX GPU from NVIDIA over the optimized CPU version on an Intel 2.4GHz Core 2 with a 4MB L2 cache." ] }
1702.00844
2062539278
A recent development in radio astronomy is to replace traditional dishes with many small antennas. The signals are combined to form one large, virtual telescope. The enormous data streams are cross-correlated to filter out noise. This is especially challenging, since the computational demands grow quadratically with the number of data streams. Moreover, the correlator is not only computationally intensive, but also very I/O intensive. The LOFAR telescope, for instance, will produce over 100 terabytes per day. The future SKA telescope will even require in the order of exaflops, and petabits/s of I/O. A recent trend is to correlate in software instead of dedicated hardware, to increase flexibility and to reduce development efforts. We evaluate the correlator algorithm on multi-core CPUs and many-core architectures, such as NVIDIA and ATI GPUs, and the Cell B.E. The correlator is a streaming, real-time application, and is much more I/O intensive than applications that are typically implemented on many-core hardware today. We compare with the LOFAR production correlator on an IBM Blue Gene/P supercomputer. We investigate performance, power efficiency, and programmability. We identify several important architectural problems which cause architectures to perform suboptimally. Our findings are applicable to data-intensive applications in general. The processing power and memory bandwidth of current GPUs are highly imbalanced for correlation purposes. While the production correlator on the Blue Gene/P achieves a superb 96% of the theoretical peak performance, this is only 16% on ATI GPUs, and 32% on NVIDIA GPUs. The Cell B.E. processor, in contrast, achieves an excellent 92%. We found that the Cell B.E. and NVIDIA GPUs are the most energy-efficient solutions, they run the correlator at least 4 times more energy efficiently than the Blue Gene/P. The research presented is an important pathfinder for next-generation telescopes.
@cite_14 describe a GPU correlator for the Murchison Widefield Array (MWA). They optimize their code by tiling the correlator triangle in one dimension (a technique described by Harris et al. @cite_15 ), whereas tiling in two dimensions, as described in this paper, is much more efficient. For instance, a 2x2 tile requires the same number of operations as a 1x4 tile, but performs fewer memory operations (see table ). For larger tiles, the arithmetic intensity of two-dimensional tiles is even better. Also, the MWA GPU version does not use the texture cache, but shared memory. We found that this was significantly slower. Their claim that their GPU implementation is 68 times faster than their CPU implementation is highly biased, since their CPU implementation is not optimized, is single-threaded, and does not use SSE. As a result, our CPU version is 48 times faster than their CPU version, while our GPU version is 4.2 times faster than their GPU version (even though our data rates are four times as high due to our larger sample sizes). Hence, their GPU implementation is only 1.4 times faster than an optimized CPU implementation, not 68 times.
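The memory-traffic argument above can be made concrete. Per time sample, a w x h tile of station pairs loads w + h station samples but performs w * h complex multiply-accumulates, so a square tile has higher arithmetic intensity than an elongated one with the same operation count. A sketch that abstracts away polarizations and complex arithmetic:

```python
def tile_cost(w, h):
    """Per time sample: a w x h tile of station pairs loads w + h
    station samples and performs w * h complex multiply-accumulates."""
    loads = w + h
    ops = w * h
    return ops, loads

for w, h in [(1, 4), (2, 2), (3, 2), (4, 4)]:
    ops, loads = tile_cost(w, h)
    print(f"{w}x{h}: {ops} ops, {loads} loads, intensity {ops / loads:.2f}")
```

The 2x2 and 1x4 tiles both need 4 operations, but the 2x2 tile needs 4 loads instead of 5, matching the comparison in the text; a 4x4 tile doubles the intensity again.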
{ "cite_N": [ "@cite_15", "@cite_14" ], "mid": [ "2019380198", "2008739821" ], "abstract": [ "The increasing array size of radio astronomy interferometers is causing the associated computation to scale quadratically with the number of array signals. Consequently, efficient usage of alternate processing architectures should be explored in order to meet this computational challenge. Affordable parallel processors have been made available to the general scientific community in the form of the commodity graphics card. This work investigates the use of the Graphics Processing Unit in the parallelisation of the combined conjugate multiply and accumulation stage of a correlator for a radio astronomy array. Using NVIDIA’s Compute Unified Device Architecture, our testing shows processing speeds from one to two orders of magnitude faster than a Central Processing Unit approach.", "Modern graphics processing units (GPUs) are inexpensive commodity hardware that offer Tflop s theoretical computing capacity. GPUs are well suited to many compute-intensive tasks including digital signal processing. We describe the implementation and performance of a GPU-based digital correlator for radio astronomy. The correlator is implemented using the NVIDIA CUDA development environment. We evaluate three design options on two generations of NVIDIA hardware. The different designs utilize the internal registers, shared memory, and multiprocessors in different ways. We find that optimal performance is achieved with the design that minimizes global memory reads on recent generations of hardware. The GPU-based correlator outperforms a single-threaded CPU equivalent by a factor of 60 for a 32-antenna array, and runs on commodity PC hardware. The extra compute capability provided by the GPU maximizes the correlation capability of a PC while retaining the fast development time associated with using standard hardware, networking, and programming languages. 
In this way, a GPU-based correlation system represents a middle ground in design space between high performance, custom-built hardware, and pure CPU-based software correlation. The correlator was deployed at the Murchison Widefield Array 32-antenna prototype system where it ran in real time for extended periods. We briefly describe the data capture, streaming, and correlation system for the prototype array." ] }
1702.00700
2951946228
In this paper we present an approach to extract ordered timelines of events, their participants, locations and times from a set of multilingual and cross-lingual data sources. Based on the assumption that event-related information can be recovered from different documents written in different languages, we extend the Cross-document Event Ordering task presented at SemEval 2015 by specifying two new tasks for, respectively, Multilingual and Cross-lingual Timeline Extraction. We then develop three deterministic algorithms for timeline extraction based on two main ideas. First, we address implicit temporal relations at document level since explicit time-anchors are too scarce to build a wide coverage timeline extraction system. Second, we leverage several multilingual resources to obtain a single, inter-operable, semantic representation of events across documents and across languages. The result is a highly competitive system that strongly outperforms the current state-of-the-art. Nonetheless, further analysis of the results reveals that linking the event mentions with their target entities and time-anchors remains a difficult challenge. The systems, resources and scorers are freely available to facilitate its use and guarantee the reproducibility of results.
Track A received three runs from two participants: the WHUNLP and SPINOZAVU teams. Both approaches were based on applying a pipeline of linguistic processors including Named Entity Recognition, Event and Nominal Coreference Resolution, Named Entity Disambiguation, and temporal processing. The SPINOZAVU system was further developed in @cite_18 .
{ "cite_N": [ "@cite_18" ], "mid": [ "2119998101" ], "abstract": [ "This paper describes the system SPINOZA VU developed for the SemEval 2015 Task 4: Cross Document TimeLines. The system integrates output from the NewsReader Natural Language Processing pipeline and is designed following an entity based model. The poor performance of the submitted runs are mainly a consequence of error propagation. Nevertheless, the error analysis has shown that the interpretation module behind the system performs correctly. An out of competition version of the system has fixed some errors and obtained competitive results. Therefore, we consider the system an important step towards a more complex task such as storyline extraction." ] }
1702.00700
2951946228
In this paper we present an approach to extract ordered timelines of events, their participants, locations and times from a set of multilingual and cross-lingual data sources. Based on the assumption that event-related information can be recovered from different documents written in different languages, we extend the Cross-document Event Ordering task presented at SemEval 2015 by specifying two new tasks for, respectively, Multilingual and Cross-lingual Timeline Extraction. We then develop three deterministic algorithms for timeline extraction based on two main ideas. First, we address implicit temporal relations at document level since explicit time-anchors are too scarce to build a wide coverage timeline extraction system. Second, we leverage several multilingual resources to obtain a single, inter-operable, semantic representation of events across documents and across languages. The result is a highly competitive system that strongly outperforms the current state-of-the-art. Nonetheless, further analysis of the results reveals that linking the event mentions with their target entities and time-anchors remains a difficult challenge. The systems, resources and scorers are freely available to facilitate its use and guarantee the reproducibility of results.
Compared to previous work on Track A of the SemEval 2015 Timeline extraction task, our approach differs in several important ways. Firstly, it addresses the extraction of implicit information to provide better time-anchoring. More specifically, we are inspired by recent work on Implicit Semantic Role Labelling (ISRL) and, especially, on @cite_11 , who adapted ISRL to focus on modifiers, including temporal arguments, instead of core arguments or roles. Given that no training data is provided, we developed a deterministic algorithm for timeline extraction loosely inspired by @cite_4 . Secondly, we extend the monolingual approach to make it multi- and cross-lingual, which constitutes a novel system on its own. Finally, our approach outperforms every previous approach on the task, almost doubling the score of the next best system.
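The core of deterministic timeline extraction — once event mentions have been linked to a target entity and given a time anchor — reduces to ordering those mentions. A toy sketch of that final step, with an entirely hypothetical data layout (this is not the authors' pipeline, which also resolves implicit anchors and cross-lingual event coreference):

```python
def build_timeline(mentions, target):
    """Order event mentions about `target` by time anchor, breaking
    ties by document id and sentence position. Mentions without an
    anchor are dropped, mirroring the observation in the text that
    explicit time-anchors are too scarce on their own."""
    relevant = [m for m in mentions if m["entity"] == target and m["anchor"]]
    return sorted(relevant, key=lambda m: (m["anchor"], m["doc_id"], m["sent"]))

mentions = [
    {"entity": "Airbus", "event": "deliver",  "anchor": "2010-03", "doc_id": 2, "sent": 1},
    {"entity": "Airbus", "event": "announce", "anchor": "2009-11", "doc_id": 1, "sent": 3},
    {"entity": "Boeing", "event": "order",    "anchor": "2009-12", "doc_id": 1, "sent": 5},
    {"entity": "Airbus", "event": "test",     "anchor": None,      "doc_id": 3, "sent": 2},
]
print([m["event"] for m in build_timeline(mentions, "Airbus")])
# ['announce', 'deliver']
```

The dropped unanchored mention ("test") shows why recovering implicit temporal information matters: without it, coverage of the timeline suffers.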
{ "cite_N": [ "@cite_4", "@cite_11" ], "mid": [ "2126864682", "2117829688" ], "abstract": [ "This paper presents a novel deterministic algorithm for implicit Semantic Role Labeling. The system exploits a very simple but relevant discursive property, the argument coherence over different instances of a predicate. The algorithm solves the implicit arguments sequentially, exploiting not only explicit but also the implicit arguments previously solved. In addition, we empirically demonstrate that the algorithm obtains very competitive and robust performances with respect to supervised approaches that require large amounts of costly training data.", "This paper presents a methodology to infer implicit semantic relations from verbargument structures. An annotation effort shows implicit relations boost the amount of meaning explicitly encoded for verbs. Experimental results with automatically obtained parse trees and verb-argument structures demonstrate that inferring implicit relations is a doable task." ] }
1702.00567
2951549320
Data fusion has played an important role in data mining because high-quality data is required in a lot of applications. As on-line data may be out-of-date and errors in the data may propagate with copying and referring between sources, it is hard to achieve satisfying results with merely applying existing data fusion methods to fuse Web data. In this paper, we make use of the crowd to achieve high-quality data fusion results. We design a framework selecting a set of tasks to ask crowds in order to improve the confidence of data. Since data are correlated and crowds may provide incorrect answers, how to select a proper set of tasks to ask the crowd is a very challenging problem. In this paper, we design an approximation solution to address this challenge since we prove that the problem is NP-hard. To further improve the efficiency, we design a pruning strategy and a preprocessing method, which effectively improve the performance of the proposed approximation solution. Furthermore, we find that under certain scenarios, we are not interested in all the facts, but only a specific set of facts. Thus, for these specific scenarios, we also develop another approximation solution which is much faster than the general approximation solution. We verify the solutions with extensive experiments on a real crowdsourcing platform.
On the other hand, related research has studied active learning via crowds. Active learning is a form of supervised machine learning in which a learning algorithm is able to interact with experts (or some other information source) to obtain the desired outputs at new data points. Thus, the goal of active learning is to improve the accuracy of classifiers as much as possible by selecting limited data to label. A widely used technical report on active learning is @cite_31 . In particular, @cite_24 @cite_4 proposed active learning methods specially designed for crowd-sourced databases. Our method follows the general principle of active learning. When there is a need, we sample data from the crowd with an estimation of the information gain.
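The selection principle behind active learning — ask the crowd about the item the current model is least sure of — can be sketched as classic uncertainty sampling. This is an illustration of the general principle only, not the specific estimators of @cite_24 or @cite_4 :

```python
def most_uncertain(probs):
    """Return the index of the item whose predicted positive-class
    probability is closest to 0.5, i.e. where a crowd-provided label
    is expected to carry the most information. Ties go to the first."""
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

# Model's current confidence that each unlabeled item is positive:
probs = [0.95, 0.10, 0.48, 0.80]
print(most_uncertain(probs))   # 2  -> send item 2 to the crowd
```

In a crowdsourcing loop this selection would alternate with retraining: label the chosen item, update the model, and select again, so each crowd question is spent where it reduces uncertainty the most.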
{ "cite_N": [ "@cite_24", "@cite_31", "@cite_4" ], "mid": [ "1546421159", "", "2188373600" ], "abstract": [ "Crowd-sourcing has become a popular means of acquiring labeled data for a wide variety of tasks where humans are more accurate than computers, e.g., labeling images, matching objects, or analyzing sentiment. However, relying solely on the crowd is often impractical even for data sets with thousands of items, due to time and cost constraints of acquiring human input (which cost pennies and minutes per label). In this paper, we propose algorithms for integrating machine learning into crowd-sourced databases, with the goal of allowing crowd-sourcing applications to scale, i.e., to handle larger datasets at lower costs. The key observation is that, in many of the above tasks, humans and machine learning algorithms can be complementary, as humans are often more accurate but slow and expensive, while algorithms are usually less accurate, but faster and cheaper. Based on this observation, we present two new active learning algorithms to combine humans and algorithms together in a crowd-sourced database. Our algorithms are based on the theory of non-parametric bootstrap, which makes our results applicable to a broad class of machine learning models. Our results, on three real-life datasets collected with Amazon's Mechanical Turk, and on 15 well-known UCI data sets, show that our methods on average ask humans to label one to two orders of magnitude fewer items to achieve the same accuracy as a baseline that labels random images, and two to eight times fewer questions than previous active learning schemes.", "", "Recognizing human activities from wearable sensor data is an important problem, particularly for health and eldercare applications. However, collecting sufficient labeled training data is challenging, especially since interpreting IMU traces is difficult for human annotators. 
Recently, crowdsourcing through services such as Amazon's Mechanical Turk has emerged as a promising alternative for annotating such data, with active learning (Cohn, Ghahramani, and Jordan 1996) serving as a natural method for affordably selecting an appropriate subset of instances to label. Unfortunately, since most active learning strategies are greedy methods that select the most uncertain sample, they are very sensitive to annotation errors (which corrupt a significant fraction of crowdsourced labels). This paper proposes methods for robust active learning under these conditions. Specifically, we make three contributions: 1) we obtain better initial labels by asking labelers to solve a related task; 2) we propose a new principled method for selecting instances in active learning that is more robust to annotation noise; 3) we estimate confidence scores for labels acquired from MTurk and ask workers to relabel samples that receive low scores under this metric. The proposed method is shown to significantly outperform existing techniques both under controlled noise conditions and in real active learning scenarios. The resulting method trains classifiers that are close in accuracy to those trained using ground-truth data." ] }
1702.00648
2780529078
Robust Principal Component Analysis (RPCA) aims at recovering a low-rank subspace from grossly corrupted high-dimensional (often visual) data and is a cornerstone in many machine learning and computer vision applications. Even though RPCA has been shown to be very successful in solving many rank minimisation problems, there are still cases where degenerate or suboptimal solutions are obtained. This is likely to be remedied by taking into account domain-dependent prior knowledge. In this paper, we propose two models for the RPCA problem with the aid of side information on the low-rank structure of the data. The versatility of the proposed methods is demonstrated by applying them to four applications, namely background subtraction, facial image denoising, face and facial expression recognition. Experimental results on synthetic and five real world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, largely outperforming six previous approaches.
A generalisation of the above was proposed as Principal Component Pursuit with Features (PCPF) in @cite_24 where further row spaces @math were assumed to be available with @math , and
{ "cite_N": [ "@cite_24" ], "mid": [ "2463344124" ], "abstract": [ "The robust principal component analysis (robust PCA) problem has been considered in many machine learning applications, where the goal is to decompose the data matrix to a low rank part plus a sparse residual. While current approaches are developed by only considering the low rank plus sparse structure, in many applications, side information of row and or column entities may also be given, and it is still unclear to what extent could such information help robust PCA. Thus, in this paper, we study the problem of robust PCA with side information, where both prior structure and features of entities are exploited for recovery. We propose a convex problem to incorporate side information in robust PCA and show that the low rank matrix can be exactly recovered via the proposed method under certain conditions. In particular, our guarantee suggests that a substantial amount of low rank matrices, which cannot be recovered by standard robust PCA, become recoverable by our proposed method. The result theoretically justifies the effectiveness of features in robust PCA. In addition, we conduct synthetic experiments as well as a real application on noisy image classification to show that our method also improves the performance in practice by exploiting side information." ] }
1702.00648
2780529078
Robust Principal Component Analysis (RPCA) aims at recovering a low-rank subspace from grossly corrupted high-dimensional (often visual) data and is a cornerstone in many machine learning and computer vision applications. Even though RPCA has been shown to be very successful in solving many rank minimisation problems, there are still cases where degenerate or suboptimal solutions are obtained. This is likely to be remedied by taking into account domain-dependent prior knowledge. In this paper, we propose two models for the RPCA problem with the aid of side information on the low-rank structure of the data. The versatility of the proposed methods is demonstrated by applying them to four applications, namely background subtraction, facial image denoising, face and facial expression recognition. Experimental results on synthetic and five real world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, largely outperforming six previous approaches.
@cite_18 @cite_8 incorporate structural knowledge into RPCA by adding spectral graph regularisation. Given the graph Laplacian @math of each data similarity graph, Robust PCA on Graphs (RPCAG) and Fast Robust PCA on Graphs (FRPCAG) add an additional tr @math term to the PCP objective for the low-rank component @math . The main drawback of the above-mentioned models is that the side information needs to be accurate and noiseless, which is not trivial in practical scenarios.
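The graph-smoothness term tr(X^T L X) added by these models can be sketched in plain numpy. It equals a weighted sum of squared differences between connected samples, so minimising it pulls rows of X that are neighbours in the similarity graph towards each other. Symbols follow the usual graph-Laplacian convention, not the papers' exact notation:

```python
import numpy as np

def laplacian(W):
    """Unnormalised graph Laplacian L = D - W of a symmetric
    similarity matrix W, where D is the diagonal degree matrix."""
    return np.diag(W.sum(axis=1)) - W

def smoothness(X, W):
    """tr(X^T L X), which equals 0.5 * sum_ij W_ij * ||x_i - x_j||^2."""
    return np.trace(X.T @ laplacian(W) @ X)

# A 3-node path graph: sample 1 is connected to samples 0 and 2.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[0., 0.],
              [1., 0.],
              [1., 2.]])
# The pairwise form gives the same value, making the smoothing effect explicit:
direct = 0.5 * sum(W[i, j] * np.sum((X[i] - X[j]) ** 2)
                   for i in range(3) for j in range(3))
print(smoothness(X, W), direct)   # both equal 5.0 here
```

The identity also shows why noisy side information hurts: a spurious edge W_ij > 0 penalises keeping two genuinely different samples apart.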
{ "cite_N": [ "@cite_18", "@cite_8" ], "mid": [ "2950248682", "2161041254" ], "abstract": [ "Principal Component Analysis (PCA) is the most widely used tool for linear dimensionality reduction and clustering. Still it is highly sensitive to outliers and does not scale well with respect to the number of data samples. Robust PCA solves the first issue with a sparse penalty term. The second issue can be handled with the matrix factorization model, which is however non-convex. Besides, PCA based clustering can also be enhanced by using a graph of data similarity. In this article, we introduce a new model called \"Robust PCA on Graphs\" which incorporates spectral graph regularization into the Robust PCA framework. Our proposed model benefits from 1) the robustness of principal components to occlusions and missing values, 2) enhanced low-rank recovery, 3) improved clustering property due to the graph smoothness assumption on the low-rank matrix, and 4) convexity of the resulting optimization problem. Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with corruptions clearly reveal that our model outperforms 10 other state-of-the-art models in its clustering and low-rank recovery tasks.", "Mining useful clusters from high dimensional data have received significant attention of the computer vision and pattern recognition community in the recent years. Linear and nonlinear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), nonconvexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper, we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. 
We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets with O(n log(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data." ] }
1702.00758
2586811659
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.
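The continuation idea in the abstract above — train with a smooth surrogate of the sign activation and sharpen it over time — can be sketched in numpy with tanh(beta * z), which approaches sign(z) pointwise as beta grows. This illustrates only the activation schedule, not HashNet's network or loss:

```python
import numpy as np

def smoothed_sign(z, beta):
    """tanh(beta * z): a differentiable surrogate that approaches
    sign(z) pointwise as beta -> infinity, making the binarisation
    trainable by gradient descent early on."""
    return np.tanh(beta * z)

z = np.array([-1.5, -0.2, 0.3, 2.0])
# As beta grows, the surrogate's gap to the exact binary codes shrinks:
for beta in [1.0, 5.0, 25.0]:
    gap = np.max(np.abs(smoothed_sign(z, beta) - np.sign(z)))
    print(f"beta={beta:5.1f}  max gap to sign(z) = {gap:.4f}")
```

Training with a gradually increasing beta avoids the ill-posed gradient of sign() (zero almost everywhere) while still converging to exactly binary codes at the end of the schedule.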
Existing learning to hash methods can be organized into two categories: unsupervised hashing and supervised hashing. We refer readers to @cite_25 for a comprehensive survey.
{ "cite_N": [ "@cite_25" ], "mid": [ "1870428314" ], "abstract": [ "Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this paper, we present a survey on one of the main solutions, hashing, which has been widely studied since the pioneering work locality sensitive hashing. We divide the hashing algorithms two main categories: locality sensitive hashing, which designs hash functions without exploring the data distribution and learning to hash, which learns hash functions according the data distribution, and review them from various aspects, including hash function design and distance measure and search scheme in the hash coding space." ] }
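The survey cited above distinguishes locality sensitive hashing, which designs hash functions without exploring the data distribution, from learning to hash, which fits them to the data. As a minimal illustration of the data-independent side, here is a random-hyperplane LSH sketch in Python (dimensions and seed are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(42)
planes = rng.normal(size=(16, 128))   # 16 random hyperplanes in R^128

def lsh_code(x):
    # Random-hyperplane LSH: one bit per hyperplane, chosen independently
    # of the data -- in contrast to learning to hash, which fits the hash
    # functions to the data distribution.
    return (planes @ x > 0).astype(int)

x = rng.normal(size=128)
code = lsh_code(x)
assert code.shape == (16,) and set(code.tolist()) <= {0, 1}
```

Nearby points tend to fall on the same side of most hyperplanes, so their codes agree in most bits.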
1702.00758
2586811659
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.
Unsupervised hashing methods learn hash functions that encode data points to binary codes by training from unlabeled data. Typical learning criteria include reconstruction error minimization @cite_14 @cite_18 @cite_2 and graph learning @cite_42 @cite_0 . While unsupervised methods are more general and can be trained without semantic labels or relevance information, they are subject to the semantic gap dilemma @cite_19 : the high-level semantic description of an object differs from its low-level feature descriptors. Supervised methods can incorporate semantic labels or relevance information to mitigate the semantic gap and significantly improve hashing quality. Typical supervised methods include Binary Reconstruction Embedding (BRE) @cite_27 , Minimal Loss Hashing (MLH) @cite_30 , and Hamming Distance Metric Learning @cite_35 . Supervised Hashing with Kernels (KSH) @cite_23 generates hash codes by minimizing the Hamming distances across similar pairs and maximizing the Hamming distances across dissimilar pairs.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_14", "@cite_42", "@cite_0", "@cite_19", "@cite_27", "@cite_23", "@cite_2" ], "mid": [ "2221852422", "2113307832", "2084363474", "205159212", "", "2251864938", "2130660124", "2164338181", "1992371516", "2124509324" ], "abstract": [ "We propose a method for learning similarity-preserving hash functions that map high-dimensional data onto binary codes. The formulation is based on structured prediction with latent variables and a hinge-like loss function. It is efficient to train for large datasets, scales well to large code lengths, and outperforms state-of-the-art methods.", "Motivated by large-scale multimedia applications we propose to learn mappings from high-dimensional data to binary codes that preserve semantic similarity. Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable to broad families of mappings, and uses a flexible form of triplet ranking loss. We overcome discontinuous optimization of the discrete mappings by minimizing a piecewise-smooth upper bound on empirical loss, inspired by latent structural SVMs. We develop a new loss-augmented inference algorithm that is quadratic in the code length. We show strong retrieval performance on CIFAR-10 and MNIST, with promising classification results using no more than kNN on the binary codes.", "This paper addresses the problem of learning similarity-preserving binary codes for efficient retrieval in large-scale image collections. We propose a simple and efficient alternating minimization scheme for finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube. 
This method, dubbed iterative quantization (ITQ), has connections to multi-class spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). Our experiments show that the resulting binary coding schemes decisively outperform several other state-of-the-art methods.", "A dental model trimmer having an easily replaceable abrasive surfaced member. The abrasive surfaced member is contained within a housing and is releasably coupled onto a back plate assembly which is driven by a drive motor. The housing includes a releasably coupled cover plate providing access to the abrasive surfaced member. An opening formed in the cover plate exposes a portion of the abrasive surface so that a dental model workpiece can be inserted into the opening against the abrasive surface to permit work on the dental model workpiece. A tilting work table beneath the opening supports the workpiece during the operation. A stream of water is directed through the front cover onto the abrasive surface and is redirected against this surface by means of baffles positioned inside the cover plate. The opening includes a beveled boundary and an inwardly directed lip permitting angular manipulation of the workpiece, better visibility of the workpiece and maximum safety.", "", "Hashing is becoming increasingly popular for efficient nearest neighbor search in massive databases. However, learning short codes that yield good search performance is still a challenge. Moreover, in many cases real-world data lives on a low-dimensional manifold, which should be taken into account to capture meaningful nearest neighbors. In this paper, we propose a novel graph-based hashing method which automatically discovers the neighborhood structure inherent in the data to learn appropriate compact codes. 
To make such an approach computationally feasible, we utilize Anchor Graphs to obtain tractable low-rank adjacency matrices. Our formulation allows constant time hashing of a new data point by extrapolating graph Laplacian eigenvectors to eigenfunctions. Finally, we describe a hierarchical threshold learning procedure in which each eigenfunction yields multiple bits, leading to higher search accuracy. Experimental comparison with the other state-of-the-art methods on two large datasets demonstrates the efficacy of the proposed method.", "Presents a review of 200 references in content-based image retrieval. The paper starts with discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap.", "Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. 
In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.", "Recent years have witnessed the growing popularity of hashing in large-scale vision problems. It has been shown that the hashing quality could be boosted by leveraging supervised information into hash function learning. However, the existing supervised methods either lack adequate performance or often incur cumbersome model training. In this paper, we propose a novel kernel-based supervised hashing model which requires a limited amount of supervised information, i.e., similar and dissimilar data pairs, and a feasible training cost in achieving high quality hashing. The idea is to map the data to compact binary codes whose Hamming distances are minimized on similar pairs and simultaneously maximized on dissimilar pairs. Our approach is distinct from prior works by utilizing the equivalence between optimizing the code inner products and the Hamming distances. This enables us to sequentially and efficiently train the hash functions one bit at a time, yielding very short yet discriminative codes. 
We carry out extensive experiments on two image benchmarks with up to one million samples, demonstrating that our approach significantly outperforms the state-of-the-arts in searching both metric distance neighbors and semantically similar neighbors, with accuracy gains ranging from 13 to 46 .", "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors." ] }
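The KSH abstract above notes the equivalence between optimizing code inner products and Hamming distances. For K-bit codes in {-1, +1}, the identity is dist_H(a, b) = (K - <a, b>) / 2, which the following toy sketch checks on made-up codes:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two {-1,+1} code vectors."""
    return int(np.sum(a != b))

def hamming_from_inner(a, b):
    # KSH exploits that for K-bit codes in {-1,+1}:
    #   dist_H(a, b) = (K - <a, b>) / 2
    k = len(a)
    return (k - int(a @ b)) // 2

a = np.array([1, -1, 1, 1, -1, 1])
b = np.array([1, 1, -1, 1, -1, -1])
print(hamming(a, b))               # 3
print(hamming_from_inner(a, b))    # 3
```

This identity is what lets such methods train on inner products, which are smooth in the code variables, instead of on the discrete Hamming distance directly.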
1702.00758
2586811659
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks.
However, existing deep learning to hash methods only learn continuous codes @math and need a binarization post-step to generate binary codes @math . By continuous relaxation, these methods essentially solve an optimization problem @math that deviates significantly from the hashing objective @math , because they cannot keep the codes exactly binary after convergence. Denote by @math the quantization error function for binarizing continuous codes @math into binary codes @math . Prior methods control the quantization error in two ways: @math through continuous optimization @cite_21 @cite_17 ; @math through discrete optimization on @math but continuous optimization on @math (the continuous optimization is used for out-of-sample extension, as discrete optimization cannot be extended to the test data) @cite_41 . However, since @math cannot be minimized to zero, there is a large gap between continuous codes and binary codes. To directly optimize @math , we must adopt the sign function as the activation of deep networks, which enables generation of exactly binary codes but introduces the ill-posed gradient problem. This work is the first effort to learn sign-activated deep networks by the continuation method, which can directly optimize @math for deep learning to hash.
{ "cite_N": [ "@cite_41", "@cite_21", "@cite_17" ], "mid": [ "2464915613", "", "2207125444" ], "abstract": [ "In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While the complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representation on various vision tasks, this paper proposes a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary code for the huge body of image data. Specifically, we devise a CNN architecture that takes pairs of images (similar dissimilar) as training inputs and encourages the output of each image to approximate discrete values (e.g. +1 -1). To this end, a loss function is elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs, and simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. For image retrieval, new-coming query images can be easily encoded by propagating through the network and then quantizing the network outputs to binary codes representation. Extensive experiments on two large scale datasets CIFAR-10 and NUS-WIDE show the promising performance of our method compared with the state-of-the-arts.", "", "Recent years have witnessed wide application of hashing for large-scale image retrieval. However, most existing hashing methods are based on handcrafted features which might not be optimally compatible with the hashing procedure. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash-code learning with deep neural networks, which have shown better performance than traditional hashing methods with hand-crafted features. 
Most of these deep hashing methods are supervised whose supervised information is given with triplet labels. For another common application scenario with pairwise labels, there have not existed methods for simultaneous feature learning and hash-code learning. In this paper, we propose a novel deep hashing method, called deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning and hashcode learning for applications with pairwise labels. Experiments on real datasets show that our DPSH method can outperform other methods to achieve the state-of-the-art performance in image retrieval applications." ] }
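The continuation idea in the HashNet record above — start from a smoothed activation and sharpen it toward sign during training — can be sketched numerically. Assuming tanh(beta * z) as the smoothed surrogate (a common choice; the exact schedule is illustrative), the quantization error of the continuous codes shrinks as beta grows:

```python
import numpy as np

def smoothed_codes(z, beta):
    # Smoothed activation tanh(beta * z); approaches sign(z) as beta grows.
    return np.tanh(beta * z)

def quantization_error(h):
    # L1 gap between continuous codes h and their binarization sign(h).
    return float(np.sum(np.abs(np.sign(h) - h)))

z = np.array([0.8, -0.3, 1.5, -0.05, 0.4])   # made-up pre-activations
errs = [quantization_error(smoothed_codes(z, beta)) for beta in (1, 10, 100)]
assert errs[0] > errs[1] > errs[2]   # error shrinks as beta increases
```

In the limit the network becomes the original sign-activated one, so the binarization post-step and its quantization gap disappear.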
1702.00748
2951016929
Recent progress in applying machine learning for jet physics has been built upon an analogy between calorimeters and images. In this work, we present a novel class of recursive neural networks built instead upon an analogy between QCD and natural languages. In the analogy, four-momenta are like words and the clustering history of sequential recombination jet algorithms is like the parsing of a sentence. Our approach works directly with the four-momenta of a variable-length set of particles, and the jet-based tree structure varies on an event-by-event basis. Our experiments highlight the flexibility of our method for building task-specific jet embeddings and show that recursive architectures are significantly more accurate and data efficient than previous image-based networks. We extend the analogy from individual jets (sentences) to full events (paragraphs), and show for the first time an event-level classifier operating on all the stable particles produced in an LHC event.
Neural networks in particle physics have a long history. They have been used in the past for many tasks, including early work on quark-gluon discrimination @cite_24 @cite_7 , particle identification @cite_28 , Higgs tagging @cite_25 , and track identification @cite_35 . In most of these, neural networks appear as shallow multi-layer perceptrons whose input features were designed by experts to incorporate domain knowledge. More recently, the success of deep convolutional networks has triggered a new body of work in jet physics, shifting the paradigm from engineering input features to learning them automatically from raw data, e.g., by treating jets as images @cite_13 @cite_39 @cite_6 @cite_3 @cite_1 @cite_12 @cite_34 @cite_10 . Our work builds instead upon an analogy between QCD and natural languages, hence complementing the set of algorithms for jet physics with techniques initially developed for natural language processing @cite_0 @cite_20 @cite_38 @cite_18 @cite_8 @cite_32 . In addition, our approach does not delegate the full modeling task to the machine: it allows us to incorporate domain knowledge in terms of the network architecture, specifically by structuring the recursion stack for the embedding directly from QCD-inspired jet algorithms (see Sec. )
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_10", "@cite_20", "@cite_38", "@cite_18", "@cite_8", "@cite_39", "@cite_7", "@cite_28", "@cite_32", "@cite_6", "@cite_34", "@cite_25", "@cite_12", "@cite_1", "@cite_24", "@cite_0", "@cite_13" ], "mid": [ "2500716537", "2779710223", "2586557507", "1423339008", "71795751", "2172140247", "2950635152", "", "", "", "2250877157", "2787586057", "2563484019", "", "2836521289", "", "2057126609", "2104518905", "2047792789" ], "abstract": [ "", "We introduce the energy flow polynomials: a complete set of jet substructure observables which form a discrete linear basis for all infrared- and collinear-safe observables. Energy flow polynomials are multiparticle energy correlators with specific angular structures that are a direct consequence of infrared and collinear safety. We establish a powerful graph-theoretic representation of the energy flow polynomials which allows us to design efficient algorithms for their computation. Many common jet observables are exact linear combinations of energy flow polynomials, and we demonstrate the linear spanning nature of the energy flow basis by performing regression for several common jet observables. Using linear classification with energy flow polynomials, we achieve excellent performance on three representative jet tagging problems: quark gluon discrimination, boosted W tagging, and boosted top tagging. The energy flow basis provides a systematic framework for complete investigations of jet substructure using linear methods.", "Machine learning based on convolutional neural networks can be used to study jet images from the LHC. Top tagging in fat jets offers a well-defined framework to establish our DeepTop approach and compare its performance to QCD-based top taggers. We first optimize a network architecture to identify top quarks in Monte Carlo simulations of the Standard Model production channel. Using standard fat jets we then compare its performance to a multivariate QCD-based top tagger. 
We find that both approaches lead to comparable performance, establishing convolutional networks as a promising new approach for multivariate hypothesis-based top tagging.", "Recursive structure is commonly found in the inputs of different modalities such as natural scene images or natural language sentences. Discovering this recursive structure helps us to not only identify the units that an image or sentence contains but also how they interact to form a whole. We introduce a max-margin structure prediction architecture based on recursive neural networks that can successfully recover such structure both in complex scene images as well as sentences. The same algorithm can be used both to provide a competitive syntactic parser for natural language sentences from the Penn Treebank and to outperform alternative approaches for semantic scene segmentation, annotation and classification. For segmentation and annotation our algorithm obtains a new level of state-of-the-art performance on the Stanford background dataset (78.1 ). The features from the image parse tree outperform Gist descriptors for scene classification by 4 .", "We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any pre-defined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the experience project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. 
Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines.", "Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder--Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.", "In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "", "", "", "Recently, neural network based sentence modeling methods have achieved great progress. 
Among these methods, the recursive neural networks (RecNNs) can effectively model the combination of the words in sentence. However, RecNNs need a given external topological structure, like syntactic tree. In this paper, we propose a gated recursive neural network (GRNN) to model sentences, which employs a full binary tree (FBT) structure to control the combinations in recursive structure. By introducing two kinds of gates, our model can better model the complicated combinations of features. Experiments on three text classification datasets show the effectiveness of our model.", "We introduce a new and highly efficient tagger for hadronically decaying top quarks, based on a deep neural network working with Lorentz vectors and the Minkowski metric. With its novel machine learning setup and architecture it allows us to identify boosted top quarks not only from calorimeter towers, but also including tracking information. We show how the performance of our tagger compares with QCD-inspired and image-recognition approaches and find that it significantly increases the performance for strongly boosted top quarks.", "Artificial intelligence offers the potential to automate challenging data-processing tasks in collider physics. To establish its prospects, we explore to what extent deep learning with convolutional neural networks can discriminate quark and gluon jets better than observables designed by physicists. Our approach builds upon the paradigm that a jet can be treated as an image, with intensity given by the local calorimeter deposits. We supplement this construction by adding color to the images, with red, green and blue intensities given by the transverse momentum in charged particles, transverse momentum in neutral particles, and pixel-level charged particle counts. Overall, the deep networks match or outperform traditional jet variables. 
We also find that, while various simulations produce different quark and gluon jets, the neural networks are surprisingly insensitive to these differences, similar to traditional observables. This suggests that the networks can extract robust physical information from imperfect simulations.", "", "Jets from boosted heavy particles have a typical angular scale which can be used to distinguish it from QCD jets. We introduce a machine learning strategy for jet substructure analysis using a spectral function on the angular scale. The angular spectrum allows us to scan energy deposits over the angle between a pair of particles in a highly visual way. We set up an artificial neural network (ANN) to find out characteristic shapes of the spectra of the jets from heavy particle decays. By taking the discrimination of Higgs jets from QCD jets as an example, we show that the ANN based on the angular spectrum has similar performance to existing taggers. In addition, some improvement is seen in the case that additional extra radiations occur. Notably, the new algorithm automatically combines the information of the multi-point correlations in the jet.", "", "Using a neural-network classifier we are able to separate gluon from quark jets originating from Monte Carlo--generated ital e sup + ital e sup minus events with 85 --90 accuracy.", "While neural networks are very successfully applied to the processing of fixed-length vectors and variable-length sequences, the current state of the art does not allow the efficient processing of structured objects of arbitrary shape (like logical terms, trees or graphs). We present a connectionist architecture together with a novel supervised learning scheme which is capable of solving inductive inference tasks on complex symbolic structures of arbitrary size. The most general structures that can be handled are labeled directed acyclic graphs. 
The major difference of our approach compared to others is that the structure-representations are exclusively tuned for the intended inference task. Our method is applied to tasks consisting in the classification of logical terms. These range from the detection of a certain subterm to the satisfaction of a specific unification pattern. Compared to previously known approaches we obtained superior results in that domain.", "We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon- initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets." ] }
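The recursive-embedding idea in the jet record above — four-momenta as leaves, the clustering history of a sequential recombination algorithm as the tree — can be sketched as follows. Everything here is an assumption for illustration (embedding size, untrained random weights, toy four-momenta); the point is only the recursion over the tree:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                          # embedding dimension (illustrative)
W = rng.normal(scale=0.1, size=(D, 2 * D))     # combiner weights (untrained)
U = rng.normal(scale=0.1, size=(D, 4))         # leaf projection of 4-momenta

def embed(node):
    """Recursively embed a binary clustering tree.

    A leaf is a 4-momentum array of shape (4,); an internal node is a
    (left, right) pair, mirroring the binary history of a sequential
    recombination jet algorithm.
    """
    if isinstance(node, tuple):
        left, right = (embed(c) for c in node)
        return np.tanh(W @ np.concatenate([left, right]))
    return np.tanh(U @ node)

# Toy jet: two particles clustered into one pseudo-jet.
p1 = np.array([10.0, 1.0, 0.5, 9.9])   # (E, px, py, pz), made up
p2 = np.array([5.0, -0.5, 0.2, 4.9])
h = embed((p1, p2))
assert h.shape == (D,)
```

Because the tree varies event by event, the same weights are reused at every merge, exactly as a recursive network parses variable-length sentences.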
1702.00340
2953091513
Locating the demanded content is one of the major challenges in Information-Centric Networking (ICN). This process is known as content discovery. To facilitate content discovery, in this paper we focus on Named Data Networking (NDN) and propose a novel routing scheme for content discovery, called Bloom Filter-based Routing (BFR), which is fully distributed, content oriented, and topology agnostic at the intra-domain level. In BFR, origin servers advertise their content objects using Bloom filters. We compare the performance of the proposed BFR with flooding and shortest path content discovery approaches. BFR outperforms its counterparts in terms of the average round-trip delay, while it is shown to be very robust to false positive reports from Bloom filters. Also, BFR is much more robust than shortest path routing to topology changes. BFR strongly outperforms flooding and performs almost equal with shortest path routing with respect to the normalized communication costs for data retrieval and total communication overhead for forwarding Interests. All the three approaches achieve similar mean hit distance. The signalling overhead for content advertisement in BFR is much lower than the signalling overhead for calculating shortest paths in the shortest path approach. Finally, BFR requires small storage overhead for maintaining content advertisements.
In @cite_0 , SCAN is proposed as a routing scheme for content-aware networks. The main disadvantage of SCAN is that it uses IP routing as a fall-back solution, meaning that cache routers perform both content and IP routing, i.e., they maintain content routing tables as well as IP routing tables. Therefore, SCAN is not a fully content-oriented routing scheme.
{ "cite_N": [ "@cite_0" ], "mid": [ "2145269148" ], "abstract": [ "Since Internet routers are not aware of the contents being forwarded, the same content file is often delivered multiple times inefficiently. Similarly, users cannot exploit a nearby copy of the content of interest unless the content file is serviced by costly content delivery networks. Prior studies on the content-aware routing for efficient content delivery suffer from the scalability problem due to a large number of contents. We propose a scalable content routing, dubbed SCAN, which can exploit nearby and multiple content copies for the efficient delivery. SCAN exchanges the information of the cached contents using Bloom filter. Compared with IP routing, SCAN can offer reduced delivery latency, reduced traffic volume, and load balancing among links." ] }
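The Bloom-filter advertisement mechanism that both BFR and SCAN rely on can be sketched as follows. This is a minimal illustration, not either system's actual design: the class, filter size, hash count, and content names are all hypothetical.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-slot array."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k positions by salting a cryptographic hash with an index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

# An origin server advertises its content names in one compact filter.
adv = BloomFilter()
for name in ["/videos/a.mp4", "/docs/b.pdf"]:
    adv.add(name)

assert "/videos/a.mp4" in adv  # an advertised name is always found
# Unadvertised names are *usually* reported absent, but false positives
# are possible -- which is why BFR must be robust to false positive reports.
```

The filter is far smaller than an explicit name list, which is the compactness argument made for content advertisement in BFR.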
1702.00340
2953091513
Locating the demanded content is one of the major challenges in Information-Centric Networking (ICN). This process is known as content discovery. To facilitate content discovery, in this paper we focus on Named Data Networking (NDN) and propose a novel routing scheme for content discovery, called Bloom Filter-based Routing (BFR), which is fully distributed, content oriented, and topology agnostic at the intra-domain level. In BFR, origin servers advertise their content objects using Bloom filters. We compare the performance of the proposed BFR with flooding and shortest path content discovery approaches. BFR outperforms its counterparts in terms of the average round-trip delay, while it is shown to be very robust to false positive reports from Bloom filters. Also, BFR is much more robust than shortest path routing to topology changes. BFR strongly outperforms flooding and performs almost equal with shortest path routing with respect to the normalized communication costs for data retrieval and total communication overhead for forwarding Interests. All the three approaches achieve similar mean hit distance. The signalling overhead for content advertisement in BFR is much lower than the signalling overhead for calculating shortest paths in the shortest path approach. Finally, BFR requires small storage overhead for maintaining content advertisements.
NLSR @cite_4 is considered one of the most prominent routing-based solutions for NDN. It is a link-state routing protocol that requires frequent pulling of routing updates. NLSR routing updates carry information about both the topology and content name prefixes. In NLSR, nodes run Dijkstra's algorithm, using full information about the topology and the content prefixes present in the network, to find the shortest path through each face for any incoming Interest. Compared to NLSR, our scheme does not require any knowledge of the topology, while it permits origin servers to propagate compact content advertisements using BFs.
{ "cite_N": [ "@cite_4" ], "mid": [ "2136758000" ], "abstract": [ "This paper presents the design of the Named-data Link State Routing protocol (NLSR), a routing protocol for Named Data Networking (NDN). Since NDN uses names to identify and retrieve data, NLSR propagates reachability to name prefixes instead of IP prefixes. Moreover, NLSR differs from IP-based link-state routing protocols in two fundamental ways. First, NLSR uses Interest Data packets to disseminate routing updates, directly benefiting from NDN's data authenticity. Second, NLSR produces a list of ranked forwarding options for each name prefix to facilitate NDN's adaptive forwarding strategies. In this paper we discuss NLSR's main design choices on (1) a hierarchical naming scheme for routers, keys, and routing updates, (2) a hierarchical trust model for routing within a single administrative domain, (3) a hop-by-hop synchronization protocol to replace the traditional network-wide flooding for routing update dissemination, and (4) a simple way to rank multiple forwarding options. Compared with IP-based link state routing, NLSR offers more efficient update dissemination, built-in update authentication, and native support of multipath forwarding." ] }
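The per-node shortest-path computation that NLSR performs is standard Dijkstra over the link-state topology. A self-contained sketch, with a hypothetical four-router topology (NLSR would additionally rank next hops per name prefix, which is omitted here):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted adjacency dict."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical topology: router names and link costs are illustrative only.
topo = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
assert dijkstra(topo, "A")["D"] == 3  # A -> B -> C -> D beats the direct links
```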
1702.00361
2952748296
The paper proposes a surprisingly simple characterization of a large class of models of distributed computing, via an agreement function: for each set of processes, the function determines the best level of set consensus these processes can reach. We show that the task computability of a large class of fair adversaries that includes, in particular superset-closed and symmetric one, is precisely captured by agreement functions.
Adversarial models were introduced in @cite_0. With respect to colorless tasks, Herlihy and Rajsbaum @cite_7 characterized a class of adversaries @cite_24 (closed under the superset operation) via their minimal core sizes. Still with respect to colorless tasks, Gafni and Kuznetsov @cite_10 derived a characterization of a general adversary using its function @math . A side result of the present paper is an extension of the characterization in @cite_10 to arbitrary (not necessarily colorless) tasks.
{ "cite_N": [ "@cite_0", "@cite_24", "@cite_10", "@cite_7" ], "mid": [ "2618013937", "2405839733", "", "1975658854" ], "abstract": [ "At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about all the other 22n−1−(n+1) adversaries that are not uniform in this sense and might crash certain combination of processes and not others? This paper presents a precise way to classify all adversaries. We introduce the notion of disagreement power: the biggest integer k for which the adversary can prevent processes from agreeing on k values. We show how to compute the disagreement power of an adversary and derive n equivalence classes of adversaries.", "Traditionally, models of fault-tolerant distributed computing assume that failures are “uniform”: processes are equally probable to fail and a failure of one process does not affect reliability of the others. In real systems, however, processes may not be equally reliable. Moreover, failures may be correlated because of software or hardware features shared by subsets of processes. In this paper, we survey recent results addressing the question of what can and what cannot be computed in systems with non-identical and non-independent failures", "", "If one model of computation can simulate another, then the existence (or non-existence) of an algorithm in the simulated model reduces to a related question about the simulating model. The BG-simulation algorithm uses this approach to prove that k-set agreement cannot be solved when t processes can crash, 1≤t≤k, by reduction to the wait-free case, where it is known that n+1 processes cannot solve n-set agreement, and similarly for any other colorless task. 
We give a definition, expressed in the language of combinatorial topology, for what it means for one model of distributed computation to simulate another with respect to the ability to solve colorless tasks. This definition is not linked to specific models or specific protocols. We show how to exploit elementary topological arguments to show when a simulation exists, without the need for an explicit construction. We use this approach to generalize the BG-simulation and to unify a number of simulation relations linking various models, some previously known, some not." ] }
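The minimal core size used in the characterization above can be computed by brute force. This sketch assumes the standard definition of a core, namely a set of processes of which at least one survives in every allowed failure pattern, i.e., a set not entirely contained in any faulty set of the adversary; the adversary shown is a hypothetical example.

```python
from itertools import combinations

def is_core(candidate, faulty_sets):
    """A core is not entirely contained in any faulty set: at least one
    of its members is correct in every allowed failure pattern."""
    return all(not candidate <= f for f in faulty_sets)

def min_core_size(processes, faulty_sets):
    """Smallest core size, by exhaustive search over subsets (exponential,
    fine for illustration on a handful of processes)."""
    for k in range(1, len(processes) + 1):
        for c in combinations(processes, k):
            if is_core(set(c), faulty_sets):
                return k
    return None  # every subset can fail entirely

# Hypothetical adversary over {1, 2, 3}: at most one process may crash.
procs = {1, 2, 3}
faulty = [set(), {1}, {2}, {3}]
assert min_core_size(procs, faulty) == 2  # no pair is inside any faulty set
```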
1702.00361
2952748296
The paper proposes a surprisingly simple characterization of a large class of models of distributed computing, via an agreement function: for each set of processes, the function determines the best level of set consensus these processes can reach. We show that the task computability of a large class of fair adversaries that includes, in particular superset-closed and symmetric one, is precisely captured by agreement functions.
Taubenfeld introduced in @cite_12 the notion of symmetric progress conditions, which are equivalent to our symmetric adversaries.
{ "cite_N": [ "@cite_12" ], "mid": [ "1792996693" ], "abstract": [ "Understanding the effect of different progress conditions on the computability of distributed systems is an important and exciting research direction. For a system with n processes, we define exponentially many new progress conditions and explore their properties and strength. We cover all the known, symmetric and asymmetric, progress conditions and many new interesting conditions. Together with our technical results, the new definitions provide a deeper understanding of synchronization and concurrency." ] }
1702.00198
2529227260
Curated web archive collections contain focused digital contents which are collected by archiving organizations to provide a representative sample covering specific topics and events to preserve them for future exploration and analysis. In this paper, we discuss how to best support collaborative construction and exploration of these collections through the ArchiveWeb system. ArchiveWeb has been developed using an iterative evaluation-driven design-based research approach, with considerable user feedback at all stages. This paper describes the functionalities of our current prototype for searching, constructing, exploring and discussing web archive collections, as well as feedback on this prototype from seven archiving organizations, and our plans for improving the next release of the system.
Tools for supporting search and exploration in web archives are still limited @cite_0 . The most desired search functionalities in web archives are full-text search with good ranking, followed by URL search @cite_13 . A recent survey showed that 89% of web archives provide URL search access and 79% full-text search @cite_4 . Some existing projects that provide limited support for web archive research are discussed below.
{ "cite_N": [ "@cite_0", "@cite_13", "@cite_4" ], "mid": [ "300178524", "", "2097411811" ], "abstract": [ "The World Wide Web is becoming a source of information for researchers, who are more aware of the possibilities for collections of Internet content as resources. Some have begun creating archives of web content for social science and humanities research. However, there is a growing gulf between policies shared between global and national institutions creating web archives and the practices of researchers making use of the archives. Each set of stakeholders finds the others’ web archiving contributions less applicable to their own field. Institutions find the contributions of researchers to be too narrow to meet the needs of the institution’s audience, and researchers find the contributions of institutions to be too broad to meet the needs of their research methods. Resources are extended to advance both institutional and researcher tools, but the gulf between the two is persistent. Institutions generally produce web archives that are broad in scope but with limited access and enrichment tools. The design of common access interfaces, such as the Internet Archive’s Wayback Machine, limit access points to archives to only URL and date. This narrow access limits the ways in which web archives can be valuable for exploring research questions in the humanities and social sciences. Individual scholars, in catering to their own disciplinary and methodological needs, produce web archives that are narrow in scope, and whose access and enrichment tools are personalized to work within the boundaries of the project for which the web archive was built. There is no way to explore a subset of an archive by topic, event, or idea. The current search paradigm in web archiving access tools is built primarily on retrieval, not discovery. 
We suggest that there is a need for extensible tools to enhance access to and enrichment of web archives to make them more readily reusable and so, more valuable for both institutions and researchers, and that annotation activities can serve as one potential guide for development of such tools to bridge the divide.", "", "Web archiving has been gaining interest and recognized importance for modern societies around the world. However, for web archivists it is frequently difficult to demonstrate this fact, for instance, to funders. This study provides an updated and global overview of web archiving. The obtained results showed that the number of web archiving initiatives significantly grew after 2003 and they are concentrated on developed countries. We statistically analyzed metrics, such as, the volume of archived data, archive file formats or number of people engaged. Web archives all together must process more data than any web search engine. Considering the complexity and large amounts of data involved in web archiving, the results showed that the assigned resources are scarce. A Wikipedia page was created to complement the presented work and be collaboratively kept up-to-date by the community." ] }
1702.00198
2529227260
Curated web archive collections contain focused digital contents which are collected by archiving organizations to provide a representative sample covering specific topics and events to preserve them for future exploration and analysis. In this paper, we discuss how to best support collaborative construction and exploration of these collections through the ArchiveWeb system. ArchiveWeb has been developed using an iterative evaluation-driven design-based research approach, with considerable user feedback at all stages. This paper describes the functionalities of our current prototype for searching, constructing, exploring and discussing web archive collections, as well as feedback on this prototype from seven archiving organizations, and our plans for improving the next release of the system.
All of the above tools and interfaces support the exploration and search of web archives for individual users and researchers. In addition, ArchiveWeb aims at supporting the collaborative exploration of web archives. Previous research on helping users keep track of their resources includes tools that provide better search and organizational facilities based on metadata such as time @cite_12 or tagging @cite_1 . Our system provides similar organizational functionalities, refined through several learning communities and our previous work on LearnWeb @cite_9 , thus benefiting from several years of development and user feedback in that context. ArchiveWeb builds on the LearnWeb platform, which already supports collaborative sensemaking @cite_5 @cite_8 by allowing users to share and collaboratively work on resources retrieved from various web sources @cite_11 @cite_10 @cite_6 @cite_15 .
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_1", "@cite_6", "@cite_5", "@cite_15", "@cite_10", "@cite_12", "@cite_11" ], "mid": [ "2135880665", "2068538256", "2034476817", "2022555286", "2163475640", "2102958620", "1559288273", "2112175905", "2169942798" ], "abstract": [ "Making sense of a body of data is a common activity in any kind of analysis. Sensemaking is the process of searching for a representation and encoding data in that representation to answer task-specific questions. Different operations during sensemaking require different cognitive and external resources. Representations are chosen and changed to reduce the cost of operations in an information processing task. The power of these representational shifts is generally under-appreciated as is the relation between sensemaking and information retrieval. We analyze sensemaking tasks and develop a model of the cost structure of sensemaking. We discuss implications for the integrated design of user interfaces, representational tools, and information retrieval systems.", "This paper discusses the development of LearnWeb2.0, a search and collaboration environment for supporting searching, organizing, and sharing distributed resources, and our pedagogical setup based on the multiliteracies approach. In LearnWeb2.0, collaborative and active learning is supported through project-focused search and aggregation, with discussion and comments directly linked to the resources. We are developing the LearnWeb2.0 platform through an iterative evaluation-driven design-based research approach - this paper describes the first iteration and part of the second one. In the first iteration, we developed LearnWeb2.0 and evaluated it in two Content and Language Integrated Learning (CLIL) courses We followed the multiliteracies approach, using authentic content from a variety of sources and contexts to provide important input for CLIL. 
We present evaluation design and results for both courses, and discuss how the differences in both scenarios influenced student performance and satisfaction. In the second iteration, we improved LearnWeb2.0 based on these experiences - we describe improvements as well as problems addressed. Finally, we sketch the evaluation planned for the second cycle, and close with a reflection of our experiences with the design-based research approach for developing a collaborative learning environment, and on multiliteracies as a suitable approach for CLIL.", "Systems for fast search of personal information are rapidly becoming ubiquitous. Such systems promise to dramatically improve personal information management, yet most are modeled on Web search in which users know very little about the content that they are searching. We describe the design and deployment of a system called Phlat that optimizes search for personal information with an intuitive interface that merges search and browsing through a variety of associative and contextual cues. In addition, Phlat supports a unified tagging (labeling) scheme for organizing personal content across storage systems (files, email, etc.). The system has been deployed to hundreds of employees within our organization. We report on both quantitative and qualitative aspects of system use. Phlat is available as a free download at http: research.microsoft.com adapt phlat.", "Today's Web browsers provide limited support for rich information-seeking and information-sharing scenarios. A survey we conducted of 204 knowledge workers at a large technology company has revealed that a large proportion of users engage in searches that include collaborative activities. We present the results of the survey, and then review the implications of these findings for designing new Web search interfaces that provide tools for sharing.", "Search engine researchers typically depict search as the solitary activity of an individual searcher. 
In contrast, results from our critical-incident survey of 150 users on Amazon's Mechanical Turk service suggest that social interactions play an important role throughout the search process. Our main contribution is that we have integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process both explicitly and implicitly shared information may be valuable to individual searchers.", "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.", "In the last few years, social tagging systems have become a standard application of the World Wide Web. These systems can be considered as shared external knowledge structures of users on the Internet. In this paper, we describe how social tagging systems relate to individual semantic memory structures and how social tags affect individual processes of learning and information foraging. Furthermore, we present an experimental online study aimed at evaluating this interaction of external and internal structures of spreading activation. 
We report on effects of social tagging systems as visualized collective knowledge representations on individual processes of information search and learning.", "Most information retrieval technologies are designed to facilitate information discovery. However, much knowledge work involves finding and re-using previously seen information. We describe the design and evaluation of a system, called Stuff I've Seen (SIS), that facilitates information re-use. This is accomplished in two ways. First, the system provides a unified index of information that a person has seen, whether it was seen as email, web page, document, appointment, etc. Second, because the information has been seen before, rich contextual cues can be used in the search interface. The system has been used internally by more than 230 employees. We report on both qualitative and quantitative aspects of system use. Initial findings show that time and people are important retrieval cues. Users find information more easily using SIS, and use other search tools less frequently after installation.", "Web search is often viewed as a solitary task; however, there are many situations in which groups of people gather around a single computer to jointly search for information online. We present the findings of interviews with teachers, librarians, and developing world researchers that provide details about users' collaborative search habits in shared-computer settings, revealing several limitations of this practice. We then introduce CoSearch, a system we developed to improve the experience of co-located collaborative Web search by leveraging readily available devices such as mobile phones and extra mice. 
Finally, we present an evaluation comparing CoSearch to status quo collaboration approaches, and show that CoSearch enabled distributed control and division of labor, thus reducing the frustrations associated with shared-computer searches, while still preserving the positive aspects of communication and collaboration associated with joint computer use." ] }
1701.09177
2585379441
We propose an effective method to solve the event sequence clustering problems based on a novel Dirichlet mixture model of a special but significant type of point processes --- Hawkes process. In this model, each event sequence belonging to a cluster is generated via the same Hawkes process with specific parameters, and different clusters correspond to different Hawkes processes. The prior distribution of the Hawkes processes is controlled via a Dirichlet distribution. We learn the model via a maximum likelihood estimator (MLE) and propose an effective variational Bayesian inference algorithm. We specifically analyze the resulting EM-type algorithm in the context of inner-outer iterations and discuss several inner iteration allocation strategies. The identifiability of our model, the convergence of our learning method, and its sample complexity are analyzed in both theoretical and empirical ways, which demonstrate the superiority of our method to other competitors. The proposed method learns the number of clusters automatically and is robust to model misspecification. Experiments on both synthetic and real-world data show that our method can learn diverse triggering patterns hidden in asynchronous event sequences and achieve encouraging performance on clustering purity and consistency.
A temporal point process @cite_8 is a random process whose realization consists of an event sequence @math with time stamps @math and event types @math . It can be equivalently represented as @math counting processes @math , where @math is the number of type- @math events occurring at or before time @math . A way to characterize point processes is via the intensity function @math , where @math collects historical events of all types before time @math . It is the expected instantaneous rate of happening type- @math events given the history, which captures the phenomena of interests, i.e., self-triggering @cite_37 or self-correcting @cite_50 .
{ "cite_N": [ "@cite_37", "@cite_50", "@cite_8" ], "mid": [ "2069849731", "2267033539", "" ], "abstract": [ "SUMMARY In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a,b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "Producing attractive trailers for videos needs human expertise and creativity, and hence is challenging and costly. Different from video summarization that focuses on capturing storylines or important scenes, trailer generation aims at producing trailers that are attractive so that viewers will be eager to watch the original video. In this work, we study the problem of automatic trailer generation, in which an attractive trailer is produced given a video and a piece of music. We propose a surrogate measure of video attractiveness named fixation variance, and learn a novel self-correcting point process-based attractiveness model that can effectively describe the dynamics of attractiveness of a video. Furthermore, based on the attractiveness model learned from existing training trailers, we propose an efficient graph-based trailer generation algorithm to produce a max-attractiveness trailer. Experiments demonstrate that our approach outperforms the state-of-the-art trailer generators in terms of both quality and efficiency.", "" ] }
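The self-triggering behavior described above can be made concrete with a univariate Hawkes process under an exponential kernel, λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)), simulated by Ogata's thinning algorithm. The parameter values and horizon below are arbitrary choices for illustration.

```python
import math
import random

def hawkes_intensity(t, history, mu, alpha, beta):
    """λ(t) = μ + Σ_{t_i < t} α·exp(-β (t - t_i)): the self-exciting rate."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Ogata's thinning for a univariate exponential-kernel Hawkes process."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while t < horizon:
        # The strict inequality in hawkes_intensity excludes an event exactly
        # at t, so add alpha to keep a valid upper bound just after t; the
        # intensity only decays until the next event.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha
        t += rng.expovariate(lam_bar)  # candidate inter-arrival time
        if t >= horizon:
            break
        if rng.random() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)  # accept the candidate as an event
    return events

# alpha/beta < 1 keeps the process stationary (subcritical branching ratio).
seq = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=50.0)
assert all(a < b for a, b in zip(seq, seq[1:]))  # time stamps are increasing
```

Each accepted event raises the intensity by α and the bump decays at rate β, which is exactly the clustering-in-time pattern the mixture model in the paper above aims to separate across sequences.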
1701.09177
2585379441
We propose an effective method to solve the event sequence clustering problems based on a novel Dirichlet mixture model of a special but significant type of point processes --- Hawkes process. In this model, each event sequence belonging to a cluster is generated via the same Hawkes process with specific parameters, and different clusters correspond to different Hawkes processes. The prior distribution of the Hawkes processes is controlled via a Dirichlet distribution. We learn the model via a maximum likelihood estimator (MLE) and propose an effective variational Bayesian inference algorithm. We specifically analyze the resulting EM-type algorithm in the context of inner-outer iterations and discuss several inner iteration allocation strategies. The identifiability of our model, the convergence of our learning method, and its sample complexity are analyzed in both theoretical and empirical ways, which demonstrate the superiority of our method to other competitors. The proposed method learns the number of clusters automatically and is robust to model misspecification. Experiments on both synthetic and real-world data show that our method can learn diverse triggering patterns hidden in asynchronous event sequences and achieve encouraging performance on clustering purity and consistency.
Traditional methods mainly focus on clustering synchronous (or aggregated) time series with discrete time-lagged variables @cite_21 @cite_41 @cite_35 . These methods rely on probabilistic mixture models @cite_36 , extracting features from sequential data and then learning clusters via a Gaussian mixture model (GMM) @cite_39 @cite_19 . Recently, a mixture model of Markov chains was proposed in @cite_26 , which learns potential clusters from aggregate data. For asynchronous event sequences, most existing clustering methods are feature-based, clustering event sequences from learned or predefined features. Typical examples include the Gaussian process-based multi-task learning method in @cite_3 and the multi-task multi-dimensional Hawkes processes in @cite_24 . Focusing on Hawkes processes, the feature-based mixture models in @cite_43 @cite_44 @cite_12 combine Hawkes processes with Dirichlet processes @cite_47 @cite_30 . However, these methods aim at modeling clusters of events or topics hidden within event sequences (i.e., sub-sequence clustering), and cannot learn clusters of event sequences. To our knowledge, model-based clustering methods for event sequences have rarely been considered.
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_26", "@cite_41", "@cite_36", "@cite_21", "@cite_3", "@cite_39", "@cite_24", "@cite_19", "@cite_43", "@cite_44", "@cite_47", "@cite_12" ], "mid": [ "2142590786", "2097747115", "2343329762", "2094373571", "", "2111263072", "2165291667", "", "2401267531", "2056243712", "2002413290", "2127434196", "2127498532", "2137644567" ], "abstract": [ "We introduce a new nonparametric clustering model which combines the recently proposed distance-dependent Chinese restaurant process (dd-CRP) and non-linear, spectral methods for dimensionality reduction. Our model retains the ability of nonparametric methods to learn the number of clusters from data. At the same time it addresses two key limitations of nonparametric Bayesian methods: modeling data that are not exchangeable and have many correlated features. Spectral methods use the similarity between documents to map them into a low-dimensional spectral space where we then compare several clustering methods. Our experiments on handwritten digits and text documents show that nonparametric methods such as the CRP or dd-CRP can perform as well as or better than k-means and also recover the true number of clusters. We improve the performance of the dd-CRP in spectral space by incorporating the original similarity matrix in its prior. This simple modification results in better performance than all other methods we compared to. We offer a new formulation and first experimental evaluation of a general Gibbs sampler for mixture modeling with distance-dependent CRPs.", "Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. 
The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity dissimilarity between two time series being compared, either in the forms of raw data, extracted features, or some model parameters. The past researchs are organized into three groups depending upon whether they work directly with the raw data either in the time or frequency domain, indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitation of previous research are discussed and several possible topics for future research are identified. Moreover, the areas that time series clustering have been applied to are also summarized, including the sources of data used. It is hoped that this review will serve as the steppingstone for those interested in advancing this area of research.", "Statistical models based on Markov chains, especially mixtures of Markov chains, have recently been studied and demonstrated to be effective in various data mining applications such as tourist flow analysis, animal migration modeling, and transportation administration. Nevertheless, the research so far has mainly focused on analyzing data at individual levels. Due to security and privacy reasons, however, the observations in practice usually consist of coarse-grained statistics of individual data, a.k.a. aggregate data, rendering learning mixtures of Markov chains an even more challenging problem. In this work, we show that this challenging problem, although intractable in its original form, can be solved approximately by posing structural constraints on the transition matrices. The proposed structural constraints include specifying active state sets corresponding to the chains and adding a pairwise sparse regularization term on transition matrices. 
Based on these two structural constraints, we propose a constrained least-squares method to learn mixtures of Markov chains. We further develop a novel iterative algorithm that decomposes the overall problem into a set of convex subproblems and solves each subproblem efficiently, making it possible to effectively learn mixtures of Markov chains from aggregate data. We propose a framework for generating synthetic data and analyze the complexity of our algorithm. Additionally, the empirical results of the convergence and the robustness of our algorithm are also presented. These results demonstrate the effectiveness and efficiency of the proposed algorithm, comparing with traditional methods. Experimental results on real-world data sets further validate that our algorithm can be used to solve practical problems.", "This paper presents a test of hypotheses to compare two stationary time series as well as an accompanying classification procedure that uses this test of hypotheses to cluster stationary time series. Our hypotheses testing procedure, which unlike the existing tests, does not require the time series to be independent, is based on the differences between estimated parameters of the autoregressive models that are fitted to the time series. The classification procedure is based on the p-value of the test that is applied to every pair of given time series.", "", "A new method is presented to get an insight into univariate time series data. The problem addressed is how to identify patterns and trends on multiple time scales (days, weeks, seasons) simultaneously. The solution presented is to cluster similar daily data patterns, and to visualize the average patterns as graphs and the corresponding days on a calendar. This presentation provides a quick insight into both standard and exceptional patterns. Furthermore, it is well suited to interactive exploration. 
Two applications, numbers of employees present and energy consumption, are presented.", "Point process data are commonly observed in fields like healthcare and the social sciences. Designing predictive models for such event streams is an under-explored problem, due to often scarce training data. In this work we propose a multitask point process model, leveraging information from all tasks via a hierarchical Gaussian process (GP). Nonparametric learning functions implemented by a GP, which map from past events to future rates, allow analysis of flexible arrival patterns. To facilitate efficient inference, we propose a sparse construction for this hierarchical model, and derive a variational Bayes method for learning and inference. Experimental results are shown on both synthetic data and as well as real electronic health-records data.", "", "We propose a Multi-task Multi-dimensional Hawkes Process (MMHP) for modeling event sequences where there exist multiple triggering patterns within sequences and structures across sequences. MMHP is able to model the dynamics of multiple sequences jointly by imposing structural constraints and thus systematically uncover clustering structure among sequences. We propose an effective and robust optimization algorithm to learn MMHP models, which takes advantage of alternating direction method of multipliers (ADMM), majorization minimization and Euler-Lagrange equations. Our experimental results demonstrate that MMHP performs well on both synthetic and real data.", "This article is concerned with variable selection for cluster analysis. The problem is regarded as a model selection problem in the model-based cluster analysis context. A general model generalizing the model of Raftery and Dean (2006) is proposed to specify the role of each variable. This model does not need any prior assumptions about the link between the selected and discarded variables. Models are compared with BIC. 
Variables role is obtained through an algorithm embedding two backward stepwise variable selection algorithms for clustering and linear regression. The consistency of the resulting criterion is proved under regularity conditions. Numerical experiments on simulated datasets and a genomics application highlight the interest of the proposed variable selection procedure.", "In many applications in social network analysis, it is important to model the interactions and infer the influence between pairs of actors, leading to the problem of dyadic event modeling which has attracted increasing interests recently. In this paper we focus on the problem of dyadic event attribution, an important missing data problem in dyadic event modeling where one needs to infer the missing actor-pairs of a subset of dyadic events based on their observed timestamps. Existing works either use fixed model parameters and heuristic rules for event attribution, or assume the dyadic events across actor-pairs are independent. To address those shortcomings we propose a probabilistic model based on mixtures of Hawkes processes that simultaneously tackles event attribution and network parameter inference, taking into consideration the dependency among dyadic events that share at least one actor. We also investigate using additive models to incorporate regularization to avoid overfitting. Our experiments on both synthetic and real-world data sets on international armed conflicts suggest that the proposed new method is capable of significantly improve accuracy when compared with the state-of-the-art for dyadic event attribution.", "Diffusion network inference and meme tracking have been two key challenges in viral diffusion. This paper shows that these two tasks can be addressed simultaneously with a probabilistic model involving a mixture of mutually exciting point processes. A fast learning algorithms is developed based on mean-field variational inference with budgeted diffusion bandwidth. 
The model is demonstrated with applications to the diffusion of viral texts in (1) online social networks (e.g., Twitter) and (2) the blogosphere on the Web.", "Dirichlet process (DP) mixture models are the cornerstone of non- parametric Bayesian statistics, and the development of Monte-Carlo Markov chain (MCMC) sampling methods for DP mixtures has enabled the application of non- parametric Bayesian methods to a variety of practical data analysis problems. However, MCMC sampling can be prohibitively slow, and it is important to ex- plore alternatives. One class of alternatives is provided by variational methods, a class of deterministic algorithms that convert inference problems into optimization problems (Opper and Saad 2001; Wainwright and Jordan 2003). Thus far, varia- tional methods have mainly been explored in the parametric setting, in particular within the formalism of the exponential family (Attias 2000; Ghahramani and Beal 2001; 2003). In this paper, we present a variational inference algorithm for DP mixtures. We present experiments that compare the algorithm to Gibbs sampling algorithms for DP mixtures of Gaussians and present an application to a large-scale image analysis problem.", "Clusters in document streams, such as online news articles, can be induced by their textual contents, as well as by the temporal dynamics of their arriving patterns. Can we leverage both sources of information to obtain a better clustering of the documents, and distill information that is not possible to extract using contents only? In this paper, we propose a novel random process, referred to as the Dirichlet-Hawkes process, to take into account both information in a unified framework. A distinctive feature of the proposed model is that the preferential attachment of items to clusters according to cluster sizes, present in Dirichlet processes, is now driven according to the intensities of cluster-wise self-exciting temporal point processes, the Hawkes processes. 
This new model establishes a previously unexplored connection between Bayesian Nonparametrics and temporal Point Processes, which makes the number of clusters grow to accommodate the increasing complexity of online streaming contents, while at the same time adapts to the ever changing dynamics of the respective continuous arrival time. We conducted large-scale experiments on both synthetic and real world news articles, and show that Dirichlet-Hawkes processes can recover both meaningful topics and temporal dynamics, which leads to better predictive performance in terms of content perplexity and arrival time of future documents." ] }
1701.08879
2585159303
We propose a new approach for interaction in Virtual Reality (VR) using mobile robots as proxies for haptic feedback. This approach allows VR users to have the experience of sharing and manipulating tangible physical objects with remote collaborators. Because participants do not directly observe the robotic proxies, the mapping between them and the virtual objects is not required to be direct. In this paper, we describe our implementation, various scenarios for interaction, and a preliminary user study.
Recently, a growing body of research has focused on the intersection of haptic feedback and collaboration. inFORM @cite_1 proposed utilizing shape displays in multiple ways to mediate interaction, including manipulation by actuating physical objects. Tangible Bits, proposed in 1997, aimed to empower collaboration through the manipulation of physical objects @cite_7 , and the idea was extended in 2008 @cite_14 . PSyBench, a physical shared workspace introduced in 1998, presented a new approach to enhancing remote collaboration and communication based on the idea of Tangible Interfaces @cite_10 . The concept of synchronized distributed physical objects was also introduced in PSyBench @cite_10 , which demonstrated the potential of physical remote collaboration. One contribution of this paper is to show how people can experience consistent physical feedback over distance, regardless of the physical configuration of the corresponding remote space. PSyBench @cite_9 only supported one-to-one mapping, whereas we extend both the mapping styles and the kinds of scenarios; moreover, its objects could not be lifted or made to display the same movements without VR support. inFORM @cite_1 did not support collaboration, and its materials were fixed to the table, whereas we offer a more lightweight approach. Snake-charmer @cite_17 explored similar ideas about one-to-many mapping; however, our system supports collaboration and wireless operation.
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_9", "@cite_1", "@cite_10", "@cite_17" ], "mid": [ "2042141825", "2149891956", "1998818664", "2143131345", "2127177812", "2286501983" ], "abstract": [ "Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http: tangible.media.mit.edu", "This paper presents our vision of Human Computer Interaction (HCI): \"Tangible Bits.\" Tangible Bits allows users to \"grasp & manipulate\" bits in the center of users’ attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems ‐ the metaDESK, transBOARD and ambientROOM ‐ to identify underlying research issues.", "We present interaction techniques for tangible tabletop interfaces that use active, motorized tangibles, what we call Tangible Bots. 
Tangible Bots can reflect changes in the digital model and assist users by haptic feedback, by correcting errors, by multi-touch control, and by allowing efficient interaction with multiple tangibles. A first study shows that Tangible Bots are usable for fine-grained manipulation (e.g., rotating tangibles to a particular orientation); for coarse movements, Tangible Bots become useful only when several tangibles are controlled simultaneously. Participants prefer Tangible Bots and find them less taxing than passive, non-motorized tangibles. A second study focuses on usefulness by studying how electronic musicians use Tangible Bots to create music with a tangible tabletop application. We conclude by discussing the further potential of active tangibles, and their relative benefits over passive tangibles and multi-touch.", "Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.", "Current systems for real-time distributed CSCW are largely rooted in traditional GUI-based groupware and voice video conferencing methodologies. 
In these approaches, interactions are limited to visual and auditory media, and shared environments are confined to the digital world. This paper presents a new approach to enhance remote collaboration and communication, based on the idea of Tangible Interfaces, which places a greater emphasis on touch and physicality. The approach is grounded in a concept called Synchronized Distributed Physical Objects, which employs telemanipulation technology to create the illusion that distant users are interacting with shared physical objects. We describe two applications of this approach: PSyBench, a physical shared workspace, and inTouch, a device for haptic interpersonal communication.", "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers." ] }
1701.08879
2585159303
We propose a new approach for interaction in Virtual Reality (VR) using mobile robots as proxies for haptic feedback. This approach allows VR users to have the experience of sharing and manipulating tangible physical objects with remote collaborators. Because participants do not directly observe the robotic proxies, the mapping between them and the virtual objects is not required to be direct. In this paper, we describe our implementation, various scenarios for interaction, and a preliminary user study.
Haptic feedback is mentioned frequently in VR training, especially in the medical field. The sense of touch is the earliest to develop in human embryology and is believed to be essential for practice @cite_5 @cite_0 . Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon @cite_13 . Haptic feedback has potential benefits not only in training but in other interactions as well. Reality-based interaction was proposed as a framework for post-WIMP interfaces @cite_4 . Tangible interactions are observable in both visual and haptic modalities, which can help people utilize their basic knowledge of how the physical world behaves @cite_6 .
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_0", "@cite_5", "@cite_13" ], "mid": [ "1997447598", "2032933675", "2031576182", "2057244156", "2009180946" ], "abstract": [ "We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research.", "We identify and present a major interaction approach for tangible user interfaces based upon systems of tokens and constraints. In these interfaces, tokens are discrete physical objects which represent digital information. Constraints are confining regions that are mapped to digital operations. These are frequently embodied as structures that mechanically channel how tokens can be manipulated, often limiting their movement to a single degree of freedom. Placing and manipulating tokens within systems of constraints can be used to invoke and control a variety of computational interpretations.We discuss the properties of the token+constraint approach; consider strengths that distinguish them from other interface approaches; and illustrate the concept with eleven past and recent supporting systems. We present some of the conceptual background supporting these interfaces, and consider them in terms of 's [2002] five questions for sensing-based interaction. 
We believe this discussion supports token+constraint systems as a powerful and promising approach for sensing-based interaction.", "Background Simulation tools offer the opportunity for the acquisition of surgical skill in the preclinical setting. Potential educational, safety, cost, and outcome benefits have brought increasing attention to this area in recent years. Utility in ongoing assessment and documentation of surgical skill, and in documenting proficiency and competency by standardized metrics, is another potential application of this technology. Significant work is yet to be done in validating simulation tools in the teaching of endoscopic, laparoscopic, and other surgical skills. Early data suggest face and construct validity, and the potential for clinical benefit, from simulation-based preclinical skills development. The purpose of this review is to highlight the status of simulation in surgical education, including available simulator options, and to briefly discuss the future impact of these modalities on surgical training.", "Explains the need for haptics (feeling of touch) in medical simulation systems. Describes a variety of laparoscopic training systems and other surgical simulators. Highlights the Reachin Technologies AB Application Programming Interface (API) which is a software tool that significantly speeds up the development of surgical simulators. Copyright © 2004 Robotic Publications Ltd.", "Purpose of Review Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon while minimizing trauma to the patient. However, widespread clinical success with RMIS has been marginal. It is hypothesized that the lack of haptic (force and tactile) feedback presented to the surgeon is a limiting factor. 
This review explains the technical challenges of creating haptic feedback for robot-assisted surgery and provides recent results that evaluate the effectiveness of haptic feedback in mock surgical tasks." ] }
1701.08879
2585159303
We propose a new approach for interaction in Virtual Reality (VR) using mobile robots as proxies for haptic feedback. This approach allows VR users to have the experience of sharing and manipulating tangible physical objects with remote collaborators. Because participants do not directly observe the robotic proxies, the mapping between them and the virtual objects is not required to be direct. In this paper, we describe our implementation, various scenarios for interaction, and a preliminary user study.
Manipulating multiple virtual objects is always a challenge, in that precisely-located haptic proxy objects are required. @cite_8 proposed multiple approaches to align physical and virtual objects. Redirected touching @cite_11 considered the inflexibility of passive haptic displays and introduced a deliberate inconsistency between real hands and virtual hands. In redirected touching, a single real object could provide haptic feedback for virtual objects of various shapes to enrich the mapping between virtual objects and physical proxies.
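The redirection idea can be illustrated with a small toy sketch (an illustrative assumption, not the cited implementation): the rendered hand is progressively offset so that it arrives at the virtual object exactly when the real hand reaches the single physical prop.

```python
import numpy as np

def retargeted_hand(real_hand, start, physical_prop, virtual_target):
    """Warp the rendered hand so it meets the virtual target when the
    real hand meets the physical prop (simple linear blend)."""
    total = np.linalg.norm(physical_prop - start)
    traveled = np.linalg.norm(real_hand - start)
    # Blend factor: 0 at the start of the reach, 1 on contact with the prop.
    alpha = np.clip(traveled / (total + 1e-9), 0.0, 1.0)
    # At alpha=0 the hands coincide; at alpha=1 the full offset is applied.
    return real_hand + alpha * (virtual_target - physical_prop)
```

Because the offset depends only on the chosen virtual target, one physical prop can serve several differently placed or shaped virtual objects, which is the essence of one-to-many mapping.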
{ "cite_N": [ "@cite_11", "@cite_8" ], "mid": [ "2055870384", "2405777275" ], "abstract": [ "There is an increasing interest in deployable virtual military training systems. Haptic feedback for these training systems can enable users to interact more naturally with the training environment, but is difficult to deploy. Passive haptic feedback is very compelling, but it is also inflexible. Changes made to virtual objects can require time-consuming changes to their physical passive-haptic counterparts. This poster explores the possibility of mapping many differently shaped virtual objects onto one physical object by warping virtual space and exploiting the dominance of the visual system. A first implementation that maps different virtual objects onto dynamically captured physical geometry is presented, and potential applications to deployable military trainers are discussed.", "Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge as each one needs to be accompanied with a precisely-located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation and a hybrid technique which combines both world and body manipulation. Our study results indicate that all our haptic retargeting techniques improve the sense of presence when compared to typical wand-based 3D control of virtual objects. 
Furthermore, our hybrid haptic retargeting achieved the highest satisfaction and presence scores while limiting the visible side-effects during interaction." ] }
1701.08879
2585159303
We propose a new approach for interaction in Virtual Reality (VR) using mobile robots as proxies for haptic feedback. This approach allows VR users to have the experience of sharing and manipulating tangible physical objects with remote collaborators. Because participants do not directly observe the robotic proxies, the mapping between them and the virtual objects is not required to be direct. In this paper, we describe our implementation, various scenarios for interaction, and a preliminary user study.
C-Slate presented a vision-based system that combined bimanual and tangible interaction with the sharing of remote gestures and physical objects as a new approach to remote collaboration @cite_12 . @cite_2 explored an augmented-reality approach to controlling distant objects, though without haptic feedback. Furthermore, with shared workspaces that can capture and remotely render the shapes of people and objects, users can experience haptic feedback through shape displays @cite_15 .
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_2" ], "mid": [ "2015637289", "2159839630", "2005864097" ], "abstract": [ "We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of user's body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.", "We introduce C-Slate, a new vision-based system, which utilizes stereo cameras above a commercially available tablet technology to support remote collaboration. The horizontally mounted tablet provides the user with high resolution stylus input, which is augmented by multi-touch interaction and recognition of untagged everyday physical objects using new stereo vision and machine learning techniques. This provides a novel and interesting interactive tabletop arrangement, capable of supporting a variety of fluid multi-touch interactions, including symmetric and asymmetric bimanual input, coupled with the potential for incorporating tangible objects into the user interface. When used in a remote context, these features are combined with the ability to see visual representations of remote users' hands and remote physical objects placed on top of the surface. 
This combination of bimanual and tangible interaction and sharing of remote gestures and physical objects provides a new way to collaborate remotely, complementing existing channels such as audio and video conferencing.", "An increase number of gesture sensors and AR(augmented reality) displays are emerging these years. There are more and more study working on spatial interaction. In this work, we focus on evaluating the feasibility of hand-based interaction designed for AR glasses to do basic manipulation on 3D virtual objects in AR applications with current interaction technology. We design a user study with 10 volunteers to discuss whether and how to use natural user interfaces like gestures, with the help of 3D menus, to finish the tasks of manipulating AR scenes. The interface including 3D menus presents the advantages of usability and merits of intuition; However, it also exposes some issues like ergonomic discomfort and limitation like imprecise input. So our results indicate the kind of interactive device and technology are valuable in natural interaction without extra knowledge, though they are limited in serious applications now." ] }
1701.08991
2949729733
Osteoarthritis (OA) is a common musculoskeletal condition typically diagnosed from radiographic assessment after clinical examination. However, a visual evaluation made by a practitioner suffers from subjectivity and is highly dependent on experience. Computer-aided diagnostics (CAD) could improve the objectivity of knee radiographic examination. The first essential step of knee OA CAD is to automatically localize the joint area. However, according to the literature this task itself remains challenging. The aim of this study was to develop a novel and computationally efficient method to tackle the issue. Here, three different datasets of knee radiographs were used (n = 473, 93, 77) to validate the overall performance of the method. Our pipeline consists of two parts: anatomically-based joint area proposals and their evaluation using Histogram of Oriented Gradients features and pre-trained Support Vector Machine classifier scores. The obtained results for these datasets show a mean intersection over union of 0.84, 0.79 and 0.78. Using a high-end computer, the method automatically annotates conventional knee radiographs within 14-16 ms and high-resolution ones within 170 ms. Our results demonstrate that the developed method is suitable for large-scale analyses.
In the literature, multiple approaches have been used to localize ROIs within radiographs: manual @cite_3 @cite_2 , semi-automatic @cite_24 , and fully automatic @cite_12 @cite_14 @cite_15 . To the best of our knowledge, only the studies by Anthony @cite_12 and Shamir @cite_15 focused on knee joint area localization. While the problem of knee joint area localization can be implicitly solved by annotating anatomical bone landmarks using deformable models @cite_0 @cite_23 , large-scale studies would be infeasible with these algorithms because they are computationally costly. Despite the presence of multiple approaches, no studies have so far reported their applicability across different datasets. Such cross-dataset validation is crucial for the development of clinically applicable solutions.
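The proposal-scoring step described in the abstract above (region candidates ranked by an SVM over gradient-orientation features) can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: the simplified global orientation histogram here only approximates a full block-normalized HOG descriptor, and the random patches stand in for anatomically generated proposals.

```python
import numpy as np
from sklearn.svm import LinearSVC

def orientation_histogram(patch, bins=9):
    """Gradient-orientation histogram: a crude stand-in for a HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientations in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

rng = np.random.default_rng(0)

# Toy training patches (64x64 "radiograph" crops); label 1 = joint area.
train = rng.random((40, 64, 64))
labels = np.tile([0, 1], 20)
clf = LinearSVC().fit([orientation_histogram(p) for p in train], labels)

# Rank candidate region proposals by SVM decision score; keep the best one.
proposals = rng.random((5, 64, 64))
scores = clf.decision_function([orientation_histogram(p) for p in proposals])
best = proposals[int(np.argmax(scores))]
```

In the actual pipeline, the proposals come from anatomical priors rather than random crops, and a proper multi-cell HOG descriptor is extracted per candidate before scoring.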
{ "cite_N": [ "@cite_14", "@cite_3", "@cite_24", "@cite_0", "@cite_23", "@cite_2", "@cite_15", "@cite_12" ], "mid": [ "2033519774", "", "2006609734", "2113023402", "1971017792", "2149496581", "2159988341", "2521048164" ], "abstract": [ "Osteoarthritic (OA) changes in knee joints can be assessed by analyzing the structure of trabecular bone (TB) in the tibia. This analysis is performed on TB regions selected manually by a human operator on x-ray images. Manual selection is time-consuming, tedious, and expensive. Even if a radiologist expert or highly trained person is available to select regions, high inter- and intraobserver variabilities are still possible. A fully automated image segmentation method was, therefore, developed to select the bone regions for numerical analyses of changes in bone structures. The newly developed method consists of image preprocessing, delineation of cortical bone plates (active shape model), and location of regions of interest (ROI). The method was trained on an independent set of 40 x-ray images. Automatically selected regions were compared to the ''gold standard'' that contains ROIs selected manually by a radiologist expert on 132 x-ray images. All images were acquired from subjects locked in a standardized standing position using a radiography rig. The size of each ROI is 12.8x12.8 mm. The automated method results showed a good agreement with the gold standard [similarity index (SI)=0.83 (medial) and 0.81 (lateral) and the offset=[-1.78, 1.27]x[-0.65,0.26] mm (medial) and [-2.15, 1.59]x[-0.58, 0.52] mm (lateral)]. Bland and Altman plots were constructed for fractal signatures, and changes of fractal dimensions (FD) to region offsets calculated between the gold standard and automatically selected regions were calculated. The plots showed a random scatter and the 95% confidence intervals were (-0.006, 0.008) and (-0.001, 0.011). The changes of FDs to region offsets were less than 0.035.
Previous studies showed that differences in FDs between non-OA and OA bone regions were greater than 0.05. ROIs were also selected by a second radiologist and then evaluated. Results indicated that the newly developed method could replace a human operator and produces bone regions with an accuracy that is sufficient for fractal analyses of bone texture.« less", "", "The progression of osteoarthritis (OA) can be monitored by measuring the minimum joint space width (mJSW) between the edges of the femoral condyle and the tibial plateau on radiographs of the knee. This is generally performed by a trained physician using a graduated magnifying lens and is prone to the subjectivity and variation associated with observer measurement. We have developed software that performs this measurement automatically on digitized radiographs. The test data consisted of 180 digitized radiographs of the knee (90 duplicate acquisitions) from 18 normal (nonarthritic) subjects and 38 images from 10 subjects with OA. These were digitized and manually cropped so that the images were free of nonanatomical structures and the knee was approximately centered. The software first determined the edge of the femoral condyle on 400 μm pixel subsampled images. Contours marking the location of the tibial plateau in the medial compartment were found on 100 μm images using the femoral edge as a reference. The algorithm was trained using an independent but similar data set and using a jackknife approach with the test data. The results were compared to contours drawn by a trained reader and the duplicate acquisitions were used to measure the reproducibility of the mJSW measurement. The reproducibility was 0.16 mm and 0.18 mm for normal and osteoarthritic knees, respectively, representing an improvement of approximately a factor of 2 over manual measurement. 
The algorithm also showed excellent agreement with the hand-drawn contours and with mJSW determined by the manual method.", "Statistical shape models are often learned from examples based on landmark correspondences between annotated examples. A method is proposed for learning such models from contours with inconsistent bifurcations and loops. It is evaluated on the task of segmenting tibial contours in knee radiographs. Results are presented using various features, distance weighted K nearest neighbours and differing eigenspace shape constraints.", "Summary Objective To evaluate the accuracy and sensitivity of a fully automatic shape model matching (FASMM) system to derive statistical shape models (SSMs) of the proximal femur from non-standardised anteroposterior (AP) pelvic radiographs. Design AP pelvic radiographs obtained with informed consent and appropriate ethical approval were available for 1105 subjects with unilateral hip osteoarthritis (OA) who had been recruited previously for The arcOGEN Study. The FASMM system was applied to capture the shape of the unaffected (i.e., without signs of radiographic OA) proximal femur from these radiographs. The accuracy and sensitivity of the FASMM system in calculating geometric measurements of the proximal femur and in shape representation were evaluated relative to validated manual methods. Results De novo application of the FASMM system had a mean point-to-curve error of less than 0.9 mm in 99% of images ( n = 266). Geometric measurements generated by the FASMM system were as accurate as those obtained manually. The analysis of the SSMs generated by the FASMM system for male and female subject groups identified more significant differences (in five of 17 SSM modes after Bonferroni adjustment) in their global proximal femur shape than those obtained from the analysis of conventional geometric measurements.
Multivariate gender-classification accuracy was higher when using SSM mode values (76.3%) than when using conventional hip geometric measurements (71.8%). Conclusions The FASMM system rapidly and accurately generates a global SSM of the proximal femur from radiographs of varying quality and resolution. This system will facilitate complex morphometric analysis of global shape variation across large datasets. The FASMM system could be adapted to generate SSMs from the radiographs of other skeletal structures such as the hand, knee or pelvis.", "Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with significance level of 0.01 were used. A comparison study between the performances of a SDM based classification system and two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) was conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP).
Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (x1.00-x1.35), noise associated with film graininess and quantum mottle ( 64x64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM based system produced comparable results to the LBP system. For the detection of knee OA, the SDM based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.", "Summary Objective To determine whether computer-based analysis can detect features predictive of osteoarthritis (OA) development in radiographically normal knees. Method A systematic computer-aided image analysis method weighted neighbor distances using a compound hierarchy of algorithms representing morphology (WND-CHARM) was used to analyze pairs of weight-bearing knee X-rays. Initial X-rays were all scored as normal Kellgren–Lawrence (KL) grade 0, and on follow-up approximately 20 years later either developed OA (defined as KL grade=2) or remained normal. Results The computer-aided method predicted whether a knee would change from KL grade 0 to grade 3 with 72% accuracy. Conclusion Radiographic features detectable using a computer-aided image analysis method can predict the future development of radiographic knee OA.", "This paper proposes a new approach to automatically quantify the severity of knee osteoarthritis (OA) from radiographs using deep convolutional neural networks (CNN). Clinically, knee OA severity is assessed using Kellgren & Lawrence (KL) grades, a five point scale.
Previous work on automatically predicting KL grades from radiograph images was based on training shallow classifiers using a variety of hand engineered features. We demonstrate that classification accuracy can be significantly improved using deep convolutional neural network models pre-trained on ImageNet and fine-tuned on knee OA images. Furthermore, we argue that it is more appropriate to assess the accuracy of automatic knee OA severity predictions using a continuous distance-based evaluation metric like mean squared error than it is to use classification accuracy. This leads to the formulation of the prediction of KL grades as a regression problem and further improves accuracy. Results on a dataset of X-ray images and KL grades from the Osteoarthritis Initiative (OAI) show a sizable improvement over the current state-of-the-art." ] }
1701.08991
2949729733
Osteoarthritis (OA) is a common musculoskeletal condition typically diagnosed from radiographic assessment after clinical examination. However, a visual evaluation made by a practitioner suffers from subjectivity and is highly dependent on the experience. Computer-aided diagnostics (CAD) could improve the objectivity of knee radiographic examination. The first essential step of knee OA CAD is to automatically localize the joint area. However, according to the literature this task itself remains challenging. The aim of this study was to develop a novel and computationally efficient method to tackle the issue. Here, three different datasets of knee radiographs were used (n = 473/93/77) to validate the overall performance of the method. Our pipeline consists of two parts: anatomically-based joint area proposal and their evaluation using Histogram of Oriented Gradients and the pre-trained Support Vector Machine classifier scores. The obtained results for the used datasets show the mean intersection over the union equal to: 0.84, 0.79 and 0.78. Using a high-end computer, the method allows one to automatically annotate conventional knee radiographs within 14-16ms and high resolution ones within 170ms. Our results demonstrate that the developed method is suitable for large-scale analyses.
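The mean intersection over the union quoted above is a standard box-overlap score; a minimal sketch follows (the [x1, y1, x2, y2] box convention here is an assumption for illustration, not necessarily the paper's exact representation):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes [x1, y1, x2, y2]."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Averaging this score between predicted joint boxes and ground-truth annotations over a dataset yields the mean IoU figures reported above.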
Finally, it should be mentioned that the problem of joint area localization is not limited to knee radiographs. The attention of the research community has also focused on hand radiographs, where OA occurs as well. It has recently been shown how the anatomic structure of the image can be utilized to annotate hand radiographic images @cite_16 . However, to the best of the authors' knowledge, there are no studies in the knee domain where such information is used to segment or annotate the image. In this study, we show how such information can be used to annotate knee radiographs.
{ "cite_N": [ "@cite_16" ], "mid": [ "1974851519" ], "abstract": [ "The measurement of joint space width (JSW) in hand x-ray images of patients suffering from Rheumatoid Arthritis (RA) is a time consuming task for radiologists. Manual assessment lacks accuracy and is observer-dependent, which hinders an accurate evaluation of joint degeneration in early diagnosis and follow-up studies. Automatic analysis of the JSW is crucial with regard to standardization, sensitivity, and reproducibility. In this paper, we focus on both joint location and joint margin detection. For the evaluation, five hand radiographs from RA patients, in which the joints have been manually delineated, are used. All finger joints are located correctly with margins differing 0.1 mm on average from the manual delineation." ] }
1701.08985
2952200063
We propose a deep multitask architecture for (DMHS), including , in . The system computes the figure-ground segmentation, semantically identifies the human body parts at pixel level, and estimates the 2d and 3d pose of the person. The model supports the joint training of all components by means of multi-task losses where early processing stages recursively feed into advanced ones for increasingly complex calculations, accuracy and robustness. The design allows us to tie a complete training protocol, by taking advantage of multiple datasets that would otherwise restrictively cover only some of the model components: complex 2d image data with no body part labeling and without associated 3d ground truth, or complex 3d data with limited 2d background variability. In detailed experiments based on several challenging 2d and 3d datasets (LSP, HumanEva, Human3.6M), we evaluate the sub-structures of the model, the effect of various types of training data in the multitask loss, and demonstrate that state-of-the-art results can be achieved at all processing levels. We also show that in the wild our monocular RGB architecture is perceptually competitive to a state-of-the art (commercial) Kinect system based on RGB-D data.
We share with @cite_9 @cite_33 @cite_39 the interest in building models that integrate 2d and 3d reasoning. We propose a fully trainable discriminative model for human recognition and reconstruction at 2d and 3d levels. We do not estimate human body shape, but we do estimate figure-ground segmentation, the semantic segmentation of the human body parts, as well as the 2d and 3d pose. The system is trainable, end-to-end, by means of multitask losses that can leverage the complementary properties of existing 2d and 3d human datasets. The model is fully automatic in the sense that both the human detection and body part segmentation and the 2d and 3d estimates are the result of recurrent stages of processing in a homogeneous, easy to understand and computationally efficient architecture. Our approach is complementary to @cite_33 @cite_39 : our model can benefit from a final optimization-based refinement and it would be useful to estimate the human body shape. In contrast, @cite_33 @cite_39 can benefit from the semantic segmentation of the human body parts for their shape fitting, and could use the accurate fully automatic 2d and 3d pose estimates we produce as initialization for their 3d to 2d refinement process.
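The multitask losses described above, which let datasets that cover only some of the model components contribute only their available terms, can be sketched schematically (task names, weights, and the masking scheme below are illustrative assumptions, not the paper's actual heads or training code):

```python
def multitask_loss(task_losses, task_weights, label_available):
    """Weighted sum of per-task losses, skipping tasks whose ground truth
    is missing for the current example (e.g., a 2d-only dataset contributes
    no 3d-pose term). All names here are illustrative."""
    total = 0.0
    for name, loss in task_losses.items():
        if label_available.get(name, False):
            total += task_weights.get(name, 1.0) * loss
    return total
```

In an actual training loop, each mini-batch would carry an availability mask per task, so heterogeneous 2d and 3d datasets can be mixed freely.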
{ "cite_N": [ "@cite_9", "@cite_33", "@cite_39" ], "mid": [ "2052747804", "2483862638", "2951940669" ], "abstract": [ "Recently, the emergence of Kinect systems has demonstrated the benefits of predicting an intermediate body part labeling for 3D human pose estimation, in conjunction with RGB-D imagery. The availability of depth information plays a critical role, so an important question is whether a similar representation can be developed with sufficient robustness in order to estimate 3D pose from RGB images. This paper provides evidence for a positive answer, by leveraging (a) 2D human body part labeling in images, (b) second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and (c) iterative structured-output modeling to contextualize the process based on 3D pose estimates. For robustness and generalization, we take advantage of a recent large-scale 3D human motion capture dataset, Human3.6M[18] that also has human body part labeling annotations available with images. We provide extensive experimental studies where alternative intermediate representations are compared and report a substantial 33 error reduction over competitive discriminative baselines that regress 3D human pose against global HOG features.", "We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. 
We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.", "Markerless motion capture algorithms require a 3D body with properly personalized skeleton dimension and or body shape and appearance to successfully track a person. Unfortunately, many tracking methods consider model personalization a different problem and use manual or semi-automatic model initialization, which greatly reduces applicability. In this paper, we propose a fully automatic algorithm that jointly creates a rigged actor model commonly used for animation - skeleton, volumetric shape, appearance, and optionally a body surface - and estimates the actor's motion from multi-view video input only. The approach is rigorously designed to work on footage of general outdoor scenes recorded with very few cameras and without background subtraction. Our method uses a new image formation model with analytic visibility and analytically differentiable alignment energy. For reconstruction, 3D body shape is approximated as Gaussian density field. For pose and shape estimation, we minimize a new edge-based alignment energy inspired by volume raycasting in an absorbing medium. We further propose a new statistical human body model that represents the body surface, volumetric Gaussian density, as well as variability in skeleton shape. Given any multi-view sequence, our method jointly optimizes the pose and shape parameters of this model fully automatically in a spatiotemporal way." ] }
1701.08799
2584366139
We consider the Threshold Activation Problem (TAP): given social network @math and positive threshold @math , find a minimum-size seed set @math that can trigger expected activation of at least @math . We introduce the first scalable, parallelizable algorithm with performance guarantee for TAP suitable for datasets with millions of nodes and edges; we exploit the bicriteria nature of solutions to TAP to allow the user to control the running time versus accuracy of our algorithm through a parameter @math : given @math , with probability @math our algorithm returns a solution @math with expected activation greater than @math , and the size of the solution @math is within factor @math of the optimal size. The algorithm runs in time @math , where @math , @math , refer to the number of nodes, edges in the network. The performance guarantee holds for the general triggering model of internal influence and also incorporates external influence, provided a certain condition is met on the cost-effectivity of seed selection.
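To illustrate the flavor of TAP, here is a toy greedy sketch in which every edge fires with probability 1, so expected activation reduces to plain graph reachability; this is only an illustration of threshold covering, not the paper's algorithm, which handles general stochastic triggering models via sampling and runs in parallel:

```python
def reachable(graph, seeds):
    """Nodes activated from `seeds` when every edge fires deterministically,
    so the cascade reduces to reachability in the directed graph."""
    seen, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in graph.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def greedy_tap(graph, nodes, threshold):
    """Greedily add the seed with the largest activation gain until at least
    `threshold` nodes are activated. A deterministic toy sketch only."""
    seeds = set()
    while len(reachable(graph, seeds)) < threshold:
        best = max(nodes, key=lambda v: len(reachable(graph, seeds | {v})))
        seeds.add(best)
    return seeds
```

With stochastic edges, the `reachable` call would be replaced by an estimate of expected activation, which is where the sampling and bicriteria guarantees of the paper come in.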
As compared with IM, much less effort has been devoted to scalable solutions to TAP that maintain performance guarantees; @cite_13 studied the TAP problem with monotonic and submodular models of influence propagation, but their bicriteria guarantees differ from ours, and they provide no efficient sampling method, which is required for scalability. @cite_3 considered external influence in a viral marketing context. However, their model of external influence is much less general than ours. Furthermore, they restrict external influence to only pass through seed consumers and have no discussion of sampling, scalability, or the TAP problem. @cite_17 @cite_4 studied methods to restrain propagation in social networks.
{ "cite_N": [ "@cite_4", "@cite_13", "@cite_3", "@cite_17" ], "mid": [ "2541209345", "", "2112351105", "2096619434" ], "abstract": [ "We propose DOCA (Detecting Overlapping Community Algorithm), a connection-based algorithm for discovering high quality overlapping community structures in social networks. Our proposed method is fast, very limited parameter dependent and only requires local knowledge about the network topology. Furthermore, the community structures discovered by DOCA are deterministic, i.e., no fuzzy community assignments are produced. DOCA's performance is certified by extensive experiments on real-world traces including Enron communication network, ArXiv citation and Astrophysics collaboration networks as well as Facebook and Foursquare social networks. The demonstrative benchmark with other detection methods highlights the efficiency of DOCA when discovering community structures of large-scale networks. By using DOCA to analyze the community structures of real datasets, we find that overlapping communities occur naturally and quite frequently, especially for top largest communities. In addition, overlapped nodes tend to be active users who participate in multiple communities at the same time. This happens not only on social networks but also on collaboration, citation and communication networks.", "", "In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model.
An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation. We prove that the AIM problem is NP-hard to approximate to within any constant factor via a reduction from Feige's k-prover proof system for 3-SAT5. We also give evidence that even when the social network graph is trivial (i.e. has no edges), a polynomial time constant factor approximation for AIM is unlikely. However, when we assume that the weighted bi-adjacency matrix that describes the influence of content providers on consumers is of constant rank, a common assumption often used in recommender systems, we provide a polynomial-time algorithm that achieves approximation ratio of (1 - 1/e - ε)/3 for any (polynomially small) ε > 0. Our algorithmic results still hold for a more general model where cascades in the social network follow a general monotone and submodular function.
To evaluate the performance of our approach we test it on Facebook network dataset [17] and compare the infection rates on several cases with the recent social-based method introduced in [21]. Experimental results show that our approach not only performs faster but also achieves lower infection rates than the social-based method on dynamic social networks." ] }
1701.08935
2766660042
This paper proposes an efficient method for computing selected generalized eigenpairs of a sparse Hermitian definite matrix pencil (A, B). Based on Zolotarev's best rational function approximations of the signum function and conformal mapping techniques, we construct the best rational function approximation of a rectangular function supported on an arbitrary interval. This new best rational function approximation is applied to construct spectrum filters of (A, B). Combining fast direct solvers and the shift-invariant GMRES, a hybrid fast algorithm is proposed to apply spectral filters efficiently. Compared to the state-of-the-art algorithm FEAST, the proposed rational function approximation is proved to be optimal among a larger function class, and the numerical implementation of the proposed method is also faster. The efficiency and stability of the proposed method are demonstrated by numerical examples from computational chemistry.
Many rational filters in the literature were constructed by discretizing the contour integral on the complex plane with an appropriate quadrature rule (e.g., the Gauss-Legendre quadrature rule @cite_24 , the trapezoidal quadrature rule @cite_40 @cite_6 , and the Zolotarev quadrature rule @cite_3 ). Here @math is a closed contour on the complex plane intersecting the real axis at @math and @math , with all desired eigenvalues inside @math and all other eigenvalues outside (see Figure (left) for an example). Suppose @math and @math are the quadrature points and weights in the discretization of the contour @math , respectively; then the contour integral is discretized as a rational function where @math , and @math for @math . Other methods based on conformal maps @cite_23 @cite_19 and optimization @cite_13 @cite_21 can also provide good rational filters.
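The quadrature idea above can be checked on a scalar argument: for a circular contour, the trapezoidal-rule discretization of the Cauchy integral yields a rational function that approximates the indicator of the enclosed region. This is a generic sketch of the contour-discretization principle, not the specific Zolotarev-based filter of the paper:

```python
import numpy as np

def circle_filter(x, center=0.0, radius=1.0, n=16):
    """Trapezoidal-rule discretization of (1/2*pi*i) * contour integral of
    dz/(z - x) over a circle. The result approximates the indicator of the
    enclosed disk: close to 1 for x inside the contour, close to 0 outside
    (accuracy degrades for x near the contour itself)."""
    theta = 2.0 * np.pi * np.arange(n) / n
    z = center + radius * np.exp(1j * theta)   # quadrature points z_j
    w = radius * np.exp(1j * theta) / n        # weights absorbing dz/(2*pi*i)
    return np.real(np.sum(w / (z - x)))
```

Applying the same rational function to a matrix pencil (replacing each term by a shifted linear solve) gives the spectral projector approximations used by FEAST-type methods.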
{ "cite_N": [ "@cite_21", "@cite_3", "@cite_6", "@cite_24", "@cite_19", "@cite_40", "@cite_23", "@cite_13" ], "mid": [ "", "1598589780", "", "1998906317", "", "1803217602", "2138249740", "624043996" ], "abstract": [ "", "The FEAST method for solving large sparse eigenproblems is equivalent to subspace iteration with an approximate spectral projector and implicit orthogonalization. This relation allows to characterize the convergence of this method in terms of the error of a certain rational approximant to an indicator function. We propose improved rational approximants leading to FEAST variants with faster convergence, in particular, when using rational approximants based on the work of Zolotarev. Numerical experiments demonstrate the possible computational savings especially for pencils whose eigenvalues are not well separated and when the dimension of the search space is only slightly larger than the number of wanted eigenvalues. The new approach improves both convergence robustness and load balancing when FEAST runs on multiple search intervals in parallel.", "", "A new numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques, and takes its inspiration from the contour integration and density matrix representation in quantum mechanics. It will be shown that this new algorithm - named FEAST - exhibits high efficiency, robustness, accuracy and scalability on parallel architectures. Examples from electronic structure calculations of Carbon nanotubes (CNT) are presented, and numerical performances and capabilities are discussed.", "", "Calculating portions of eigenvalues and eigenvectors of matrices or matrix pencils has many applications. 
An approach to this calculation for Hermitian problems based on a density matrix has been proposed in 2009 and a software package called FEAST has been developed. The density-matrix approach allows FEAST's implementation to exploit a key strength of modern computer architectures, namely, multiple levels of parallelism. Consequently, the software package has been well received and subsequently commercialized. A detailed theoretical analysis of Hermitian FEAST has also been established very recently. This paper generalizes the FEAST algorithm and theory, for the first time, to tackle non-Hermitian problems. Fundamentally, the new algorithm is basic subspace iteration or Bauer bi-iteration, except applied with a novel accelerator based on Cauchy integrals. The resulting algorithm retains the multi-level parallelism of Hermitian FEAST, making it a valuable new tool for large-scale computational science and engineering problems on leading-edge computing platforms.", "New methods are proposed for the numerical evaluation of @math or @math , where @math is a function such as @math or @math with singularities in @math and @math is a matrix with eigenvalues on or near @math . The methods are based on combining contour integrals evaluated by the periodic trapezoid rule with conformal maps involving Jacobi elliptic functions. The convergence is geometric, so that the computation of @math is typically reduced to one or two dozen linear system solves, which can be carried out in parallel.", "Abstract Solving (nonlinear) eigenvalue problems by contour integration, requires an effective discretization for the corresponding contour integrals. In this paper it is shown that good rational filter functions can be computed using (nonlinear least squares) optimization techniques as opposed to designing those functions based on a thorough understanding of complex analysis. 
The conditions that such an effective filter function should satisfy, are derived and translated in a nonlinear least squares optimization problem solved by optimization algorithms from Tensorlab. Numerical experiments illustrate the validity of this approach." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
Traditionally, local features have included contextual and orthographic information, affixes, character-based features, prediction history, etc. As argued by the CoNLL 2003 organizers, no feature set was deemed to be ideal for NERC , although many approaches for English refer to @cite_8 as a useful general approach.
{ "cite_N": [ "@cite_8" ], "mid": [ "2087556608" ], "abstract": [ "This paper describes a robust linear classification system for Named Entity Recognition. A similar system has been applied to the CoNLL text chunking shared task with state of the art performance. By using different linguistic features, we can easily adapt this system to other token-based linguistic tagging problems. The main focus of the current paper is to investigate the impact of various local linguistic features for named entity recognition on the CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) shared task data. We show that the system performance can be enhanced significantly with some relative simple token-based features that are available for many languages. Although more sophisticated linguistic features will also be helpful, they provide much less improvement than might be expected." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
Dictionaries are widely used to inject world knowledge via gazetteer matches as features in machine learning approaches to NERC. The best performing systems carefully compile their own gazetteers from a variety of sources. @cite_6 leverage a collection of 30 gazetteers, with matches against each one weighted as a separate feature; in this way they trust each gazetteer to a different degree. @cite_2 carefully compiled a large collection of English gazetteers extracted from US Census data and Wikipedia and applied them to the process of inducing word embeddings, with very good results.
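The idea of one match feature per gazetteer can be made concrete with a small sketch. This is not the cited systems' code; the function name, feature names, and the greedy longest-match strategy are illustrative assumptions.

```python
# Hypothetical sketch: one binary feature per gazetteer for each token,
# so a learner can weight (i.e., trust) each dictionary separately.
def gazetteer_features(tokens, gazetteers):
    """For each token, return a dict with one feature per matching gazetteer.

    `gazetteers` maps a gazetteer name to a set of lowercased entries;
    multi-word entries are matched greedily from each position.
    """
    features = [dict() for _ in tokens]
    lowered = [t.lower() for t in tokens]
    for name, entries in gazetteers.items():
        max_len = max((e.count(" ") + 1 for e in entries), default=1)
        for i in range(len(tokens)):
            for n in range(max_len, 0, -1):
                span = " ".join(lowered[i:i + n])
                if span in entries:
                    for j in range(i, i + n):
                        features[j]["in_" + name] = 1
                    break
    return features

feats = gazetteer_features(
    ["John", "Smith", "visited", "New", "York"],
    {"person": {"john smith"}, "location": {"new york"}},
)
```

All tokens inside a matched span receive the gazetteer's feature, mirroring token-level tagging schemes.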
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2004763266", "1570587036" ], "abstract": [ "We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset.", "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
The intuition behind non-local (or global) features is to treat all occurrences of the same named entity in a text similarly. @cite_7 proposed a method to produce the set of named entities for the whole sentence, where the optimal set is the coherent set of named entities that maximizes the sum of the confidences of the named entities in the set. @cite_6 developed three types of non-local features, analyzing global dependencies in a window of between 200 and 1000 tokens.
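One simple instance of a window-based non-local feature can be sketched as follows. The function and the majority-vote aggregation are assumptions for illustration, not the exact features of the cited systems.

```python
# Minimal sketch of a non-local feature: for each token, look back within a
# window and expose the majority label previously assigned to other
# occurrences of the same token ("O" if it has not occurred before).
from collections import Counter

def nonlocal_majority_feature(tokens, local_labels, window=200):
    feats = []
    for i, tok in enumerate(tokens):
        lo = max(0, i - window)
        prior = [local_labels[j] for j in range(lo, i) if tokens[j] == tok]
        feats.append(Counter(prior).most_common(1)[0][0] if prior else "O")
    return feats

feats = nonlocal_majority_feature(["Obama", "said", "Obama"],
                                  ["PER", "O", "O"])
```

Here the second occurrence of "Obama" inherits the label "PER" from the first, encouraging label consistency across the document.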
{ "cite_N": [ "@cite_6", "@cite_7" ], "mid": [ "2004763266", "1973702599" ], "abstract": [ "We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset.", "This paper presents a Named Entity Extraction (NEE) system for the CoNLL 2002 competition. The two main sub-tasks of the problem, recognition (NER) and classification (NEC), are performed sequentially and independently with separate modules. Both modules are machine learning based systems, which make use of binary AdaBoost classifiers." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
@cite_6 used Brown clusters as features, obtaining what was at the time the best published result of an English NERC system on the CoNLL 2003 test set. @cite_1 made a rather exhaustive comparison of Brown clusters, Collobert and Weston's embeddings, and HLBL embeddings for improving chunking and NERC. They show that in some cases combining word representation features was beneficial but, although they used Ratinov and Roth's (2009) system as a starting point, they did not manage to improve over the state of the art. Furthermore, they reported that Brown clustering features performed better than the word embeddings.
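Brown clusters assign each word a bit-string path in a binary merge tree, and a standard way to use them as features is to take several prefix lengths of that path. The cluster paths and prefix lengths below are made up for illustration.

```python
# Sketch of Brown-cluster prefix features: prefixes of the cluster bit-string
# act as coarse-to-fine word classes.
def brown_prefix_features(word, clusters, prefixes=(4, 6, 10, 20)):
    path = clusters.get(word.lower())
    if path is None:
        return {}
    return {"brown_%d" % p: path[:p] for p in prefixes}

feats = brown_prefix_features("London", {"london": "110100111"})
```

Short prefixes group many words together (high recall, low precision), while long prefixes approach word identity; exposing several lengths lets the learner pick the useful granularity.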
{ "cite_N": [ "@cite_1", "@cite_6" ], "mid": [ "2158139315", "2004763266" ], "abstract": [ "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http: metaoptimize.com projects wordreprs", "We analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust NER system. In particular, we address issues such as the representation of text chunks, the inference approach needed to combine local NER decisions, the sources of prior knowledge and how to use them within an NER system. In the process of comparing several solutions to these challenges we reach some surprising conclusions, as well as develop an NER system that achieves 90.8 F1 score on the CoNLL-2003 NER shared task, the best reported result for this dataset." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
@cite_2 extend the Skip-gram algorithm to learn 50-dimensional lexicon-infused phrase embeddings from 22 different gazetteers and Wikipedia. The resulting embeddings are used as features after scaling them by a real-valued hyper-parameter tuned on the development data. @cite_2 report the best result to date for English NERC on the CoNLL 2003 test data, 90.90 F1.
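The scaling step can be sketched in a few lines. The function name, feature names, and the example scale value are assumptions; the point is only that the embedding enters the model as real-valued features multiplied by a tuned constant.

```python
# Sketch of injecting word embeddings as real-valued features, scaled by a
# hyper-parameter tuned on development data (0.5 here is illustrative).
def embedding_features(word, embeddings, scale=0.5):
    vec = embeddings.get(word.lower())
    if vec is None:
        return {}
    return {"emb_%d" % i: scale * v for i, v in enumerate(vec)}

feats = embedding_features("Paris", {"paris": [0.2, -1.0]}, scale=0.5)
```

The scale balances the dense embedding features against the many sparse binary features in a linear model.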
{ "cite_N": [ "@cite_2" ], "mid": [ "1570587036" ], "abstract": [ "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
The best German CoNLL 2003 system (an ensemble) was outperformed by @cite_11. They trained the Stanford NER system, which uses a linear-chain Conditional Random Field (CRF) with a variety of features, including the lemma, POS tag, etc. Crucially, they included "distributional similarity" features in the form of Clark clusters induced from large unlabeled corpora: the Huge German Corpus (HGC) of around 175M tokens of newspaper text and the deWac corpus consisting of 1.71B tokens of web-crawled data. Using the clusters induced from deWac as a form of distributional similarity improved the results over the best CoNLL 2003 system by 4 points in F1.
{ "cite_N": [ "@cite_11" ], "mid": [ "202361227" ], "abstract": [ "We present a freely available optimized Named Entity Recognizer (NER) for German. It alleviates the small size of available NER training corpora for German with distributional generalization features trained on large unlabelled corpora. We vary the size and source of the generalization corpus and find improvements of 6 F1 score (in-domain) and 9 (out-of-domain) over simple supervised training." ] }
1701.09123
2396200542
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
The best participant of the English CoNLL 2003 shared task used the results of two externally trained NERC taggers to create an ensemble system. @cite_2 develop a stacked linear-chain CRF system: they train two CRFs with roughly the same features, where the second CRF can condition on the predictions made by the first CRF. Their "baseline" system uses a local feature set similar to Ratinov and Roth's (2009), complemented with gazetteers. This baseline system, combined with their phrase embeddings trained with infused lexicons, allows them to report the best CoNLL 2003 result so far.
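The stacking step, where the second tagger conditions on the first tagger's output, reduces to augmenting each token's feature set with the first-stage prediction. This is a generic sketch of that idea, not the cited system's code; the feature name is hypothetical.

```python
# Sketch of two-stage stacking: the second sequence tagger is trained on the
# original features plus the first tagger's predicted label for each token.
def stack_features(base_feats, first_stage_preds):
    """Augment each token's feature dict with the first model's prediction."""
    out = []
    for feats, pred in zip(base_feats, first_stage_preds):
        f = dict(feats)
        f["stage1_pred"] = pred
        out.append(f)
    return out

stacked = stack_features([{"w": "john"}, {"w": "smith"}],
                         ["B-PER", "I-PER"])
```

To avoid the second model simply copying the first, the first-stage predictions used at training time are typically produced by cross-validation rather than by a model that has seen the same data.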
{ "cite_N": [ "@cite_2" ], "mid": [ "1570587036" ], "abstract": [ "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data." ] }
1701.08787
2952206212
Robustness in response to unexpected events is always desirable for real-world networks. To improve the robustness of any networked system, it is important to analyze vulnerability to external perturbation such as random failures or adversarial attacks occurring to elements of the network. In this paper, we study an emerging problem in assessing the robustness of complex networks: the vulnerability of the clustering of the network to the failure of network elements. Specifically, we identify vertices whose failures will critically damage the network by degrading its clustering, evaluated through the average clustering coefficient. This problem is important because any significant change made to the clustering, resulting from element-wise failures, could degrade network performance such as the ability for information to propagate in a social network. We formulate this vulnerability analysis as an optimization problem, prove its NP-completeness and non-monotonicity, and we offer two algorithms to identify the vertices most important to clustering. Finally, we conduct comprehensive experiments in synthesized social networks generated by various well-known models as well as traces of real social networks. The empirical results over other competitive strategies show the efficacy of our proposed algorithms.
Vulnerability assessment has attracted a large amount of attention from the network science community. Work in the literature can be divided into two categories: measuring the robustness of a network and manipulating it. For measuring robustness, different measures and metrics have been proposed, such as graph connectivity, the diameter, the relative size of the largest component, and the average size of the isolated clusters. Other work suggests using the minimum node/edge cut or the second smallest non-zero eigenvalue of the Laplacian matrix. In terms of manipulating the robustness, different strategies have been proposed, such as using graph percolation. Other studies focus on excluding nodes by centrality measures, such as betweenness and the geodesic length, eigenvector centrality, the shortest path between node pairs, or the total pair-wise connectivity. @cite_0 @cite_1 developed integer programming frameworks to determine the critical nodes that minimize a connectivity metric subject to a budgetary constraint. For more information on network vulnerability assessments, the reader is referred to the surveys and references therein.
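The clustering-vulnerability objective described in the abstract can be made concrete with the average clustering coefficient: remove a vertex and measure how much the average drops. The graph below is a toy example chosen for illustration.

```python
# Pure-Python average clustering coefficient, and the drop caused by one
# vertex failure (the quantity a clustering-vulnerability attack maximizes).
from itertools import combinations

def avg_clustering(adj):
    """adj maps each vertex to the set of its neighbors (undirected)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering coefficient defined as 0 for degree < 2
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def remove_vertex(adj, v):
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

# Triangle a-b-c with a pendant vertex d attached to c.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
drop = avg_clustering(adj) - avg_clustering(remove_vertex(adj, "c"))
```

Removing "c" destroys the only triangle, so the average clustering falls from 7/12 to 0; ranking vertices by such drops is one brute-force baseline against which the cited heuristics can be compared.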
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2070207525", "2016935525" ], "abstract": [ "Recent work on the Internet, social networks, and the power grid has addressed the resilience of these networks to either random or targeted deletion of network nodes or links. Such deletions include, for example, the failure of Internet routers or power transmission lines. Percolation models on random graphs provide a simple representation of this process but have typically been limited to graphs with Poisson degree distribution at their vertices. Such graphs are quite unlike real-world networks, which often possess power-law or other highly skewed degree distributions. In this paper we study percolation on graphs with completely general degree distribution, giving exact solutions for a variety of cases, including site percolation, bond percolation, and models in which occupation probabilities depend on vertex degree. We discuss the application of our theory to the understanding of network resilience.", "How does the composition of a population affect the adoption of health behaviors and innovations? Homophily—similarity of social contacts—can increase dyadic-level influence, but it can also force less healthy individuals to interact primarily with one another, thereby excluding them from interactions with healthier, more influential, early adopters. As a result, an important network-level effect of homophily is that the people who are most in need of a health innovation may be among the least likely to adopt it. Despite the importance of this thesis, confounding factors in observational data have made it difficult to test empirically. We report results from a controlled experimental study on the spread of a health innovation through fixed social networks in which the level of homophily was independently varied. We found that homophily significantly increased overall adoption of a new health behavior, especially among those most in need of it." ] }
1701.08921
2950156734
It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation.A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex @math -minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
The problem of loop-closure detection has been extensively studied in the SLAM literature and many different solutions have been proposed over the years (e.g., see @cite_0 @cite_3 and references therein). In what follows, we briefly overview the work that closely relates to the proposed approach.
{ "cite_N": [ "@cite_0", "@cite_3" ], "mid": [ "2011767579", "2284029970" ], "abstract": [ "Long-term autonomous mobile robot operation requires considering place recognition decisions with great caution. A single incorrect decision that is not detected and reconsidered can corrupt the environment model that the robot is trying to build and maintain. This work describes a consensus-based approach to robust place recognition over time, that takes into account all the available information to detect and remove past incorrect loop closures. The main novelties of our work are: (1) the ability of realizing that, in light of new evidence, an incorrect past loop closing decision has been made; the incorrect information can be removed thus recovering the correct estimation with a novel algorithm; (2) extending our proposal to incremental operation; and (3) handling multi-session, spatially related or unrelated scenarios in a unified manner. We demonstrate our proposal, the RRR algorithm, on different odometry systems, e.g. visual or laser, using different front-end loop-closing techniques. For our experiments we use the efficient graph optimization framework g2o as back-end. We back our claims up with several experiments carried out on real data, in single and multi-session experiments showing better results than those obtained by state-of-the-art methods, comparisons against whom are also presented.", "Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines—particularly recognition in computer vision and animal navigation in neuroscience—have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. 
We start by introducing the concepts behind place recognition—the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description." ] }
1701.08921
2950156734
It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation.A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex @math -minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
Some recent work has focused on loop closure under extreme changes in the environment, such as different weather and/or lighting conditions at different times of day. For example, @cite_17 proposed SeqSLAM, which is able to localize under drastic lighting and weather changes by matching sequences of images with each other as opposed to single images. @cite_41 introduced experience-based maps that learn the different appearances of the same place as it gradually changes, in order to perform long-term localization. Building upon @cite_41, @cite_19 also discovered new images to attain better localization. In addition, @cite_38 @cite_32 have explored geometric features such as lines for the task of loop-closure detection in both indoor and outdoor scenarios. Note that if information invariant to such changes can be extracted, as in @cite_27 @cite_21 @cite_17 @cite_41 @cite_19 @cite_38 @cite_32, the proposed formulation can also be used to obtain loop-closure hypotheses. Essentially, in this work we focus on finding loop closures given some discriminative descriptions, such as descriptors and whole images, assuming no specific type of image representation.
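The sequence-matching idea can be sketched in a few lines: instead of scoring the current frame against each database frame alone, short aligned windows of frames are scored together. This is a simplified toy version (1-D descriptors, L1 distance, no velocity search), not the actual SeqSLAM implementation.

```python
# Minimal sketch of sequence-based place matching: pick the database index
# whose trailing window of frames best matches the recent query frames.
def seq_match(query_seq, database, seq_len=3):
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))  # L1 descriptor distance
    best, best_cost = None, float("inf")
    for i in range(seq_len - 1, len(database)):
        cost = sum(dist(q, database[i - seq_len + 1 + j])
                   for j, q in enumerate(query_seq[-seq_len:]))
        if cost < best_cost:
            best, best_cost = i, cost
    return best

# Toy run: the query sequence [1], [2], [3] aligns best with frames 1..3.
best = seq_match([[1], [2], [3]], [[0], [1], [2], [3], [9]])
```

Aggregating over a window makes the match robust to individual frames that look very different under changed lighting or weather, since only the sequence as a whole needs to be distinctive.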
{ "cite_N": [ "@cite_38", "@cite_41", "@cite_21", "@cite_32", "@cite_19", "@cite_27", "@cite_17" ], "mid": [ "2019336290", "2003919215", "1989484209", "2068341180", "1993892062", "2144824356", "2110405746" ], "abstract": [ "Most visual simultaneous localization and mapping systems use point features as their landmarks and adopt point-based feature descriptors to recognize them. Compared to point landmarks, however, lines have strength in conveying the structural information of the environment. Despite the benefit, they have not been widely used because lines are more difficult in detecting, tracking, and recognizing, and this delayed the use of lines as landmarks. In this paper, we propose a place recognition algorithm using straight line features, which enables reliable loop closure detections in large complex environments under significant illumination changes. A vocabulary tree trained with mean standard-deviation line descriptor is used in finding the candidate matches between keyframes, and a Bayesian filtering framework enables reliable keyframe matching for large-scale loop closures. The proposed algorithm is compared with state-of-the-art point-based methods using scale-invariant feature transform or speeded up robust features. The experimental results show that the proposed method outperforms the others in challenging indoor environments.", "This paper is about long-term navigation in environments whose appearance changes over time, suddenly or gradually. We describe, implement and validate an approach which allows us to incrementally learn a model whose complexity varies naturally in accordance with variation of scene appearance. It allows us to leverage the state of the art in pose estimation to build over many runs, a world model of sufficient richness to allow simple localisation despite a large variation in conditions. 
As our robot repeatedly traverses its workspace, it accumulates distinct visual experiences that in concert, implicitly represent the scene variation: each experience captures a visual mode. When operating in a previously visited area, we continually try to localise in these previous experiences while simultaneously running an independent vision-based pose estimation system. Failure to localise in a sufficient number of prior experiences indicates an insufficient model of the workspace and instigates the laying down of the live image sequence as a new distinct experience. In this way, over time we can capture the typical time-varying appearance of an environment and the number of experiences required tends to a constant. Although we focus on vision as a primary sensor throughout, the ideas we present here are equally applicable to other sensor modalities. We demonstrate our approach working on a road vehicle operating over a 3-month period at different times of day, in different weather and lighting conditions. We present extensive results analysing different aspects of the system and approach, in total processing over 136,000 frames captured from 37 km of driving.", "We propose a novel method for visual place recognition using bag of words obtained from accelerated segment test (FAST)+BRIEF features. For the first time, we build a vocabulary tree that discretizes a binary descriptor space and use the tree to speed up correspondences for geometrical verification. We present competitive results with no false positives in very different datasets, using exactly the same vocabulary and settings. The whole technique, including feature extraction, requires 22 ms frame in a sequence with 26 300 images that is one order of magnitude faster than previous approaches.", "In this paper, we propose a visual place recognition algorithm which uses only straight line features in challenging outdoor environments. 
Compared to point features used in most existing place recognition methods, line features are easily found in man-made environments and more robust to environmental changes such as illumination, viewing direction, or occlusion. Candidate matches are found using a vocabulary tree and their geometric consistency is verified by a motion estimation algorithm using line segments. The proposed algorithm operates in real-time, and it is tested with a challenging real-world dataset with more than 10,000 database images acquired in urban driving scenarios.", "In this work, we present a novel approach that allows a robot to improve its own navigation performance through introspection and then targeted data retrieval. It is a step in the direction of life-long learning and adaptation and is motivated by the desire to build robots that have plastic competencies which are not baked in. They should react to and benefit from use. We consider a particular instantiation of this problem in the context of place recognition. Based on a topic-based probabilistic representation for images, we use a measure of perplexity to evaluate how well a working set of background images explain the robot's online view of the world. Offline, the robot then searches an external resource to seek out additional background images that bolster its ability to localize in its environment when used next. In this way the robot adapts and improves performance through use. We demonstrate this approach using data collected from a mobile robot operating in outdoor workspaces.", "This paper describes a probabilistic approach to the problem of recognizing places based on their appearance. The system we present is not limited to localization, but can determine that a new observation comes from a previously unseen place, and so augment its map. Effectively this is a SLAM system in the space of appearance. 
Our probabilistic approach allows us to explicitly account for perceptual aliasing in the environment—identical but indistinctive observations receive a low probability of having come from the same place. We achieve this by learning a generative model of place appearance. By partitioning the learning problem into two parts, new place models can be learned online from only a single observation of a place. The algorithm complexity is linear in the number of places in the map, and is particularly suitable for online loop closure detection in mobile robotics.", "Learning and then recognizing a route, whether travelled during the day or at night, in clear or inclement weather, and in summer or winter is a challenging task for state of the art algorithms in computer vision and robotics. In this paper, we present a new approach to visual navigation under changing conditions dubbed SeqSLAM. Instead of calculating the single location most likely given a current image, our approach calculates the best candidate matching location within every local navigation sequence. Localization is then achieved by recognizing coherent sequences of these “local best matches”. This approach removes the need for global matching performance by the vision front-end - instead it must only pick the best match within any short sequence of images. The approach is applicable over environment changes that render traditional feature-based techniques ineffective. Using two car-mounted camera datasets we demonstrate the effectiveness of the algorithm and compare it to one of the most successful feature-based SLAM algorithms, FAB-MAP. The perceptual change in the datasets is extreme; repeated traverses through environments during the day and then in the middle of the night, at times separated by months or years and in opposite seasons, and in clear weather and extremely heavy rain. 
While the feature-based method fails, the sequence-based algorithm is able to match trajectory segments at 100% precision with recall rates of up to 60%." ] }
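The sequence-matching idea behind SeqSLAM can be sketched in a few lines. This is not the published implementation (which also contrast-normalizes the difference matrix and searches over a range of trajectory slopes); it assumes a precomputed frame-difference matrix and scores only constant-velocity trajectories:

```python
import numpy as np

def seq_match(diff, seq_len=10):
    """Toy SeqSLAM-style matcher: for each query index, score straight-line
    trajectories through a precomputed image-difference matrix `diff`
    (rows: database frames, cols: query frames) and return the database
    index whose local sequence accumulates the lowest total difference."""
    n_db, n_q = diff.shape
    matches = np.full(n_q, -1)
    for q in range(seq_len, n_q):
        best_score, best_db = np.inf, -1
        for db in range(seq_len, n_db):
            # constant-velocity trajectory ending at (db, q)
            score = sum(diff[db - k, q - k] for k in range(seq_len))
            if score < best_score:
                best_score, best_db = score, db
        matches[q] = best_db
    return matches

# Synthetic check: query frame q revisits database frame 20 + q
rng = np.random.default_rng(0)
diff = rng.uniform(0.5, 1.0, size=(60, 40))
for q in range(40):
    diff[20 + q, q] = 0.0        # true matches lie on a diagonal
matches = seq_match(diff)
print(matches[15], matches[30])  # → 35 50
```

Because the score is aggregated over a whole local sequence, a single ambiguous frame cannot produce a match on its own, which is what gives the method its robustness to severe appearance change.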
1701.08921
2950156734
It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex @math -minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
More recently, with the rediscovery of efficient machine learning techniques, Convolutional Neural Networks (CNNs) @cite_18 @cite_26 have been exploited to address loop closure detection @cite_9 @cite_22 . These networks are multi-layered architectures that are typically trained on millions of images for tasks such as object detection and scene classification. The internal representations at each layer are learned from the data itself and therefore can be used as features to replace hand-crafted features. Following this approach, the authors of @cite_9 extract features from different layers in the network and identify the layers that are useful for viewpoint- and illumination-invariant place recognition. Moreover, in @cite_22 landmarks are treated as objects by finding object proposals in the images, and features are extracted for them using deep networks. These features then allow for viewpoint-invariant place categorization by matching different objects from varied viewpoints. In these CNN-based place categorization techniques, the networks are used as feature extractors followed by some form of matching. In this paper, we show that these deep features can also be utilized in the proposed framework of loop-closure detection.
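The "feature extractor followed by some form of matching" pattern these CNN-based methods share can be illustrated with a minimal sketch. The descriptors here are random stand-ins (in practice they would be layer activations from a ConvNet); matching is plain cosine similarity with a threshold deciding between "revisit" and "new place":

```python
import numpy as np

def match_place(query, database, threshold=0.8):
    """Match a query descriptor against database descriptors by cosine
    similarity; return (best_index, score), with best_index None when
    the best score falls below the threshold (i.e., likely a new place)."""
    db = np.asarray(database, dtype=float)
    q = np.asarray(query, dtype=float)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    best = int(np.argmax(sims))
    return (best if sims[best] >= threshold else None), float(sims[best])

# Stand-in "deep features": 5 database images, query is a noisy revisit of #3
rng = np.random.default_rng(1)
database = rng.normal(size=(5, 128))
query = database[3] + 0.05 * rng.normal(size=128)
idx, score = match_place(query, database)
print(idx)  # → 3
```

Real systems add speed-ups on top of exactly this step (e.g. locality-sensitive hashing over the descriptors, as in @cite_9), but the underlying decision is still a nearest-neighbour comparison in feature space.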
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_26", "@cite_22" ], "mid": [ "2951399172", "2616180702", "", "1162411702" ], "abstract": [ "After the incredible success of deep learning in the computer vision domain, there has been much interest in applying Convolutional Network (ConvNet) features in robotic fields such as visual navigation and SLAM. Unfortunately, there are fundamental differences and challenges involved. Computer vision datasets are very different in character to robotic camera data, real-time performance is essential, and performance priorities can be different. This paper comprehensively evaluates and compares the utility of three state-of-the-art ConvNets on the problems of particular relevance to navigation for robots; viewpoint-invariance and condition-invariance, and for the first time enables real-time place recognition performance using ConvNets with large maps by integrating a variety of existing (locality-sensitive hashing) and novel (semantic search space partitioning) optimization techniques. We present extensive experiments on four real world datasets cultivated to evaluate each of the specific challenges in place recognition. The results demonstrate that speed-ups of two orders of magnitude can be achieved with minimal accuracy degradation, enabling real-time performance. We confirm that networks trained for semantic place categorization also perform better at (specific) place recognition when faced with severe appearance changes and provide a reference for which networks and layers are optimal for different aspects of the place recognition problem.", "Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features. 
The objective is to make these higher-level representations more abstract, with their individual features more invariant to most of the variations that are typically present in the training distribution, while collectively preserving as much as possible of the information in the input. Ideally, we would like these representations to disentangle the unknown factors of variation that underlie the training distribution. Such unsupervised learning of representations can be exploited usefully under the hypothesis that the input distribution P(x) is structurally related to some task of interest, say predicting P(y|x). This paper focuses on the context of the Unsupervised and Transfer Learning Challenge, on why unsupervised pre-training of representations can be useful, and how it can be exploited in the transfer learning scenario, where we care about predictions on examples that are not from the same distribution as the training distribution.", "", "Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint-invariance, condition-invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions. We demonstrate superior performance to current state-of-the-art techniques. 
Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation." ] }
1701.08921
2950156734
It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex @math -minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
It should be noted that in our previous conference publication @cite_29 , we preliminarily showed that the proposed loop-closing framework is general and can employ most hand-crafted features. Recently, this sparse-optimization-based framework has been extended to an incremental formulation that allows the previous solution of the sparse optimization to jump-start the next one, and further to a multi-step delayed detection of loops (instead of the single-step detection in our prior work @cite_29 ) in order to exploit the structured sparsity of the problem. In this paper, we present a more detailed analysis and thorough performance evaluations, including new experiments using deep features and validation in challenging multiple-revisit scenarios, as well as new comparisons against the well-known nearest-neighbour (NN) search.
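The core of this sparse formulation can be sketched as a small lasso problem: stack descriptors of previously seen images as dictionary columns and solve an ℓ1-regularized least-squares fit of the current image, so that a dominant coefficient marks the loop-closure candidate. This is only an illustration with a generic ISTA solver and arbitrary parameters, not the paper's constrained formulation or its solver:

```python
import numpy as np

def ista_lasso(D, b, lam=0.1, iters=500):
    """Solve min_x 0.5*||D x - b||^2 + lam*||x||_1 with ISTA
    (iterative soft-thresholding). A large entry x[j] suggests the
    current image b closes a loop with previous image D[:, j]."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = x - (D.T @ (D @ x - b)) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

# Dictionary of 6 previous image descriptors; the current image revisits #4
rng = np.random.default_rng(2)
D = rng.normal(size=(64, 6))
D /= np.linalg.norm(D, axis=0)
b = D[:, 4] + 0.01 * rng.normal(size=64)
x = ista_lasso(D, b)
print(int(np.argmax(np.abs(x))))  # → 4
```

Because the dictionary is just the set of past descriptors, new images can be appended as columns as the robot moves, which is what makes the formulation dictionary-learning-free and incremental.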
{ "cite_N": [ "@cite_29" ], "mid": [ "2294132961" ], "abstract": [ "It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight is that the loop-closing event inherently occurs sparsely, i.e., the image currently being taken matches with only a small subset (if any) of previous observations. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex 1-minimization problem. By leveraging on fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a global, unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation, with no restriction imposed on how images should be represented, while requiring only that the representations be close to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using public real-world datasets." ] }
1701.08644
2951530318
There has been significant interest in studying security games for modeling the interplay of attacks and defenses on various systems involving critical infrastructure, financial system security, political campaigns, and civil safeguarding. However, existing security game models typically either assume additive utility functions, or that the attacker can attack only one target. Such assumptions lead to tractable analysis, but miss key inherent dependencies that exist among different targets in current complex networks. In this paper, we generalize the classical security game models to allow for non-additive utility functions. We also allow attackers to be able to attack multiple targets. We examine such a general security game from a theoretical perspective and provide a unified view. In particular, we show that each security game is equivalent to a combinatorial optimization problem over a set system @math , which consists of the defender's pure strategy space. The key techniques we use are based on the transformation, projection of a polytope, and the ellipsoid method. This work settles several open questions in the security game domain and significantly extends the state-of-the-art of both the polynomially solvable and NP-hard classes of security games.
One line of research focuses on designing efficient algorithms to solve such games. @cite_22 proposes a compact representation technique, in which the security game can be equivalently represented by a polynomial-sized mixed-integer linear programming (MILP) problem. The issue in @cite_22 is that it only determines the optimal solution of the compact game instead of the optimal defender's mixed strategy. To solve this problem, @cite_32 introduces the Birkhoff-von Neumann theorem and shows that the defender's mixed strategy can be recovered under a specific condition. @cite_20 proposes a double-oracle algorithm to exactly solve the security game with an exponentially large representation, which can be regarded as a generalization of the traditional column generation technique for solving large-scale linear programming problems. There are also other works such as the Bayesian security game @cite_26 , the security game with quantal response @cite_31 and the security game with uncertain attacker behavior @cite_16 , etc.
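The Birkhoff-von Neumann step mentioned above can be made concrete: given marginal coverage probabilities that form a doubly stochastic matrix, a greedy decomposition recovers an implementable mixed strategy as a lottery over pure assignments. The sketch below is the generic textbook construction (using SciPy's assignment solver to find a positively supported permutation), not the specific recovery condition or algorithm of @cite_32:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(M, tol=1e-9):
    """Greedily write a doubly stochastic matrix M as a convex combination
    of permutation matrices (Birkhoff-von Neumann). In a security game this
    turns marginal coverage probabilities into a sampleable mixed strategy
    over pure resource-to-target assignments. Returns [(weight, cols), ...]
    where cols[i] is the target assigned to resource i."""
    M = np.asarray(M, dtype=float).copy()
    parts = []
    while M.max() > tol:
        # find a permutation supported entirely on positive entries
        cost = np.where(M > tol, 0.0, 1.0)
        rows, cols = linear_sum_assignment(cost)
        assert cost[rows, cols].sum() == 0.0, "matrix is not doubly stochastic"
        w = M[rows, cols].min()          # largest weight that can be peeled off
        parts.append((w, cols.copy()))
        M[rows, cols] -= w               # at least one entry becomes zero
    return parts

# Usage: a 3x3 doubly stochastic coverage matrix
M = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
parts = birkhoff_decompose(M)
recon = np.zeros_like(M)
for w, cols in parts:
    P = np.zeros_like(M)
    P[np.arange(3), cols] = 1.0
    recon += w * P
print(np.allclose(recon, M))  # → True
```

Each iteration zeroes at least one matrix entry, so the loop terminates after at most n^2 - n + 1 permutations, matching the classical bound on the decomposition size.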
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_32", "@cite_31", "@cite_16", "@cite_20" ], "mid": [ "2110504778", "2107381919", "1780330426", "2734718167", "2029010138", "" ], "abstract": [ "In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the follower or adversary) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and a robber (follower) has a chance to observe this strategy over time before choosing its own strategy of where to attack. This paper presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games. This algorithm, DOBSS, is based on a novel and compact mixed-integer linear programming formulation. Compared to the most efficient algorithm known previously for this problem, DOBSS is not only faster, but also leads to higher quality solutions, and does not suffer from problems of infeasibility that were faced by this previous algorithm. Note that DOBSS is at the heart of the ARMOR system that is currently being tested for security scheduling at the Los Angeles International Airport.", "Predictable allocations of security resources such as police officers, canine units, or checkpoints are vulnerable to exploitation by attackers. Recent work has applied game-theoretic methods to find optimal randomized security policies, including a fielded application at the Los Angeles International Airport (LAX). This approach has promising applications in many similar domains, including police patrolling for subway and bus systems, randomized baggage screening, and scheduling for the Federal Air Marshal Service (FAMS) on commercial flights. 
However, the existing methods scale poorly when the security policy requires coordination of many resources, which is central to many of these potential applications. We develop new models and algorithms that scale to much more complex instances of security games. The key idea is to use a compact model of security games, which allows exponential improvements in both memory and runtime relative to the best known algorithms for solving general Stackelberg games. We develop even faster algorithms for security games under payoff restrictions that are natural in many security domains. Finally, we introduce additional realistic scheduling constraints while retaining comparable performance improvements. The empirical evaluation comprises both random data and realistic instances of the FAMS and LAX problems. Our new methods scale to problems several orders of magnitude larger than the fastest known algorithm.", "Recently, algorithms for computing game-theoretic solutions have been deployed in real-world security applications, such as the placement of checkpoints and canine units at Los Angeles International Airport. These algorithms assume that the defender (security personnel) can commit to a mixed strategy, a so-called Stackelberg model. As pointed out by Kiekintveld et al. (2009), in these applications, generally, multiple resources need to be assigned to multiple targets, resulting in an exponential number of pure strategies for the defender. In this paper, we study how to compute optimal Stackelberg strategies in such games, showing that this can be done in polynomial time in some cases, and is NP-hard in others.", "While three deployed applications of game theory for security have recently been reported, we as a community of agents and AI researchers remain in the early stages of these deployments; there is a continuing need to understand the core principles for innovative security applications of game theory. 
Towards that end, this paper presents PROTECT, a game-theoretic system deployed by the United States Coast Guard (USCG) in the port of Boston for scheduling their patrols. USCG has termed the deployment of PROTECT in Boston a success, and efforts are underway to test it in the port of New York, with the potential for nationwide deployment. PROTECT is premised on an attacker-defender Stackelberg game model and offers five key innovations. First, this system is a departure from the assumption of perfect adversary rationality noted in previous work, relying instead on a quantal response (QR) model of the adversary's behavior --- to the best of our knowledge, this is the first real-world deployment of the QR model. Second, to improve PROTECT's efficiency, we generate a compact representation of the defender's strategy space, exploiting equivalence and dominance. Third, we show how to practically model a real maritime patrolling problem as a Stackelberg game. Fourth, our experimental results illustrate that PROTECT's QR model more robustly handles real-world uncertainties than a perfect rationality model. Finally, in evaluating PROTECT, this paper for the first time provides real-world data: (i) comparison of human-generated vs PROTECT security schedules, and (ii) results from an Adversarial Perspective Team's (human mock attackers) analysis.", "In a Stackelberg Security Game, a defender commits to a randomized deployment of security resources, and an attacker best-responds by attacking a target that maximizes his utility. While algorithms for computing an optimal strategy for the defender to commit to have had a striking real-world impact, deployed applications require significant information about potential attackers, leading to inefficiencies. We address this problem via an online learning approach. 
We are interested in algorithms that prescribe a randomized strategy for the defender at each step against an adversarially chosen sequence of attackers, and obtain feedback on their choices (observing either the current attacker type or merely which target was attacked). We design no-regret algorithms whose regret (when compared to the best fixed strategy in hindsight) is polynomial in the parameters of the game, and sublinear in the number of time steps.", "" ] }
1701.08644
2951530318
There has been significant interest in studying security games for modeling the interplay of attacks and defenses on various systems involving critical infrastructure, financial system security, political campaigns, and civil safeguarding. However, existing security game models typically either assume additive utility functions, or that the attacker can attack only one target. Such assumptions lead to tractable analysis, but miss key inherent dependencies that exist among different targets in current complex networks. In this paper, we generalize the classical security game models to allow for non-additive utility functions. We also allow attackers to be able to attack multiple targets. We examine such a general security game from a theoretical perspective and provide a unified view. In particular, we show that each security game is equivalent to a combinatorial optimization problem over a set system @math , which consists of the defender's pure strategy space. The key techniques we use are based on the transformation, projection of a polytope, and the ellipsoid method. This work settles several open questions in the security game domain and significantly extends the state-of-the-art of both the polynomially solvable and NP-hard classes of security games.
The earlier mentioned works studying the complexity of the security game focus on the game with a single attacker resource. However, none of these works provides a systematic understanding of the complexity properties or an efficient algorithm for the security game when the attacker has multiple resources and the utility functions are non-additive. In @cite_9 , the authors extend the classic security game model to the scenario of multiple attacker resources. They design a @math state-transition algorithm to exactly compute the Nash equilibrium in polynomial time. Such an algorithm is complicated and restricted to the case where the defender's resources are homogeneous, i.e., the defender can protect any subset of targets subject to a cardinality constraint. In practical scenarios such as FAMS, the defender's resources may be heterogeneous, and solving such a scenario is still an open question in the security game domain.
{ "cite_N": [ "@cite_9" ], "mid": [ "186410217" ], "abstract": [ "Algorithms for finding game-theoretic solutions are now used in several real-world security applications. This work has generally assumed a Stackelberg model where the defender commits to a mixed strategy first. In general two-player normal-form games, Stackelberg strategies are easier to compute than Nash equilibria, though it has recently been shown that in many security games, Stackelberg strategies are also Nash strategies for the defender. However, the work on security games so far assumes that the attacker attacks only a single target. In this paper, we generalize to the case where the attacker attacks multiple targets simultaneously. Here, Stackelberg and Nash strategies for the defender can be truly different. We provide a polynomial-time algorithm for finding a Nash equilibrium. The algorithm gradually increases the number of defender resources and maintains an equilibrium throughout this process. Moreover, we prove that Nash equilibria in security games with multiple attackers satisfy the interchange property, which resolves the problem of equilibrium selection in such games. On the other hand, we show that Stackelberg strategies are actually NP-hard to compute in this context. Finally, we provide experimental results." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
In @cite_1 , the authors circumvented this problem by quantizing the space of image patches. While they were able to predict a few high-resolution frames in the future, it seems dissatisfying to impose such a drastic assumption to simplify the prediction task.
{ "cite_N": [ "@cite_1" ], "mid": [ "1568514080" ], "abstract": [ "We propose a strong baseline model for unsupervised feature learning using video data. By learning to predict missing frames or extrapolate future frames from an input video sequence, the model discovers both spatial and temporal correlations which are useful to represent complex deformations and motion patterns. The models we propose are largely borrowed from the language modeling literature, and adapted to the vision domain by quantizing the space of image patches into a large dictionary. We demonstrate the approach on both a filling and a generation task. For the first time, we show that, after training on natural videos, such a model can predict non-trivial motions over short video sequences." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
A recent strong result is provided in @cite_6 . This paper describes a model that generates videos which exhibit substantial motion using a motion encoder, an image encoder and a cross convolution part with a decoder. This model also focuses on directly generating the pixels; however, as opposed to dynamic filter networks, the model is trained to generate the difference image for the next time step. By doing this, the model makes a strong implicit assumption that the background is uniform, without any texture, so that the differencing operation captures only the motion for the foreground object. In contrast, our model does not make such assumptions, and it can be applied to natural videos.
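The contrast between predicting raw pixels, difference images, and transformations can be made concrete with a toy version of transformation-based prediction: estimate a single global translation between the last two frames by phase correlation and re-apply it to extrapolate the next frame. Real videos need far richer, spatially varying transformations; this sketch only illustrates the principle:

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer translation (dy, dx) such that
    b == np.roll(a, (dy, dx), axis=(0, 1)), via phase correlation."""
    R = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    R /= np.abs(R) + 1e-12                     # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

# Toy "video": a random texture translating 2 pixels right per frame
rng = np.random.default_rng(3)
prev = rng.random((32, 32))
cur = np.roll(prev, (0, 2), axis=(0, 1))

dy, dx = estimate_shift(prev, cur)            # recover the transformation
pred = np.roll(cur, (dy, dx), axis=(0, 1))    # re-apply it to extrapolate

print(int(dy), int(dx))  # → 0 2
```

Predicting the transformation rather than the pixels keeps the prediction sharp here by construction: the extrapolated frame is an exact copy of observed pixels, not a blurred regression target.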
{ "cite_N": [ "@cite_6" ], "mid": [ "2470475590" ], "abstract": [ "We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods, which have tackled this problem in a deterministic or non-parametric way, we propose a novel approach that models future frames in a probabilistic manner. Our probabilistic model makes it possible for us to sample and synthesize many possible future frames from a single input image. Future frame synthesis is challenging, as it involves low- and high-level image and motion understanding. We propose a novel network structure, namely a Cross Convolutional Network to aid in synthesizing future frames; this network structure encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world videos. We also show that our model can be applied to tasks such as visual analogy-making, and present an analysis of the learned network representations." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
In @cite_4 , the authors describe a conditional VAE model consisting of three towers: an image tower, an encoder tower and a decoder tower. During training, the model is given an input image and a set of trajectories and is trained to reconstruct these input trajectories. The important difference is that during test time, given an input image, the model simply samples from the prior distribution over Z: the goal is to produce trajectories corresponding to that image that seem likely given the full data set.
{ "cite_N": [ "@cite_4" ], "mid": [ "2952390294" ], "abstract": [ "In a given scene, humans can often easily predict a set of immediate future events that might happen. However, generalized pixel-level anticipation in computer vision systems is difficult because machine learning struggles with the ambiguity inherent in predicting the future. In this paper, we focus on predicting the dense trajectory of pixels in a scene, specifically what will move in the scene, where it will travel, and how it will deform over the course of one second. We propose a conditional variational autoencoder as a solution to this problem. In this framework, direct inference from the image shapes the distribution of possible trajectories, while latent variables encode any necessary information that is not available in the image. We show that our method is able to successfully predict events in a wide variety of scenes and can produce multiple different predictions when the future is ambiguous. Our algorithm is trained on thousands of diverse, realistic videos and requires absolutely no human labeling. In addition to non-semantic action prediction, we find that our method learns a representation that is applicable to semantic vision tasks." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
In @cite_7 , and similarly in @cite_8 for robot tasks and @cite_2 for 3D objects, frames of a video game are predicted given an action (transformation) taken by an agent. While these papers show strong results, the motion in a natural video cannot be described by a single agent action, and these methods are therefore not widely applicable.
{ "cite_N": [ "@cite_2", "@cite_7", "@cite_8" ], "mid": [ "2410156224", "2962841471", "2400532028" ], "abstract": [ "We introduce SE3-Nets, which are deep networks designed to model rigid body motion from raw point cloud data. Based only on pairs of depth images along with an action vector and point wise data associations, SE3-Nets learn to segment effected object parts and predict their motion resulting from the applied force. Rather than learning point wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a table top scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate a far more consistent prediction of object motion than traditional flow based networks.", "Motivated by vision-based reinforcement learning (RL) problems, in particular Atari games from the recent benchmark Arcade Learning Environment (ALE), we consider spatio-temporal prediction problems where future image-frames depend on control variables or actions as well as previous frames. While not composed of natural scenes, frames in Atari games are high-dimensional in size, can involve tens of objects with one or more objects being controlled by the actions directly and many other objects being influenced indirectly, can involve entry and departure of objects, and can involve deep partial observability. We propose and evaluate two deep neural network architectures that consist of encoding, action-conditional transformation, and decoding layers based on convolutional neural networks and recurrent neural networks. Experimental results show that the proposed architectures are able to generate visually-realistic frames that are also useful for control over approximately 100-step action-conditional futures in some games. To the best of our knowledge, this paper is the first to make and evaluate long-term predictions on high-dimensional video conditioned by control inputs.", "A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a \"visual imagination\" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
Perhaps most similar to our approach, @cite_0 also separates motion from content, models motion directly, and employs the Spatial Transformer network. The biggest difference is that our approach is solely convolutional, which makes training fast and the optimization problem simpler. This also allows the model to scale to larger datasets and images with only modest memory and computational resources. Moreover, our model directly outputs full affine transforms instead of pixels (rather than only translations, as in equation 3 of @cite_0 ).
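As a rough illustration of the affine-transform idea discussed above (a sketch in plain Python, not the paper's actual model; all names are ours), the following applies a 2x3 affine matrix to a frame by nearest-neighbour backward warping. A predictor in this style only has to regress six matrix entries per frame instead of every pixel:

```python
def warp_affine(frame, A):
    """Warp a 2-D frame (list of rows) by a 2x3 affine matrix A using
    nearest-neighbour backward sampling; unmapped pixels become 0."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Backward map: where does output pixel (x, y) sample from?
            sx = A[0][0] * x + A[0][1] * y + A[0][2]
            sy = A[1][0] * x + A[1][1] * y + A[1][2]
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = frame[iy][ix]
    return out

# The identity matrix reproduces the frame; a translation column shifts it,
# which corresponds to the translation-only special case.
identity = [[1, 0, 0], [0, 1, 0]]
shift_right = [[1, 0, -1], [0, 1, 0]]  # output (x, y) samples source (x-1, y)
```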
{ "cite_N": [ "@cite_0" ], "mid": [ "2175030374" ], "abstract": [ "We describe a new spatio-temporal video autoencoder, based on a classic spatial image autoencoder and a novel nested temporal autoencoder. The temporal encoder is represented by a differentiable visual memory composed of convolutional long short-term memory (LSTM) cells that integrate changes over time. Here we target motion changes and use as temporal decoder a robust optical flow prediction module together with an image sampler serving as built-in feedback loop. The architecture is end-to-end differentiable. At each time step, the system receives as input a video frame, predicts the optical flow based on the current observation and the LSTM memory state as a dense transformation map, and applies it to the current frame to generate the next frame. By minimising the reconstruction error between the predicted next frame and the corresponding ground truth next frame, we train the whole system to extract features useful for motion estimation without any supervision effort. We present one direct application of the proposed framework in weakly-supervised semantic segmentation of videos through label propagation using optical flow." ] }
1701.08435
2583901669
In this work we propose a simple unsupervised approach for next frame prediction in video. Instead of directly predicting the pixels in a frame given past frames, we predict the transformations needed for generating the next frame in a sequence, given the transformations of the past frames. This leads to sharper results, while using a smaller prediction model. @PARASPLIT In order to enable a fair comparison between different video frame prediction models, we also propose a new evaluation protocol. We use generated frames as input to a classifier trained with ground truth sequences. This criterion guarantees that models scoring high are those producing sequences which preserve discriminative features, as opposed to merely penalizing any deviation, plausible or not, from the ground truth. Our proposed approach compares favourably against more sophisticated ones on the UCF-101 data set, while also being more efficient in terms of the number of parameters and computational cost.
Prior work relating to the evaluation protocol can be found in @cite_10 . The authors generate images using a set of predefined attributes and later show that they can recover these using a pretrained neural network. Our proposal extends this to videos, which is more complicated since both appearance and motion are needed for correct classification.
{ "cite_N": [ "@cite_10" ], "mid": [ "2189246496" ], "abstract": [ "This paper investigates a problem of generating images from visual attributes. Given the prevalent research for image recognition, the conditional image generation problem is relatively under-explored due to the challenges of learning a good generative model and handling rendering uncertainties in images. To address this, we propose a variety of attribute-conditioned deep variational auto-encoders that enjoy both effective representation learning and Bayesian modeling, from which images can be generated from specified attributes and sampled latent factors. We experiment with natural face images and demonstrate that the proposed models are capable of generating realistic faces with diverse appearance. We further evaluate the proposed models by performing attribute-conditioned image progression, transfer and retrieval. In particular, our generation method achieves superior performance in the retrieval experiment against traditional nearest-neighbor-based methods both qualitatively and quantitatively." ] }
1701.08343
2949521714
In a recent conference paper, we have reported a rhythm transcription method based on a merged-output hidden Markov model (HMM) that explicitly describes the multiple-voice structure of polyphonic music. This model solves a major problem of conventional methods that could not properly describe the nature of multiple voices as in polyrhythmic scores or in the phenomenon of loose synchrony between voices. In this paper we present a complete description of the proposed model and develop an inference technique, which is valid for any merged-output HMMs for which output probabilities depend on past events. We also examine the influence of the architecture and parameters of the method in terms of accuracies of rhythm transcription and voice separation and perform comparative evaluations with six other algorithms. Using MIDI recordings of classical piano pieces, we found that the proposed model outperformed other methods by more than 12 points in the accuracy for polyrhythmic performances and performed almost as good as the best one for non-polyrhythmic performances. This reveals the state-of-the-art methods of rhythm transcription for the first time in the literature. Publicly available source codes are also provided for future comparisons.
The note HMM has been extended to handle polyphonic performances @cite_6 . This is done by representing a polyphonic score as a linear sequence of chords or, more precisely, of note clusters consisting of one or more notes. Such a score representation is also commonly used for music analysis @cite_22 and score-performance matching @cite_9 @cite_5 . Chordal notes can be represented as self-transitions in the score model (Fig. ), and their IOIs can be described with a probability distribution peaked at zero. A polyphonic extension of metrical HMMs is possible in the same way.
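To make the self-transition idea concrete, here is a small hedged sketch (plain Python with toy parameters of our choosing, not the cited model): each HMM state is a note cluster, a self-transition emits another chordal note whose IOI is drawn from a distribution peaked at zero, and a forward transition moves to the next cluster with a longer IOI:

```python
import random

def build_transition_matrix(n_states, p_self=0.3):
    """Toy score model: each state (note cluster) self-transitions with
    probability p_self (another chordal note) and otherwise advances to
    the next cluster."""
    T = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        T[i][i] = p_self
        T[i][(i + 1) % n_states] = 1.0 - p_self
    return T

def sample_iois(T, n_steps, chord_scale=0.005, move_scale=0.5, seed=0):
    """Sample inter-onset intervals: near-zero for self-transitions
    (chordal notes), longer when moving to the next cluster."""
    rng = random.Random(seed)
    state, iois = 0, []
    for _ in range(n_steps):
        if rng.random() < T[state][state]:
            iois.append(rng.expovariate(1.0 / chord_scale))  # ~0: chord
        else:
            iois.append(rng.expovariate(1.0 / move_scale))   # new cluster
            state = (state + 1) % len(T)
    return iois
```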
{ "cite_N": [ "@cite_5", "@cite_9", "@cite_22", "@cite_6" ], "mid": [ "2097245893", "2150452896", "1897056990", "2130887255" ], "abstract": [ "We study indeterminacies in realization of ornaments and how they can be incorporated in a stochastic performance model applicable for music information processing such as score-performance matching. We point out the importance of temporal information, and propose a hidden Markov model which describes it explicitly and represents ornaments with several state types. Following a review of the indeterminacies, they are carefully incorporated into the model through its topology and parameters, and the state construction for quite general polyphonic scores is explained in detail. By analysing piano performance data, we find significant overlaps in inter-onset-interval distributions of chordal notes, ornaments, and inter-chord events, and the data is used to determine details of the model. The model is applied for score following and offline score-performance matching, yielding highly accurate matching for performances with many ornaments and relatively frequent errors, repeats, and skips.", "The capacity for real-time synchronization and coordination is a common ability among trained musicians performing a music score that presents an interesting challenge for machine intelligence. Compared to speech recognition, which has influenced many music information retrieval systems, music's temporal dynamics and complexity pose challenging problems to common approximations regarding time modeling of data streams. In this paper, we propose a design for a real-time music-to-score alignment system. Given a live recording of a musician playing a music score, the system is capable of following the musician in real time within the score and decoding the tempo (or pace) of its performance. The proposed design features two coupled audio and tempo agents within a unique probabilistic inference framework that adaptively updates its parameters based on the real-time context. Online decoding is achieved through the collaboration of the coupled agents in a Hidden Hybrid Markov semi-Markov framework, where prediction feedback of one agent affects the behavior of the other. We perform evaluations for both real-time alignment and the proposed temporal model. An implementation of the presented system has been widely used in real concert situations worldwide and the readers are encouraged to access the actual system and experiment the results.", "The automated discovery of recurrent patterns in music is a fundamental task in computational music analysis. This paper describes a new method for discovering patterns in the vertical and horizontal dimensions of polyphonic music. A formal representation of music objects is used to structure the musical surface, and several ideas for viewing pieces as successions of vertical structures are examined. A knowledge representation method is used to view pieces as sequences of relationships between music objects, and a pattern discovery algorithm is applied using this view of the Bach chorale harmonizations to find significant recurrent patterns. The method finds a small set of vertical patterns that occur in a large number of pieces in the corpus. Most of these patterns represent specific voice leading formulae within cadences.", "This paper discusses model-based rhythm and tempo analysis of music data in the MIDI format. The data is assumed to be obtained from a module performing multi-pitch analysis of music acoustic signals inside an automatic transcription system. In performed music, observed note lengths and local tempo fluctuate from the nominal note lengths and long-term tempo. Applying the framework of continuous speech recognition to rhythm recognition, we take a probabilistic top-down approach on the joint estimation of rhythm and tempo from the performed onset events in MIDI data. Short-term rhythm patterns are extracted from existing music samples and form a \"rhythm vocabulary.\" Local tempo is represented by a smooth curve. The entire problem is formulated as an integrated optimization problem to maximize a posterior probability, which can be solved by an iterative algorithm which alternately estimates rhythm and tempo. Evaluation of the algorithm through various experiments is also presented." ] }
1701.08393
2952636107
We propose a deep convolutional neural network (CNN) for face detection leveraging on facial attributes based supervision. We observe a phenomenon that part detectors emerge within CNN trained to classify attributes from uncropped face images, without any explicit part supervision. The observation motivates a new method for finding faces through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is data-driven, and carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variations. Our method achieves promising performance on popular benchmarks including FDDB, PASCAL Faces, AFW, and WIDER FACE.
Over the last decades, cascade-based detectors @cite_44 @cite_3 @cite_18 @cite_26 and deformable part model (DPM) detectors have dominated face detection. Viola and Jones @cite_26 introduced fast Haar-like feature computation via the integral image and a boosted cascade classifier. Various studies thereafter follow a similar pipeline. Among the variants, SURF cascade @cite_18 was one of the top performers. Later, Chen et al. @cite_44 demonstrated state-of-the-art face detection performance by learning face detection and face alignment jointly in the same cascade framework. Deformable part models define a face as a collection of parts, and a latent support vector machine is typically used to find the parts and their relationships. DPM has been shown to be more robust to occlusion than cascade-based methods. A recent study @cite_41 demonstrates good performance with just a vanilla DPM, achieving better results than more sophisticated DPM variants @cite_29 @cite_56 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_41", "@cite_29", "@cite_3", "@cite_56", "@cite_44" ], "mid": [ "", "2137401668", "", "2034025266", "2169696215", "2047508432", "204612701" ], "abstract": [ "", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.", "", "Despite the successes in the last two decades, the state-of-the-art face detectors still have problems in dealing with images in the wild due to large appearance variations. Instead of leaving appearance variations directly to statistical learning algorithms, we propose a hierarchical part based structural model to explicitly capture them. The model enables part subtype option to handle local appearance variations such as closed and open mouth, and part deformation to capture the global appearance variations such as pose and expression. In detection, candidate window is fitted to the structural model to infer the part location and part subtype, and detection score is then computed based on the fitted configuration. In this way, the influence of appearance variation is reduced. Besides the face model, we exploit the co-occurrence between face and body, which helps to handle large variations, such as heavy occlusions, to further boost the face detection performance. We present a phrase based representation for body detection, and propose a structural context model to jointly encode the outputs of face detector and body detector. Benefit from the rich structural face and body information, as well as the discriminative structural learning algorithm, our method achieves state-of-the-art performance on FDDB, AFW and a self-annotated dataset, under wide comparisons with commercial and academic methods. (C) 2013 Elsevier B.V. All rights reserved.", "Rotation invariant multiview face detection (MVFD) aims to detect faces with arbitrary rotation-in-plane (RIP) and rotation-off-plane (ROP) angles in still images or video sequences. MVFD is crucial as the first step in automatic face processing for general applications since face images are seldom upright and frontal unless they are taken cooperatively. In this paper, we propose a series of innovative methods to construct a high-performance rotation invariant multiview face detector, including the width-first-search (WFS) tree detector structure, the vector boosting algorithm for learning vector-output strong classifiers, the domain-partition-based weak learning method, the sparse feature in granular space, and the heuristic search for sparse feature selection. As a result of that, our multiview face detector achieves low computational complexity, broad detection scope, and high detection accuracy on both standard testing sets and real-life images.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "We present a new state-of-the-art approach for face detection. The key idea is to combine face alignment with detection, observing that aligned face shapes provide better features for face classification. To make this combination more effective, our approach learns the two tasks jointly in the same cascade framework, by exploiting recent advances in face alignment. Such joint learning greatly enhances the capability of cascade detection and still retains its realtime performance. Extensive experiments show that our approach achieves the best accuracy on challenging datasets, where all existing solutions are either inaccurate or too slow." ] }
1701.08393
2952636107
We propose a deep convolutional neural network (CNN) for face detection leveraging on facial attributes based supervision. We observe a phenomenon that part detectors emerge within CNN trained to classify attributes from uncropped face images, without any explicit part supervision. The observation motivates a new method for finding faces through scoring facial parts responses by their spatial structure and arrangement. The scoring mechanism is data-driven, and carefully formulated considering challenging cases where faces are only partially visible. This consideration allows our network to detect faces under severe occlusion and unconstrained pose variations. Our method achieves promising performance on popular benchmarks including FDDB, PASCAL Faces, AFW, and WIDER FACE.
The first stage of our model is partially inspired by generic object proposal approaches @cite_9 @cite_16 @cite_24 . Generic object proposal generators are commonly used in standard object detection algorithms to provide high-quality, category-independent bounding boxes. These methods typically involve redundant computation over regions that are covered by multiple proposals. To reduce computation, Ren et al. @cite_14 propose the Region Proposal Network (RPN), which generates proposals from high-level response maps in a CNN through a set of predefined anchor boxes. Neither generic object proposal methods nor the RPN considers the unique structure and parts of the face. Hence, no mechanism is available to recall faces when a face is only partially visible. These shortcomings motivate us to formulate the new faceness measure, which achieves high recall on faces while reducing the number of candidate windows to half that of the original RPN @cite_14 .
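For illustration, the sketch below generates RPN-style predefined anchor boxes over a feature map. The stride, scales, and aspect ratios are common illustrative values, not necessarily those used by the cited work; we take `ratio` as a width/height skew so that every anchor keeps area `scale**2`:

```python
from itertools import product

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Generate anchor boxes (x1, y1, x2, y2): one scale/ratio set
    centred at every feature-map cell, mapped back to image space."""
    anchors = []
    for fy, fx in product(range(feat_h), range(feat_w)):
        # Centre of this cell in image coordinates.
        cx, cy = fx * stride + stride / 2, fy * stride + stride / 2
        for scale, ratio in product(scales, ratios):
            w = scale * ratio ** 0.5   # wider for ratio > 1
            h = scale / ratio ** 0.5   # taller for ratio < 1
            anchors.append((cx - w / 2, cy - h / 2,
                            cx + w / 2, cy + h / 2))
    return anchors

# A 2x2 feature map yields 2*2 cells x (3 scales x 3 ratios) = 36 anchors.
anchors = make_anchors(2, 2)
```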
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_9", "@cite_16" ], "mid": [ "7746136", "2613718673", "", "2088049833" ], "abstract": [ "The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.", "", "This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/uijlings/SelectiveSearch.html )." ] }
1701.08513
2575980564
Predictive coding is attractive for compression of hyperspectral images onboard of spacecrafts in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter, we propose a rate control algorithm and integrate it in a lossy extension to the CCSDS-123 lossless compression recommendation. The proposed rate algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.
In @cite_9 the authors simplify the rate control algorithm by eliminating the rate-distortion optimization phase and working on a line-by-line basis instead of on blocks. This idea is also used in the present work, which, however, resolves major drawbacks of the method proposed in @cite_9 . The method proposed in this letter improves over @cite_4 , @cite_16 , @cite_9 by introducing the following novel points: choosing just one quantization step size per spectral line, which avoids complex rate-distortion optimization algorithms (also present in @cite_9 ); on-the-fly estimation of the residual statistics, which avoids running the predictor twice and yields significant gains in speed; simpler arithmetic that does not involve squaring operations to compute the statistical parameters; simpler arithmetic that speeds up access to the rate LUT; and a reduced number of LUT lookups.
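As a rough sketch of what choosing one quantization step per spectral line might look like (a high-rate Laplacian entropy formula stands in for the paper's rate LUT, and all constants are illustrative, not taken from the letter), the following picks the smallest step whose estimated rate fits a per-line budget, using only an on-the-fly mean absolute residual:

```python
import math

def estimate_rate_bits(residuals, q):
    """Rough per-sample rate (bits) of uniformly quantized Laplacian
    residuals with step q; the mean absolute residual serves as the
    on-the-fly statistic (a stand-in for a rate LUT)."""
    mad = sum(abs(r) for r in residuals) / len(residuals)
    b = max(mad, 1e-9)  # Laplacian scale estimate (no squaring needed)
    # Differential entropy of Laplace(b) is log2(2*e*b); quantizing with
    # step q subtracts log2(q) under the high-rate approximation.
    return max(math.log2(2 * math.e * b) - math.log2(q), 0.0)

def pick_step(residuals, target_bits, steps=(1, 2, 4, 8, 16, 32)):
    """Smallest quantization step whose estimated rate meets the
    per-line bit budget; falls back to the coarsest step."""
    for q in steps:
        if estimate_rate_bits(residuals, q) <= target_bits:
            return q
    return steps[-1]
```

A tighter budget forces a coarser step, mirroring the trade-off the rate controller navigates per line.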
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_4" ], "mid": [ "2520820497", "2065117103", "2085654430" ], "abstract": [ "Predictive lossy compression has been shown to represent a very flexible framework for lossless and lossy onboard compression of multispectral and hyperspectral images with quality and rate control. In this paper, we improve predictive lossy compression in several ways, using a standard issued by the Consultative Committee on Space Data Systems, namely CCSDS-123, as an example of application. First, exploiting the flexibility in the error control process, we propose a constant-signal-to-noise-ratio algorithm that bounds the maximum relative error between each pixel of the reconstructed image and the corresponding pixel of the original image. This is very useful to avoid low-energy areas of the image being affected by large errors. Second, we propose a new rate control algorithm that has very low complexity and provides performance equal to or better than existing work. Third, we investigate several entropy coding schemes that can speed up the hardware implementation of the algorithm and, at the same time, improve coding efficiency. These advances make predictive lossy compression an extremely appealing framework for onboard systems due to its simplicity, flexibility, and coding efficiency.", "In this paper we propose an efficient architecture for onboard implementation of rate-controlled predictive lossy compression of hyperspectral and multispectral images. In particular, we consider the recent state-of-the-art rate control algorithm for onboard predictive compression [1], and propose an architecture addressing two fundamental aspects of its hardware implementation. Specifically, this architecture overcomes the serial nature of the algorithm, as well as the large memory requirements of the entropy coding stage, achieving a pipelined implementation suitable for high-throughput onboard implementation, at a negligible cost in terms of coding efficiency.", "Predictive coding is attractive for compression on board of spacecraft due to its low computational complexity, modest memory requirements, and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop and the lack of a signal representation that packs the signal’s energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme intended for onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image to achieve the desired target rate while minimizing distortion. The rate control algorithm allows achieving lossy near-lossless compression and any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance, in this paper, we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that allows performing lossless, near-lossless, and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy in the output rate, rate–distortion characteristics, and is extremely competitive with respect to state-of-the-art transform coding." ] }
1701.08547
2953190515
Optimizing the performance of GPU kernels is challenging for both human programmers and code generators. For example, CUDA programmers must set thread and block parameters for a kernel, but might not have the intuition to make a good choice. Similarly, compilers can generate working code, but may miss tuning opportunities by not targeting GPU models or performing code transformations. Although empirical autotuning addresses some of these challenges, it requires extensive experimentation and search for optimal code variants. This research presents an approach for tuning CUDA kernels based on static analysis that considers fine-grained code structure and the specific GPU architecture features. Notably, our approach does not require any program runs in order to discover near-optimal parameter settings. We demonstrate the applicability of our approach in enabling code autotuners such as Orio to produce competitive code variants comparable with empirical-based methods, without the high cost of experiments.
Several prior efforts have attempted to discover optimal code forms and runtime parameter settings for accelerator-based programming models, typically by taking a domain-specific approach. For instance, Nukada and Matsuoka demonstrated automated tuning for a CUDA-based 3-D FFT library based on selecting the optimal number of threads @cite_11 . The MAGMA system for dense linear algebra solvers on GPU architectures incorporates a DAG representation and an empirical search process for modeling and optimization @cite_20 . The use of autotuning systems based on program transformations, such as Orio @cite_8 and CHiLL @cite_23 , enables optimization exploration on more general application code and across accelerator architectures @cite_2 . However, the complexity of the optimization space and the cost of empirical search are high. A recent work on autotuning GPU kernels focuses on loop scheduling and is based on the OpenUH compiler @cite_6 . Our approach attempts to leverage more static code analysis to better inform the autotuning process, thereby reducing the dependence on purely dynamic measurement and analysis to generate performance guidance.
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_23", "@cite_2", "@cite_20", "@cite_11" ], "mid": [ "2121546953", "2495972160", "", "2018568386", "2169150754", "2107483876" ], "abstract": [ "For many scientific applications, significant time is spent in tuning codes for a particular high-performance architecture. Tuning approaches range from the relatively nonintrusive (e.g., by using compiler options) to extensive code modifications that attempt to exploit specific architecture features. Intrusive techniques often result in code changes that are not easily reversible, and can negatively impact readability, maintainability, and performance on different architectures. We introduce an extensible annotation-based empirical tuning system called Orio that is aimed at improving both performance and productivity. It allows software developers to insert annotations in the form of structured comments into their source code to trigger a number of low-level performance optimizations on a specified code fragment. To maximize the performance tuning opportunities, the annotation processing infrastructure is designed to support both architecture-independent and architecture-specific code optimizations. Given the annotated code as input, Orio generates many tuned versions of the same operation and empirically evaluates the alternatives to select the best performing version for production use. We have also enabled the use of the Pluto automatic parallelization tool in conjunction with Orio to generate efficient OpenMP-based parallel code. We describe our experimental results involving a number of computational kernels, including dense array and sparse matrix operations.", "HPC developers aim to deliver the very best performance. To do so they constantly think about memory bandwidth, memory hierarchy, locality, floating point performance, power energy constraints and so on. 
On the other hand, application scientists aim to write performance portable code while exploiting the rich feature set of the hardware. By providing adequate hints to the compilers in the form of directives, appropriate executable code is generated. There are tremendous benefits from using directive-based programming. However, applications are also becoming more and more complex, and we need sophisticated tools such as auto-tuning to better explore the optimization space. In applications, loops typically form a major and time-consuming portion of the code. Scheduling these loops involves mapping from the loop iteration space to the underlying platform - for example, GPU threads. The user tries different scheduling techniques until the best one is identified. However, this process can be quite tedious and time consuming, especially for a relatively large application, as the user needs to record the performance of every schedule’s run. This paper aims to offer a better solution by proposing an auto-tuning framework that adopts an analytical model guiding the compiler and the runtime to choose an appropriate schedule for the loops automatically, and to determine the launch configuration for each of the loop schedules. Our experiments show that the loop schedule predicted by our framework achieves a speedup of 1.29x on average over the default loop schedule chosen by the compiler.", "", "Producing high-performance implementations from simple, portable computation specifications is a challenge that compilers have tried to address for several decades. More recently, a relatively stable architectural landscape has evolved into a set of increasingly diverging and rapidly changing CPU and accelerator designs, with the main common factor being dramatic increases in the levels of parallelism available. 
The growth of architectural heterogeneity and parallelism, combined with the very slow development cycles of traditional compilers, has motivated the development of autotuning tools that can quickly respond to changes in architectures and programming models, and enable very specialized optimizations that are not possible or likely to be provided by mainstream compilers. In this paper we describe the new OpenCL code generator and autotuner OrCL and the introduction of detailed performance measurement into the autotuning process. OrCL is implemented within the Orio autotuning framework, which enables the rapid development of experimental languages and code optimization strategies aimed at achieving good performance on new platforms without rewriting or hand-optimizing critical kernels. The combination of the new OpenCL autotuning and TAU measurement capabilities enables users to consistently evaluate autotuning effectiveness across a range of architectures, including several NVIDIA and AMD accelerators and Intel Xeon Phi processors, and to compare the OpenCL and CUDA code generation capabilities. We present results of autotuning several numerical kernels that typically dominate the execution time of iterative sparse linear system solution and key computations from a 3-D parallel simulation of solid fuel ignition.", "Solving dense linear systems of equations is a fundamental problem in scientific computing. Numerical simulations involving complex systems represented in terms of unknown variables and relations between them often lead to linear systems of equations that must be solved as fast as possible. We describe current efforts toward the development of these critical solvers in the area of dense linear algebra (DLA) for multicore with GPU accelerators. We describe how to code/develop solvers to effectively use the high computing power available in these new and emerging hybrid architectures. 
The approach taken is based on hybridization techniques in the context of Cholesky, LU, and QR factorizations. We use a high-level parallel programming model and leverage existing software infrastructure, e.g. optimized BLAS for CPU and GPU, and LAPACK for sequential CPU processing. Included also are architecture and algorithm-specific optimizations for standard solvers as well as mixed-precision iterative refinement solvers. The new algorithms, depending on the hardware configuration and routine parameters, can lead to orders of magnitude acceleration when compared to the same algorithms on standard multicore architectures that do not contain GPU accelerators. The newly developed DLA solvers are integrated and freely available through the MAGMA library.", "Existing implementations of FFTs on GPUs are optimized for specific transform sizes like powers of two, and exhibit unstable and peaky performance i.e., do not perform as well in other sizes that appear in practice. Our new auto-tuning 3-D FFT on CUDA generates high performance CUDA kernels for FFTs of varying transform sizes, alleviating this problem. Although auto-tuning has been implemented on GPUs for dense kernels such as DGEMM and stencils, this is the first instance that has been applied comprehensively to bandwidth intensive and complex kernels such as 3-D FFTs. Bandwidth intensive optimizations such as selecting the number of threads and inserting padding to avoid bank conflicts on shared memory are systematically applied. Our resulting autotuner is fast and results in performance that essentially beats all 3-D FFT implementations on a single processor to date, and moreover exhibits stable performance irrespective of problem sizes or the underlying GPU hardware." ] }
1701.08341
2585874438
Generic face detection algorithms do not perform very well in the mobile domain due to the significant presence of occluded and partially visible faces. One promising technique to handle the challenge of partial faces is to design face detectors based on facial segments. In this paper, two such face detectors, namely SegFace and DeepSegFace, are proposed that detect the presence of a face given arbitrary combinations of certain face segments. Both methods use proposals from facial segments as input that are found using weak boosted classifiers. SegFace is a shallow and fast algorithm using traditional features, tailored for situations where real-time constraints must be satisfied. On the other hand, DeepSegFace is a more powerful algorithm based on a deep convolutional neural network (DCNN) architecture. DeepSegFace offers certain advantages over other DCNN-based face detectors as it requires a relatively small amount of data to train by utilizing a novel data augmentation scheme and is very robust to occlusion by design. Extensive experiments show the superiority of the proposed methods, especially DeepSegFace, over other state-of-the-art face detectors in terms of precision-recall and ROC curves on two mobile face datasets.
The performance breakthrough observed after the introduction of Deep Convolutional Neural Networks (DCNN) can be attributed to the availability of large labeled datasets, the availability of GPUs, the hierarchical nature of the deep networks, and regularization techniques such as dropout @cite_31 .
{ "cite_N": [ "@cite_31" ], "mid": [ "2160532515" ], "abstract": [ "We present a comprehensive survey of face detection 'in-the-wild'. We critically describe the advances in the three main families of algorithms. We comment on the performance of the state-of-the-art in the current benchmarks. We outline future research avenues on the topic and beyond. Face detection is one of the most studied topics in computer vision literature, not only because of the challenging nature of face as an object, but also due to the countless applications that require the application of face detection as a first step. During the past 15 years, tremendous progress has been made due to the availability of data in unconstrained capture conditions (so-called 'in-the-wild') through the Internet, the effort made by the community to develop publicly available benchmarks, as well as the progress in the development of robust computer vision algorithms. In this paper, we survey the recent advances in real-world face detection techniques, beginning with the seminal Viola-Jones face detector methodology. These techniques are roughly categorized into two general schemes: rigid templates, learned mainly via boosting based methods or by the application of deep neural networks, and deformable models that describe the face by its parts. Representative methods will be described in detail, along with a few additional successful methods that we briefly go through at the end. Finally, we survey the main databases used for the evaluation of face detection algorithms and recent benchmarking efforts, and discuss the future of face detection." ] }
1701.08680
2585616392
Today, wearable internet-of-things (wIoT) devices continuously flood the cloud data centers at an enormous rate. This increases a demand to deploy an edge infrastructure for computing, intelligence, and storage close to the users. The emerging paradigm of fog computing could play an important role to make wIoT more efficient and affordable. Fog computing is known as the cloud on the ground. This paper presents an end-to-end architecture that performs data conditioning and intelligent filtering for generating smart analytics from wearable data. In wIoT, wearable sensor devices serve on one end while the cloud backend offers services on the other end. We developed a prototype of smart fog gateway (a middle layer) using Intel Edison and Raspberry Pi. We discussed the role of the smart fog gateway in orchestrating the process of data conditioning, intelligent filtering, smart analytics, and selective transfer to the cloud for long-term storage and temporal variability monitoring. We benchmarked the performance of developed prototypes on real-world data from smart e-textile gloves. Results demonstrated the usability and potential of proposed architecture for converting the real-world data into useful analytics while making use of knowledge-based models. In this way, the smart fog gateway enhances the end-to-end interaction between wearables (sensor devices) and the cloud.
The widespread use of wearables and the internet of things has led to several interesting applications as well as unprecedented challenges. Edge computing has emerged to enhance data management and analytics in wIoT, and fog computing is emerging as a tool for such situations. The data generated by wearables are temporal and spatial in nature; employing edge devices for their analysis and visualization yields efficient solutions and improves overall power efficiency. The big data generated by various applications can be characterized by four Vs, namely volume, velocity, variety, and veracity @cite_10 . The harmony between wIoT and big data could lead to generating valuable analytics from big data. Fog computing holds great promise to reduce the burden of wearable big data at the edge of the network. The authors in @cite_2 propose a novel BigEAR big data framework that identifies the mood of a person from various activities such as laughing, singing, crying, arguing, and sighing. Our proposed Fog gateway can be made to incorporate such a versatile clinical speech processing framework; such frameworks are well accounted for in the literature, e.g., in @cite_7 , where the authors present a fog computing interface for processing clinical speech data.
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_2" ], "mid": [ "2293066912", "2403249872", "2426356326" ], "abstract": [ "This paper promotes the concept of smart and connected communities (SCC), which is evolving from the concept of smart cities. SCC are envisioned to address synergistically the needs of remembering the past (preservation and revitalization), the needs of living in the present (livability), and the needs of planning for the future (attainability). Therefore, the vision of SCC is to improve livability, preservation, revitalization, and attainability of a community. The goal of building SCC for a community is to live in the present, plan for the future, and remember the past. We argue that the Internet of Things (IoT) has the potential to provide a ubiquitous network of connected devices and smart sensors for SCC, and big data analytics has the potential to enable the move from IoT to the real-time control desired for SCC. We highlight mobile crowdsensing and cyber-physical cloud computing as the two most important IoT technologies in promoting SCC. As a case study, we present TreSight, which integrates IoT and big data analytics for smart tourism and sustainable cultural heritage in the city of Trento, Italy.", "There is an increasing demand for smart fog-computing gateways as the size of cloud data is growing. This paper presents a Fog computing interface (FIT) for processing clinical speech data. FIT builds upon our previous work on EchoWear, a wearable technology that validated the use of smartwatches for collecting clinical speech data from patients with Parkinson's disease (PD). The fog interface is a low-power embedded system that acts as a smart interface between the smartwatch and the cloud. It collects, stores, and processes the speech data before sending speech features to secure cloud storage. 
We developed and validated a working prototype of FIT that enabled remote processing of clinical speech data to get clinical speech features such as loudness, short-time energy, zero-crossing rate, and spectral centroid. We used speech data from six patients with PD in their homes for validating FIT. Our results showed the efficacy of FIT as a Fog interface to translate the clinical speech processing chain (CLIP) from a cloud-based backend to a fog-based smart gateway.", "This paper presents a novel BigEAR big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected when the user performs social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify moods of the wearer from various activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant for psychologists who intend to monitor/infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to know how the conversations affect emotional and social well-being. In the state-of-the-art methods, psychologists and their team have to hear the audio recordings for making these inferences by subjective evaluations that not only are time-consuming and costly, but also demand manual data coding for thousands of audio files. The BigEAR framework automates the audio analysis. We computed the accuracy of BigEAR with respect to the ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer." ] }
1701.08680
2585616392
Today, wearable internet-of-things (wIoT) devices continuously flood the cloud data centers at an enormous rate. This increases a demand to deploy an edge infrastructure for computing, intelligence, and storage close to the users. The emerging paradigm of fog computing could play an important role to make wIoT more efficient and affordable. Fog computing is known as the cloud on the ground. This paper presents an end-to-end architecture that performs data conditioning and intelligent filtering for generating smart analytics from wearable data. In wIoT, wearable sensor devices serve on one end while the cloud backend offers services on the other end. We developed a prototype of smart fog gateway (a middle layer) using Intel Edison and Raspberry Pi. We discussed the role of the smart fog gateway in orchestrating the process of data conditioning, intelligent filtering, smart analytics, and selective transfer to the cloud for long-term storage and temporal variability monitoring. We benchmarked the performance of developed prototypes on real-world data from smart e-textile gloves. Results demonstrated the usability and potential of proposed architecture for converting the real-world data into useful analytics while making use of knowledge-based models. In this way, the smart fog gateway enhances the end-to-end interaction between wearables (sensor devices) and the cloud.
Fog computing, as defined in @cite_13 , is a model that complements the cloud by decentralizing computing resources (for example, servers, storage, applications, and services) from data centers towards users, improving the quality of service and the user experience. In other words, fog computing moves computation away from the cloud and towards the edge of the network, closer to the user. In this way, fog computing reduces the latency and frequency of communication between a user and an edge node; moreover, it can improve the efficiency and performance of applications @cite_4 . In @cite_9 , the authors emphasize the increasing need to implement machine learning algorithms, including deep learning, on resource-constrained mobile embedded devices with limited memory and computing power; they used a @math -bit network for model compression, achieving a good trade-off between model size and performance. The scope and opportunities of fog computing are vast: it supports a growing variety of applications such as those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded Artificial Intelligence (AI) @cite_6 .
{ "cite_N": [ "@cite_9", "@cite_13", "@cite_6", "@cite_4" ], "mid": [ "2584322061", "1981094339", "2472333518", "2308814875" ], "abstract": [ "With the rapid proliferation of Internet of Things and intelligent edge devices, there is an increasing need for implementing machine learning algorithms, including deep learning, on resource-constrained mobile embedded devices with limited memory and computation power. Typical large Convolutional Neural Networks (CNNs) need large amounts of memory and computational power, and cannot be deployed on embedded devices efficiently. We present Two-Bit Networks (TBNs) for model compression of CNNs with edge weights constrained to (-2, -1, 1, 2), which can be encoded with two bits. Our approach can reduce the memory usage and improve computational efficiency significantly while achieving good performance in terms of classification accuracy, thus representing a reasonable tradeoff between model size and performance.", "The size of multi-modal, heterogeneous data collected through various sensors is growing exponentially. It demands intelligent data reduction, data mining and analytics at edge devices. Data compression can reduce the network bandwidth and transmission power consumed by edge devices. This paper proposes, validates and evaluates Fog Data, a service-oriented architecture for Fog computing. The centerpiece of the proposed architecture is a low-power embedded computer that carries out data mining and data analytics on raw data collected from various wearable sensors used for telehealth applications. The embedded computer collects the sensed data as time series, analyzes it, and finds similar patterns present. Patterns are stored, and unique patterns are transmitted. Also, the embedded computer extracts clinically relevant information that is sent to the cloud. A working prototype of the proposed architecture was built and used to carry out case studies on telehealth big data applications. 
Specifically, our case studies used the data from the sensors worn by patients with either speech motor disorders or cardiovascular problems. We implemented and evaluated both generic and application specific data mining techniques to show orders of magnitude data reduction and hence transmission power savings. Quantitative evaluations were conducted for comparing various data mining techniques and standard data compression techniques. The obtained results showed substantial improvement in system efficiency using the Fog Data architecture.", "Fog is an emergent architecture for computing, storage, control, and networking that distributes these services closer to end users along the cloud-to-things continuum. It covers both mobile and wireline scenarios, traverses across hardware and software, resides on network edge but also over access networks and among end users, and includes both data plane and control plane. As an architecture, it supports a growing variety of applications, including those in the Internet of Things (IoT), fifth-generation (5G) wireless systems, and embedded artificial intelligence (AI). This survey paper summarizes the opportunities and challenges of fog, focusing primarily in the networking context of IoT.", "Rapid growth in the Internet of Things (IoT) has resulted in a massive growth of data generated by these devices and sensors put on the Internet. Physical-cyber-social (PCS) big data consist of this IoT data, complemented by relevant Web-based and social data of various modalities. Smart data is about exploiting this PCS big data to get deep insights and make it actionable, and making it possible to facilitate building intelligent systems and applications. This article discusses key AI research in semantic computing, cognitive computing, and perceptual computing. Their synergistic use is expected to power future progress in building intelligent systems and applications for rapidly expanding markets in multiple industries. 
Over the next two years, this column on IoT will explore many challenges and technologies on intelligent use and applications of IoT data." ] }
1701.08608
2571871533
This letter presents a three-dimensional (3-D) visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field. Cutting the peduncle cleanly is one of the most difficult stages of the harvesting process, where the peduncle is the part of the crop that attaches it to the main stem of the plant. Accurate peduncle detection in 3-D space is, therefore, a vital step in reliable autonomous harvesting of sweet peppers, as this can lead to precise cutting while avoiding damage to the surrounding plant. This letter makes use of both color and geometry information acquired from an RGB-D sensor and utilizes a supervised-learning approach for the peduncle detection task. The performance of the proposed method is demonstrated and evaluated by using qualitative and quantitative results [the area-under-the-curve (AUC) of the detection precision-recall curve]. We are able to achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. We release a set of manually annotated 3-D sweet pepper and peduncle images to assist the research community in performing further research on this topic.
This section reviews existing methods for detecting peduncles and crops using 3D geometry and visual features. Such techniques are widely used for autonomous crop inspection and detection tasks. Cubero demonstrated the detection of various fruit peduncles using radius and curvature signatures @cite_0 . The Euclidean distance and the angle rate change between each of the points on the contour and the fruit centroid are calculated; the presence of a peduncle yields rapid changes in these metrics and can be detected using a specified threshold. Blasco @cite_13 and Ruiz @cite_18 presented peduncle detection of oranges, peaches, and apples using a Bayesian discriminant model of RGB colour information. The size of a colour-segmented area was then calculated and assigned to pre-defined classes. These methods are better suited to the quality control and inspection of crop peduncles after the crops have been harvested than to harvesting automation, as they require an inspection chamber that provides ideal lighting conditions with a clean background, no occlusions, good viewpoints, and high-quality static imagery.
{ "cite_N": [ "@cite_0", "@cite_18", "@cite_13" ], "mid": [ "1967106076", "2013785102", "2064808659" ], "abstract": [ "The berry size of wine-grapes has often been considered to influence wine composition and quality, as it is related to the skin-to-pulp ratio of the berry and the concentration of skin-located compounds that play a key role in the wine quality. The size and weight of wine-grapes are usually measured by hand, making it a slow, tedious and inaccurate process. This paper focuses on two main objectives aimed at automating this process using image analysis: (1) to develop a fast and accurate method for detecting and removing the pedicel in images of berries, and (2) to accurately determine the size and weight of the berry. A method to detect the peduncle of fruits is presented based on a novel signature of the contour. This method has been developed specifically for grapevine berries, and was later extended and tested with an independent set of other fruits with different shapes and sizes such as peppers, pears, apples or mandarins. Using this approach, the system has been capable of correctly estimating the berry weight (R2 > 0.96) and size (R2 > 0.97) of wine-grapes and of assessing the size of other fruits like mandarins, apples, pears and red peppers (R2 > 0.93). The proven performance of the image analysis methodology developed may be easily implemented in automated inspection systems to accurately estimate the weight of a wide range of fruits including wine-grapes. 
In this case, the implementation of this system on sorting tables after de-stemming may provide the winemaker with very useful information about the potential quality of the wine.", "Three image analysis methods were studied and evaluated to solve the problem of removing long stems attached to mechanically harvested oranges: colour segmentation based on linear discriminant analysis, contour curvature analysis, and a thinning process which involves iterating until the stem becomes a skeleton. These techniques are able to determine the presence or absence of a stem with certainty, to locate the stems from random views with more than 90% accuracy and from profile images with an accuracy ranging from 92·4% to 100% depending on the method used. Finally, determination of the length and cutting point of the stem is achieved with only 3·8% of failures.", "Fruit and vegetables are normally presented to consumers in batches. The homogeneity and appearance of these have a significant effect on consumer decisions. For this reason, the presentation of agricultural produce is manipulated at various stages from the field to the final consumer and is generally oriented towards the cleaning of the product and sorting by homogeneous categories. The project ESPRIT 3, reference 9230 ‘Integrated system for handling, inspection and packing of fruit and vegetable (SHIVA)’ developed a robotic system for the automatic, non-destructive inspection and handling of fruit. The aim of this paper is to report on the machine vision techniques developed at the Instituto Valenciano de Investigaciones Agrarias for the on-line estimation of the quality of oranges, peaches and apples, and to evaluate the efficiency of these techniques regarding the following quality attributes: size, colour, stem location and detection of external blemishes. The segmentation procedure used, based on a Bayesian discriminant analysis, allowed fruits to be precisely distinguished from the background. 
Thus, determination of size was properly solved. The colours of the fruits estimated by the system were well correlated with the colorimetric index values that are currently used as standards. Good results were obtained in the location of the stem and the detection of blemishes. The classification system was tested on-line with apples, obtaining a good performance when classifying the fruit in batches, and a repeatability in blemish detection and size estimation of 86% and 93%, respectively. The precision and repeatability of the system was found to be similar to those of manual grading." ] }
1701.08608
2571871533
This letter presents a three-dimensional (3-D) visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field. Cutting the peduncle cleanly is one of the most difficult stages of the harvesting process, where the peduncle is the part of the crop that attaches it to the main stem of the plant. Accurate peduncle detection in 3-D space is, therefore, a vital step in reliable autonomous harvesting of sweet peppers, as this can lead to precise cutting while avoiding damage to the surrounding plant. This letter makes use of both color and geometry information acquired from an RGB-D sensor and utilizes a supervised-learning approach for the peduncle detection task. The performance of the proposed method is demonstrated and evaluated by using qualitative and quantitative results [the area-under-the-curve (AUC) of the detection precision-recall curve]. We are able to achieve an AUC of 0.71 for peduncle detection on field-grown sweet peppers. We release a set of manually annotated 3-D sweet pepper and peduncle images to assist the research community in performing further research on this topic.
Strawberry peduncle detection was reported by @cite_29 . The region of interest (ROI) was pre-defined using prior knowledge, and the boundary point between a fruit and its peduncle was detected using colour information. The inclination of the peduncle - the angle between the vertical axis and the boundary point - was then computed. The boundary point is easy to distinguish for red strawberries, but it is challenging to apply this approach to green sweet peppers.
{ "cite_N": [ "@cite_29" ], "mid": [ "2064245748" ], "abstract": [ "We developed a strawberry-harvesting robot, consisting of a cylindrical manipulator, end-effector, machine vision unit, storage unit and travelling unit, for application to an elevated substrate culture. The robot was based on the development concepts of night operation, peduncle handling and task sharing with workers, to overcome the robotic harvesting problems identified by previous studies, such as low work efficiency, low success rate, fruit damage, difficulty of detection in unstable illumination and high cost. In functional tests, the machine vision assessments of fruit maturity agreed with human assessments for the Amaotome and Beni-hoppe cultivars, but the performance for Amaotome was significantly better. Moreover, the machine vision unit correctly detected a peduncle of the target fruit at a rate of 60%. In harvesting tests conducted throughout the harvest season on target fruits with a maturity of 80% or more, the successful harvesting rate of the system was 41.3% when fruits were picked using a suction device before cutting the peduncle, while the rate was 34.9% when fruits were picked without suction. There were no significant differences between the two picking methods in terms of unsuccessful picking rates. The execution time for the successful harvest of a single fruit, including the time taken to transfer the harvested fruit to a tray, was 11.5 s." ] }
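The strawberry record above computes the peduncle inclination as the angle between the vertical axis and the fruit/peduncle boundary point. A minimal sketch of that geometry (the coordinate convention and function name are assumptions for illustration, not taken from the cited paper):

```python
import math

def peduncle_inclination(fruit_center, boundary_point):
    """Angle in degrees between the vertical axis and the vector from the
    fruit centre to the fruit/peduncle boundary point.
    Image coordinates assumed: x grows rightward, y grows downward."""
    dx = boundary_point[0] - fruit_center[0]
    # Boundary point above the centre gives a positive "upward" component.
    dy = fruit_center[1] - boundary_point[1]
    # atan2(dx, dy) is 0 when the peduncle points straight up.
    return math.degrees(math.atan2(dx, dy))
```

For example, a boundary point directly above the fruit centre yields 0°, while one offset equally up and to the right yields 45°.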
1701.08289
2952671661
In this report, we present a new face detection scheme using deep learning and achieve state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art Faster R-CNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pretraining, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance, making it the best model in terms of ROC curves among all the published methods on the FDDB benchmark.
Face detection has been extensively studied in the computer vision literature. Before 2000, despite many extensive studies, the practical performance of face detection was far from satisfactory until the milestone work proposed by Viola and Jones @cite_13 @cite_12 . In particular, the VJ framework @cite_13 was the first to apply rectangular Haar-like features in a cascaded AdaBoost classifier to achieve real-time face detection. However, it has several critical drawbacks. First of all, its feature size was relatively large. Typically, in a @math detection window, the number of Haar-like features was 160,000 @cite_12 . In addition, it was not able to effectively handle non-frontal faces and faces in the wild.
{ "cite_N": [ "@cite_13", "@cite_12" ], "mid": [ "2164598857", "2137401668" ], "abstract": [ "This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This work is distinguished by three key contributions. The first is the introduction of a new image representation called the \"integral image\" which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features from a larger set and yields extremely efficient classifiers. The third contribution is a method for combining increasingly more complex classifiers in a \"cascade\" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The cascade can be viewed as an object specific focus-of-attention mechanism which unlike previous approaches provides statistical guarantees that discarded regions are unlikely to contain the object of interest. In the domain of face detection the system yields detection rates comparable to the best previous systems. Used in real-time applications, the detector runs at 15 frames per second without resorting to image differencing or skin color detection.", "This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. 
The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; , 1998; Schneiderman and Kanade, 2000; , 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second." ] }
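The Viola-Jones abstracts above hinge on the integral image, which lets any rectangular Haar-like feature be evaluated with four array lookups. A minimal sketch in plain Python (no claim to match the original implementation):

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]              # running sum along the row
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y): O(1) lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]
```

A two-rectangle Haar-like feature is then simply `rect_sum` of one region minus `rect_sum` of the adjacent region, which is what makes cascaded evaluation over 160,000 candidate features tractable.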
1701.08289
2952671661
In this report, we present a new face detection scheme using deep learning and achieve state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art Faster R-CNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pretraining, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance, making it the best model in terms of ROC curves among all the published methods on the FDDB benchmark.
To address the first problem, much effort has been devoted to designing more complex features such as HOG @cite_6 , SIFT, SURF @cite_10 and ACF @cite_23 . For example, in @cite_20 , a new type of feature called NPD was proposed, computed as the ratio of the difference between any two pixel intensity values to the sum of their values. Others aimed to speed up feature selection in a heuristic way @cite_18 @cite_9 . The well-known Dlib C++ Library @cite_22 used an SVM as the classifier in its face detector. Other classifiers, such as random forests, have also been attempted.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_9", "@cite_6", "@cite_23", "@cite_10", "@cite_20" ], "mid": [ "2118373237", "2115252128", "1984525543", "", "2041497292", "2100807570", "" ], "abstract": [ "Training a cascade-based face detector using boosting and Haar features is computationally expensive, often requiring weeks on single CPU machines. The bottleneck is at training and selecting Haar features for a single weak classifier, currently in minutes. Traditional techniques for training a weak classifier usually run in O(NT log N), with N examples (approximately 10,000), and T features (approximately 40,000). We present a method to train a weak classifier in time O(Nd^2 + T), where d is the number of pixels of the probed image sub-window (usually from 350 to 500), by using only the statistics of the weighted input data. Experimental results revealed a significantly reduced training time of a weak classifier to the order of seconds. In particular, this method suffers only a minimal increase in training time with very large increases in the number of Haar features, enjoying a significant gain in accuracy, even with reduced training time.", "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-ml is an open source library, targeted at both engineers and research scientists, which aims to provide a similarly rich environment for developing machine learning software in the C++ language. Towards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS support. It also houses implementations of algorithms for performing inference in Bayesian networks and kernel-based methods for classification, regression, clustering, anomaly detection, and feature ranking.
To enable easy use of these tools, the entire library has been developed with contract programming, which provides complete and precise documentation as well as powerful debugging tools.", "Cascades of boosted ensembles have become popular in the object detection community following their highly successful introduction in the face detector of Viola and Jones. Since then, researchers have sought to improve upon the original approach by incorporating new methods along a variety of axes (e.g. alternative boosting methods, feature sets, etc.). Nevertheless, key decisions about how many hypotheses to include in an ensemble and the appropriate balance of detection and false positive rates in the individual stages are often made by user intervention or by an automatic method that produces unnecessarily slow detectors. We propose a novel method for making these decisions, which exploits the shape of the stage ROC curves in ways that have been previously ignored. The result is a detector that is significantly faster than the one produced by the standard automatic method. When this algorithm is combined with a recycling method for reusing the outputs of early stages in later ones and with a retracing method that inserts new early rejection points in the cascade, the detection speed matches that of the best hand-crafted detector. We also exploit joint distributions over several features in weak learning to improve overall detector accuracy, and explore ways to improve training time by aggressively filtering features.", "", "Face detection has drawn much attention in recent decades since the seminal work by Viola and Jones. While many subsequences have improved the work with more powerful learning algorithms, the feature representation used for face detection still can’t meet the demand for effectively and efficiently handling faces with large appearance variance in the wild. 
To solve this bottleneck, we borrow the concept of channel features to the face detection domain, which extends the image channel to diverse types like gradient magnitude and oriented gradient histograms and therefore encodes rich information in a simple form. We adopt a novel variant called aggregate channel features, make a full exploration of feature design, and discover a multiscale version of features with better performance. To deal with poses of faces in the wild, we propose a multi-view detection approach featuring score re-ranking and detection adjustment. Following the learning pipelines in the Viola-Jones framework, the multi-view face detector using aggregate channel features surpasses current state-of-the-art detectors on the AFW and FDDB test sets, while running at 42 FPS.", "This paper presents a novel learning framework for training boosting cascade based object detector from large scale dataset. The framework is derived from the well-known Viola-Jones (VJ) framework but distinguished by three key differences. First, the proposed framework adopts multi-dimensional SURF features instead of single dimensional Haar features to describe local patches. In this way, the number of used local patches can be reduced from hundreds of thousands to several hundreds. Second, it adopts logistic regression as weak classifier for each local patch instead of decision trees in the VJ framework. Third, we adopt AUC as a single criterion for the convergence test during cascade training rather than the two trade-off criteria (false-positive-rate and hit-rate) in the VJ framework. The benefit is that the false-positive-rate can be adaptive among different cascade stages, and thus yields much faster convergence speed of SURF cascade. Combining these points together, the proposed approach has three good properties. First, the boosting cascade can be trained very efficiently.
Experiments show that the proposed approach can train object detectors from billions of negative samples within one hour even on personal computers. Second, the built detector is comparable to the state-of-the-art algorithm not only on the accuracy but also on the processing speed. Third, the built detector is small in model-size due to short cascade stages.", "" ] }
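The NPD feature mentioned above is described in the text as the ratio of the difference between two pixel intensities to their sum. A direct transcription of that formula (the zero-denominator convention is an assumption, not taken from the cited paper):

```python
def npd(x, y):
    """Normalized Pixel Difference: f(x, y) = (x - y) / (x + y).
    Antisymmetric in its arguments, invariant to intensity scaling,
    and bounded in [-1, 1]. Convention assumed here: f(0, 0) = 0."""
    if x + y == 0:
        return 0.0
    return (x - y) / (x + y)
```

The scale invariance is what makes the feature robust to illumination changes: `npd(10, 30)` and `npd(1, 3)` both evaluate to the same value.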
1701.08289
2952671661
In this report, we present a new face detection scheme using deep learning and achieve state-of-the-art detection performance on the well-known FDDB face detection benchmark evaluation. In particular, we improve the state-of-the-art Faster R-CNN framework by combining a number of strategies, including feature concatenation, hard negative mining, multi-scale training, model pretraining, and proper calibration of key parameters. As a consequence, the proposed scheme obtained the state-of-the-art face detection performance, making it the best model in terms of ROC curves among all the published methods on the FDDB benchmark.
Recent years have witnessed advances in face detection using deep learning methods, which often significantly outperform traditional computer vision methods. For example, @cite_15 presented a method for detecting faces in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. Recently, @cite_16 applied Faster R-CNN @cite_3 , one of the state-of-the-art generic object detectors, and achieved promising results. In addition, much work has been done to improve the Faster R-CNN architecture. In @cite_1 , joint training conducted on a CNN cascade, a region proposal network (RPN) and Faster R-CNN achieved end-to-end optimization. @cite_19 combined the Faster R-CNN face detection algorithm with hard negative mining and ResNet, and achieved significant boosts in detection performance on face detection benchmarks such as FDDB. In this work, we propose a new scheme for face detection by improving the Faster R-CNN framework.
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_19", "@cite_15", "@cite_16" ], "mid": [ "2473640056", "2953106684", "2477332545", "2951191545", "2438869444" ], "abstract": [ "Cascades have been widely used in face detection, where classifiers with low computation cost can first be used to reject most of the background while keeping the recall. The cascade in detection was popularized by the seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to the best of our knowledge, most of the previous detection methods use cascades in a greedy manner, where previous stages in the cascade are fixed when training a new stage, so the optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how joint training can be conducted on a naive CNN cascade and on the more sophisticated region proposal network (RPN) and Fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection.
We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Recently significant performance improvement in face detection was made possible by deeply trained convolutional networks. In this report, a novel approach for training state-of-the-art face detector is described. The key is to exploit the idea of hard negative mining and iteratively update the Faster R-CNN based face detector with the hard negatives harvested from a large set of background examples. We demonstrate that our face detector outperforms state-of-the-art detectors on the FDDB dataset, which is the de facto standard for evaluating face detection algorithms.", "This paper presents a method for face detection in the wild, which integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework. The 3D mean face model is predefined and fixed (e.g., we used the one provided in the AFLW dataset). The ConvNet consists of two components: (i) The face proposal component computes face bounding box proposals via estimating facial key-points and the 3D transformation (rotation and translation) parameters for each predicted key-point w.r.t. the 3D mean face model. (ii) The face verification component computes detection results by pruning and refining proposals based on facial key-points based configuration pooling.
The proposed method addresses two issues in adapting state-of-the-art generic object detection ConvNets (e.g., faster R-CNN) for face detection: (i) One is to eliminate the heuristic design of predefined anchor boxes in the region proposals network (RPN) by exploiting a 3D mean face model. (ii) The other is to replace the generic RoI (Region-of-Interest) pooling layer with a configuration pooling layer to respect underlying object structures. The multi-task loss consists of three terms: the classification Softmax loss and the location smooth l1-losses [14] of both the facial key-points and the face bounding boxes. In experiments, our ConvNet is trained on the AFLW dataset only and tested on the FDDB benchmark with fine-tuning and on the AFW benchmark without fine-tuning. The proposed method obtains very competitive state-of-the-art performance in the two benchmarks.", "The Faster R-CNN has recently demonstrated impressive results on various object detection benchmarks. By training a Faster R-CNN model on the large scale WIDER face dataset, we report state-of-the-art results on two widely used face detection benchmarks, FDDB and the recently released IJB-A." ] }
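Several of the detectors in the record above rely on hard negative mining: the highest-scoring false positives from a pool of background examples are harvested and re-added to the training set. A schematic sketch of the selection step only (function name and interface are assumptions for illustration, not the papers' actual pipeline):

```python
def mine_hard_negatives(scores, labels, top_k):
    """Return the indices of the top_k background examples that the
    current detector scores highest, i.e. its most confident false
    positives, to be re-added to the training set.
    scores: detector confidences; labels: 1 = face, 0 = background."""
    negatives = [(s, i) for i, (s, l) in enumerate(zip(scores, labels)) if l == 0]
    negatives.sort(reverse=True)  # hardest (highest-scoring) negatives first
    return [i for _, i in negatives[:top_k]]
```

In an iterative scheme, the detector is retrained on the augmented set and mining is repeated, so each round focuses training on the backgrounds the model currently confuses with faces.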
1701.08006
2951363811
Naturalness of warping is gaining extensive attention in image stitching. Recent warps such as SPHP, AANAP and GSP use a global similarity to effectively mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistency). In this paper, we propose a quasi-homography warp, which balances perspective distortion against projective distortion in the non-overlapping region, to create natural-looking mosaics. Our approach formulates the warp as a solution of a system of bivariate equations, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization respectively. Our proposed warp relies only on a global homography and is thus totally parameter-free. A comprehensive experiment shows that quasi-homography outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that quasi-homography wins most users' favor as well, compared to homography and SPHP.
Other methods combine image alignment with seam-cutting approaches @cite_9 @cite_31 @cite_27 @cite_12 , to find a locally registered area which is seamlessly blended instead of aligning the overlapping region globally. Gao @cite_34 proposed a seam-driven framework, which searches a homography with minimal seam costs instead of minimal alignment errors on a set of feature correspondences. Zhang and Liu @cite_41 proposed a parallax-tolerant warp, which combines homography and content-preserving warps to locally register images. Lin @cite_24 proposed a seam-guided local alignment warp, which iteratively improves the warp by adaptive feature weighting according to the distance to current seams.
{ "cite_N": [ "@cite_41", "@cite_9", "@cite_24", "@cite_27", "@cite_31", "@cite_34", "@cite_12" ], "mid": [ "", "2143516773", "2518764509", "2077786999", "2001933992", "", "2159570345" ], "abstract": [ "", "Many tasks in computer vision involve assigning a label (such as disparity) to every pixel. A common constraint is that the labels should vary smoothly almost everywhere while preserving sharp discontinuities that may exist, e.g., at object boundaries. These tasks are naturally stated in terms of energy minimization. The authors consider a wide class of energies with various smoothness constraints. Global minimization of these energy functions is NP-hard even in the simplest discontinuity-preserving case. Therefore, our focus is on efficient approximation algorithms. We present two algorithms based on graph cuts that efficiently find a local minimum with respect to two types of large moves, namely expansion moves and swap moves. These moves can simultaneously change the labels of arbitrarily large sets of pixels. In contrast, many standard algorithms (including simulated annealing) use small moves where only one pixel changes its label at a time. Our expansion algorithm finds a labeling within a known factor of the global minimum, while our swap algorithm handles more general energy functions. Both of these algorithms allow important cases of discontinuity preserving energies. We experimentally demonstrate the effectiveness of our approach for image restoration, stereo and motion. On real data with ground truth, we achieve 98 percent accuracy.", "Image stitching with large parallax is a challenging problem. Global alignment usually introduces noticeable artifacts. A common strategy is to perform partial alignment to facilitate the search for a good seam for stitching. 
Different from existing approaches where the seam estimation process is performed sequentially after alignment, we explicitly use the estimated seam to guide the process of optimizing local alignment so that the seam quality gets improved over each iteration. Furthermore, a novel structure-preserving warping method is introduced to preserve salient curve and line structures during the warping. These measures substantially improve the effectiveness of our method in dealing with a wide range of challenging images with large parallax.", "In this paper we introduce a new algorithm for image and video texture synthesis. In our approach, patch regions from a sample image or video are transformed and copied to the output and then stitched together along optimal seams to generate a new (and typically larger) output. In contrast to other techniques, the size of the patch is not chosen a-priori, but instead a graph cut technique is used to determine the optimal patch region for any given offset between the input and output texture. Unlike dynamic programming, our graph cut technique for seam optimization is applicable in any dimension. We specifically explore it in 2D and 3D to perform video texture synthesis in addition to regular image synthesis. We present approximative offset search techniques that work well in conjunction with the presented patch size optimization. We show results for synthesizing regular, random, and natural images and videos. 
We also demonstrate how this method can be used to interactively merge different images to generate new scenes.", "We describe an interactive, computer-assisted framework for combining parts of a set of photographs into a single composite picture, a process we call \"digital photomontage.\" Our framework makes use of two techniques primarily: graph-cut optimization, to choose good seams within the constituent images so that they can be combined as seamlessly as possible; and gradient-domain fusion, a process based on Poisson equations, to further reduce any remaining visible artifacts in the composite. Also central to the framework is a suite of interactive tools that allow the user to specify a variety of high-level image objectives, either globally across the image, or locally through a painting-style interface. Image objectives are applied independently at each pixel location and generally involve a function of the pixel values (such as \"maximum contrast\") drawn from that same location in the set of source images. Typically, a user applies a series of image objectives iteratively in order to create a finished composite. The power of this framework lies in its generality; we show how it can be used for a wide variety of applications, including \"selective composites\" (for instance, group photos in which everyone looks their best), relighting, extended depth of field, panoramic stitching, clean-plate production, stroboscopic visualization of movement, and time-lapse mosaics.", "", "This paper presents a technique to automatically stitch multiple images at varying orientations and exposures to create a composite panorama that preserves the angular extent and dynamic range of the inputs. The main contribution of our method is that it allows for large exposure differences, large scene motion or other misregistrations between frames and requires no extra camera hardware. To do this, we introduce a two-step graph cut approach. 
The purpose of the first step is to fix the positions of moving objects in the scene. In the second step, we fill in the entire available dynamic range. We introduce data costs that encourage consistency and higher signal-to-noise ratios, and seam costs that encourage smooth transitions. Our method is simple to implement and effective. We demonstrate the effectiveness of our approach on several input sets with varying exposures and camera orientations." ] }
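The abstracts in the record above search for optimal seams via graph cuts. A simpler dynamic-programming formulation of the same idea, minimizing a seam cost over a 2-D cost map where each seam step moves to one of three neighbours below, is sketched here (this is the seam-carving-style DP, a related but deliberately simpler technique than the graph-cut methods the papers use):

```python
def min_vertical_seam(cost):
    """Minimum-cost top-to-bottom seam through a 2-D cost map, via DP.
    Returns one column index per row."""
    h, w = len(cost), len(cost[0])
    dp = [cost[0][:]]  # dp[y][x] = cheapest seam cost ending at (y, x)
    for y in range(1, h):
        prev = dp[-1]
        row = []
        for x in range(w):
            # cheapest of the up-left, up, up-right predecessors
            best = min(prev[max(x - 1, 0):min(x + 2, w)])
            row.append(cost[y][x] + best)
        dp.append(row)
    # backtrack from the cheapest bottom cell
    x = min(range(w), key=lambda i: dp[-1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo = max(x - 1, 0)
        x = min(range(lo, min(x + 2, w)), key=lambda i: dp[y - 1][i])
        seam.append(x)
    return seam[::-1]
```

Graph cuts generalize this to arbitrary seam topologies and higher dimensions, which is why the papers above prefer them for photomontage and video texture synthesis.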
1701.08006
2951363811
Naturalness of warping is gaining extensive attention in image stitching. Recent warps such as SPHP, AANAP and GSP use a global similarity to effectively mitigate projective distortion (which enlarges regions); however, they necessarily bring in perspective distortion (which generates inconsistency). In this paper, we propose a quasi-homography warp, which balances perspective distortion against projective distortion in the non-overlapping region, to create natural-looking mosaics. Our approach formulates the warp as a solution of a system of bivariate equations, where perspective distortion and projective distortion are characterized as slope preservation and scale linearization respectively. Our proposed warp relies only on a global homography and is thus totally parameter-free. A comprehensive experiment shows that quasi-homography outperforms some state-of-the-art warps in urban scenes, including homography, AutoStitch and SPHP. A user study demonstrates that quasi-homography wins most users' favor as well, compared to homography and SPHP.
Many efforts have been devoted to mitigating distortion in the non-overlapping region to create a natural-looking mosaic. A pioneering work @cite_28 uses spherical or cylindrical warps to produce multi-perspective results that address this problem, but it necessarily curves straight lines.
{ "cite_N": [ "@cite_28" ], "mid": [ "2126060993" ], "abstract": [ "This paper concerns the problem of fully automated panoramic image stitching. Though the 1D problem (single axis of rotation) is well studied, 2D or multi-row stitching is more difficult. Previous approaches have used human input or restrictions on the image sequence in order to establish matching images. In this work, we formulate stitching as a multi-image matching problem, and use invariant local features to find matches between all of the images. Because of this our method is insensitive to the ordering, orientation, scale and illumination of the input images. It is also insensitive to noise images that are not part of a panorama, and can recognise multiple panoramas in an unordered image dataset. In addition to providing more detail, this paper extends our previous work in the area (Brown and Lowe, 2003) by introducing gain compensation and automatic straightening steps." ] }
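The cylindrical warp mentioned above maps image coordinates onto a cylinder whose radius equals the focal length. A standard formulation is sketched below (the focal length value and function interface are illustrative assumptions, not taken from the cited work):

```python
import math

def cylindrical_warp(x, y, f, cx=0.0, cy=0.0):
    """Project pixel (x, y), measured relative to the principal point
    (cx, cy), onto a cylinder of radius f:
        x' = f * atan(dx / f)
        y' = f * dy / sqrt(dx^2 + f^2)
    """
    dx, dy = x - cx, y - cy
    xp = f * math.atan2(dx, f)
    yp = f * dy / math.hypot(dx, f)
    return xp + cx, yp + cy
```

Points away from the image centre are pulled inward (|x'| < |x|), so straight lines that do not pass through the centre become curved in the mosaic, which is exactly the drawback the quasi-homography warp discussed above tries to avoid.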
1701.07842
2952359514
Event-driven programming frameworks, such as Android, are based on components with asynchronous interfaces. The protocols for interacting with these components can often be described by finite-state machines we dub *callback typestates*. Callback typestates are akin to classical typestates, with the difference that their outputs (callbacks) are produced asynchronously. While useful, these specifications are not commonly available, because writing them is difficult and error-prone. Our goal is to make the task of producing callback typestates significantly easier. We present a callback typestate assistant tool, DroidStar, that requires only limited user interaction to produce a callback typestate. Our approach is based on an active learning algorithm, L*. We improved the scalability of equivalence queries (a key component of L*), thus making active learning tractable on the Android system. We use DroidStar to learn callback typestates for Android classes both for cases where one is already provided by the documentation, and for cases where the documentation is unclear. The results show that DroidStar learns callback typestates accurately and efficiently. Moreover, in several cases, the synthesized callback typestates uncovered surprising and undocumented behaviors.
Inferring interfaces from execution traces of client programs that use the framework is another common approach @cite_15 @cite_16 @cite_4 @cite_12 @cite_24 @cite_10 @cite_27 @cite_8 . In contrast to such dynamic mining, we do not rely on the availability of client applications or a set of execution traces; the learning algorithm itself drives the testing.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_24", "@cite_27", "@cite_15", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2166064937", "", "2093938715", "2131954495", "", "2121059325", "", "2074001172" ], "abstract": [ "A software system interacts with its environment through interfaces. Improper handling of exceptional returns from system interfaces can cause robustness problems. Robustness of software systems is governed by various temporal properties related to interfaces. Static verification has been shown to be effective in checking these temporal properties. But manually specifying these properties is cumbersome and requires the knowledge of interface specifications, which are often either unavailable or undocumented. In this paper, we propose a novel framework to automatically infer system-specific interface specifications from program source code. We use a model checker to generate traces related to the interfaces. From these model checking traces, we infer interface specification details such as return value on success or failure. Based on these inferred specifications, we translate generically specified interface robustness rules to concrete robustness properties verifiable by static checking. Hence the generic rules can be specified at an abstract level that needs no knowledge of the source code, system, or interfaces. We implement our framework for an existing static analyzer that employs push down model checking and apply the analyzer to the well known POSIX-API system interfaces. We found 28 robustness violations in 10 open source packages using our framework.", "", "Program specifications are important for many tasks during software design, development, and maintenance. Among these, temporal specifications are particularly useful. They express formal correctness requirements of an application's ordering of specific actions and events during execution, such as the strict alternation of acquisition and release of locks.
Despite their importance, temporal specifications are often missing, incomplete, or described only informally. Many techniques have been proposed that mine such specifications from execution traces or program source code. However, existing techniques mine only simple patterns, or they mine a single complex pattern that is restricted to a particular set of manually selected events. There is no practical, automatic technique that can mine general temporal properties from execution traces. In this paper, we present Javert, the first general specification mining framework that can learn, fully automatically, complex temporal properties from execution traces. The key insight behind Javert is that real, complex specifications can be formed by composing instances of small generic patterns, such as the alternating pattern (ab)* and the resource usage pattern (ab*c)*. In particular, Javert learns simple generic patterns and composes them using sound rules to construct large, complex specifications. We have implemented the algorithm in a practical tool and conducted an extensive empirical evaluation on several open source software projects. Our results are promising; they show that Javert is scalable, general, and precise. It discovered many interesting, nontrivial specifications in real-world code that are beyond the reach of existing automatic techniques.", "Dynamic inference techniques have been demonstrated to provide useful support for various software engineering tasks including bug finding, test suite evaluation and improvement, and specification generation. To date, however, dynamic inference has only been used effectively on small programs under controlled conditions. In this paper, we identify reasons why scaling dynamic inference techniques has proven difficult, and introduce solutions that enable a dynamic inference technique to scale to large programs and work effectively with the imperfect traces typically available in industrial scenarios.
We describe our approximate inference algorithm, present and evaluate heuristics for winnowing the large number of inferred properties to a manageable set of interesting properties, and report on experiments using inferred properties. We evaluate our techniques on JBoss and the Windows kernel. Our tool is able to infer many of the properties checked by the Static Driver Verifier and leads us to discover a previously unknown bug in Windows.", "", "Component-based software design is a popular and effective approach to designing large systems. While components typically have well-defined interfaces, sequencing information---which calls must come in which order---is often not formally specified. This paper proposes using multiple finite state machine (FSM) submodels to model the interface of a class. A submodel includes a subset of methods that, for example, implement a Java interface, or access some particular field. Each state-modifying method is represented as a state in the FSM, and transitions of the FSMs represent allowable pairs of consecutive methods. In addition, state-preserving methods are constrained to execute only under certain states. We have designed and implemented a system that includes static analyses to deduce illegal call sequences in a program, dynamic instrumentation techniques to extract models from execution runs, and a dynamic model checker that ensures that the code conforms to the model. Extracted models can serve as documentation; they can serve as constraints to be enforced by a static checker; they can be studied directly by developers to determine if the program is exhibiting unexpected behavior; or they can be used to determine the completeness of a test suite. Our system has been run on several large code bases, including the joeq virtual machine, the basic Java libraries, and the Java 2 Enterprise Edition library code.
Our experience suggests that this approach yields useful information.", "", "To learn what constitutes correct program behavior, one can start with normal behavior. We observe actual program executions to construct state machines that summarize object behavior. These state machines, called object behavior models, capture the relationships between two kinds of methods: mutators that change the state (such as add()) and inspectors that keep the state unchanged (such as isEmpty()): \"A Vector object initially is in isEmpty() state; after add(), it goes into ¬isEmpty() state\". Our ADABU prototype for JAVA has successfully mined models of undocumented behavior from the AspectJ compiler and the Columba email client; the models tend to be small and easily understandable." ] }
1701.07842
2952359514
Event-driven programming frameworks, such as Android, are based on components with asynchronous interfaces. The protocols for interacting with these components can often be described by finite-state machines we dub *callback typestates*. Callback typestates are akin to classical typestates, with the difference that their outputs (callbacks) are produced asynchronously. While useful, these specifications are not commonly available, because writing them is difficult and error-prone. Our goal is to make the task of producing callback typestates significantly easier. We present a callback typestate assistant tool, DroidStar, that requires only limited user interaction to produce a callback typestate. Our approach is based on an active learning algorithm, L*. We improved the scalability of equivalence queries (a key component of L*), thus making active learning tractable on the Android system. We use DroidStar to learn callback typestates for Android classes both for cases where one is already provided by the documentation, and for cases where the documentation is unclear. The results show that DroidStar learns callback typestates accurately and efficiently. Moreover, in several cases, the synthesized callback typestates uncovered surprising and undocumented behaviors.
Our work builds on the seminal paper of Angluin @cite_32 and the subsequent extensions and optimizations. In particular, we build on its extensions for I/O automata @cite_29 @cite_38 . The optimizations we use include the counterexample suffix analysis from @cite_18 and the optimizations for prefix-closed languages from @cite_25 . The relation to conformance testing methods @cite_20 @cite_7 @cite_30 @cite_37 @cite_11 has been discussed in .
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_37", "@cite_7", "@cite_29", "@cite_32", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2147573597", "1529010373", "", "2140047593", "", "1840142437", "1989445634", "", "2011762419", "2029755436" ], "abstract": [ "This paper describes the design of experimental procedures for determining whether or not a sequential switching circuit is operating in accordance with a given state-table description. These procedures are particularly easy to apply when the given state table is reduced, strongly-connected, and has a distinguishing sequence, and when the actual circuit has no more states than the given table. They can also be extended to cover more general cases, although the resulting experiments are more cumbersome.", "Automata learning techniques are getting significant importance for their applications in a wide variety of software engineering problems, especially in the analysis and testing of complex systems. In recent studies, a previous learning approach [1] has been extended to synthesize Mealy machine models which are specifically tailored for I/O based systems. In this paper, we discuss the inference of Mealy machines and propose improvements that reduce the worst-time learning complexity of the existing algorithm. The gain over the complexity of the proposed algorithm has also been confirmed by experimentation on a large set of finite state machines.", "", "A methodical procedure for organization of fault detection experiments for synchronous sequential machines possessing distinguishing sequences (DS) is given. The organization is based on the transition checking approach.
The checking experiment is considered in three concatenative parts: 1) the initial sequence which brings the machine under test into a specific state, 2) the α-sequence to recognize all the states and to establish the information about the next states under the input DS, and 3) the β-sequence to check all the individual transitions in the state table.", "", "Links are established between three widely used modeling frameworks for reactive systems: the ioco theory of Tretmans, the interface automata of De Alfaro and Henzinger, and Mealy machines. It is shown that, by exploiting these links, any tool for active learning of Mealy machines can be used for learning I/O automata that are deterministic and output determined. The main idea is to place a transducer in between the I/O automata teacher and the Mealy machine learner, which translates concepts from the world of I/O automata to the world of Mealy machines, and vice versa. The transducer comes equipped with an interface automaton that allows us to focus the learning process on those parts of the behavior that can effectively be tested and/or are of particular interest. The approach has been implemented on top of the LearnLib tool and has been applied successfully to three case studies.", "The problem of identifying an unknown regular set from examples of its members and nonmembers is addressed. It is assumed that the regular set is presented by a minimally adequate Teacher, which can answer membership queries about the set and can also test a conjecture and indicate whether it is equal to the unknown set and provide a counterexample if not. (A counterexample is a string in the symmetric difference of the correct set and the conjectured set.) A learning algorithm L* is described that correctly learns any regular set from any minimally adequate Teacher in time polynomial in the number of states of the minimum dfa for the set and the maximum length of any counterexample provided by the Teacher.
It is shown that in a stochastic setting the ability of the Teacher to test conjectures may be replaced by a random sampling oracle, EX( ). A polynomial-time learning algorithm is shown for a particular problem of context-free language identification.", "", "We propose a method of testing the correctness of control structures that can be modeled by a finite-state machine. Test results derived from the design are evaluated against the specification. No \"executable\" prototype is required. The method is based on a result in automata theory and can be applied to software testing. Its error-detecting capability is compared with that of other approaches. Application experience is summarized.", "A novel procedure presented here generates test sequences for checking the conformity of protocol implementations to their specifications. The test sequences generated by this procedure only detect the presence of many faults, but they do not locate the faults. It can always detect the problem in an implementation with a single fault. A protocol entity is specified as a finite state machine (FSM). It typically has two interfaces: an interface with the user and with the lower-layer protocol. The inputs from both interfaces are merged into a single set I and the outputs from both interfaces are merged into a single set O. The implementation is assumed to be a black box. The key idea in this procedure is to tour all states and state transitions and to check a unique signature for each state, called the Unique Input Output (UIO) sequence. A UIO sequence for a state is an I O behavior that is not exhibited by any other state." ] }
1701.08096
2624526235
Discovering the key structure of a database is one of the main goals of data mining. In pattern set mining we do so by discovering a small set of patterns that together describe the data well. The richer the class of patterns we consider, and the more powerful our description language, the better we will be able to summarise the data. In this paper we propose , a novel greedy MDL-based method for summarising sequential data using rich patterns that are allowed to interleave. Experiments show is orders of magnitude faster than the state of the art, results in better models, as well as discovers meaningful semantics in the form of patterns that identify multiple choices of values.
Discovering sequential patterns is an active research topic. Traditionally the focus was on mining frequent sequential patterns, with different definitions of how to count occurrences @cite_22 @cite_8 @cite_11 . Mining general patterns, i.e., patterns where the order of events is specified by a DAG, is surprisingly hard. Even testing whether a sequence contains a pattern is NP-complete @cite_19 . Consequently, research has focused on mining subclasses of episodes, such as episodes with unique labels @cite_16 @cite_12 , strict episodes @cite_21 , and injective episodes @cite_16 .
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_21", "@cite_19", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "", "2122182354", "2036514379", "2001456586", "2146334655", "2142103633", "" ], "abstract": [ "", "Previous studies have presented convincing arguments that a frequent pattern mining algorithm should not mine all frequent patterns but only the closed ones because the latter leads to not only more compact yet complete result set but also better efficiency. However, most of the previously developed closed pattern mining algorithms work under the candidate maintenance-and-test paradigm which is inherently costly in both runtime and space usage when the support threshold is low or the patterns become long. We present BIDE, an efficient algorithm for mining frequent closed sequences without candidate maintenance. We adopt a novel sequence closure checking scheme called bidirectional extension, and prune the search space more deeply compared to the previous algorithms by using the BackScan pruning method and the Scan-Skip optimization technique. A thorough performance study with both sparse and dense real-life data sets has demonstrated that BIDE significantly outperforms the previous algorithms: it consumes order(s) of magnitude less memory and can be more than an order of magnitude faster. It is also linearly scalable in terms of database size.", "Discovering patterns in a sequence is an important aspect of data mining. One popular choice of such patterns are episodes, patterns in sequential data describing events that often occur in the vicinity of each other. Episodes also enforce in which order the events are allowed to occur. In this work we introduce a technique for discovering closed episodes. Adopting existing approaches for discovering traditional patterns, such as closed itemsets, to episodes is not straightforward. First of all, we cannot define a unique closure based on frequency because an episode may have several closed superepisodes.
Moreover, to define a closedness concept for episodes we need a subset relationship between episodes, which is not trivial to define. We approach these problems by introducing strict episodes. We argue that this class is general enough, and at the same time we are able to define a natural subset relationship within it and use it efficiently. In order to mine closed episodes we define an auxiliary closure operator. We show that this closure satisfies the needed properties so that we can use the existing framework for mining closed patterns. Discovering the true closed episodes can be done as a post-processing step. We combine these observations into an efficient mining algorithm and demonstrate empirically its performance in practice.", "Sequential pattern discovery is a well-studied field in data mining. Episodes are sequential patterns describing events that often occur in the vicinity of each other. Episodes can impose restrictions to the order of the events, which makes them a versatile technique for describing complex patterns in the sequence. Most of the research on episodes deals with special cases such as serial, parallel, and injective episodes, while discovering general episodes is understudied. In this paper we extend the definition of an episode in order to be able to represent cases where events often occur simultaneously. We present an efficient and novel miner for discovering frequent and closed general episodes. Such a task presents unique challenges. Firstly, we cannot define closure based on frequency. We solve this by computing a more conservative closure that we use to reduce the search space and discover the closed episodes as a postprocessing step. Secondly, episodes are traditionally presented as directed acyclic graphs. We argue that this representation has drawbacks leading to redundancy in the output. We solve these drawbacks by defining a subset relationship in such a way that allows us to remove the redundant episodes. 
We demonstrate the efficiency of our algorithm and the need for using closed episodes empirically on synthetic and real-world datasets.", "Frequent episode discovery is a popular framework for temporal pattern discovery in event streams. An episode is a partially ordered set of nodes with each node associated with an event type. Currently algorithms exist for episode discovery only when the associated partial order is total order (serial episode) or trivial (parallel episode). In this paper, we propose efficient algorithms for discovering frequent episodes with unrestricted partial orders when the associated event-types are unique. These algorithms can be easily specialized to discover only serial or parallel episodes. Also, the algorithms are flexible enough to be specialized for mining in the space of certain interesting subclasses of partial orders. We point out that frequency alone is not a sufficient measure of interestingness in the context of partial order mining. We propose a new interestingness measure for episodes with unrestricted partial orders which, when used along with frequency, results in an efficient scheme of data mining. Simulations are presented to demonstrate the effectiveness of our algorithms.", "Mining knowledge about ordering from sequence data is an important problem with many applications, such as bioinformatics, Web mining, network management, and intrusion detection. For example, if many customers follow a partial order in their purchases of a series of products, the partial order can be used to predict other related customers' future purchases and develop marketing campaigns. Moreover, some biological sequences (e.g., microarray data) can be clustered based on the partial orders shared by the sequences. Given a set of items, a total order of a subset of items can be represented as a string. A string database is a multiset of strings. In this paper, we identify a novel problem of mining frequent closed partial orders from strings. 
Frequent closed partial orders capture the nonredundant and interesting ordering information from string databases. Importantly, mining frequent closed partial orders can discover meaningful knowledge that cannot be disclosed by previous data mining techniques. However, the problem of mining frequent closed partial orders is challenging. To tackle the problem, we develop Frecpo (for frequent closed partial order), a practically efficient algorithm for mining the complete set of frequent closed partial orders from large string databases. Several interesting pruning techniques are devised to speed up the search. We report an extensive performance study on both real data sets and synthetic data sets to illustrate the effectiveness and the efficiency of our approach", "" ] }
1701.08096
2624526235
Discovering the key structure of a database is one of the main goals of data mining. In pattern set mining we do so by discovering a small set of patterns that together describe the data well. The richer the class of patterns we consider, and the more powerful our description language, the better we will be able to summarise the data. In this paper we propose , a novel greedy MDL-based method for summarising sequential data using rich patterns that are allowed to interleave. Experiments show is orders of magnitude faster than the state of the art, results in better models, as well as discovers meaningful semantics in the form of patterns that identify multiple choices of values.
Recently, Fowkes and Sutton proposed the @math algorithm @cite_14 . It is based on a generative probabilistic model of the sequence database, and uses EM to search for the set of patterns that is most likely to generate the database. It does not explicitly consider model complexity. Like our method, it can handle interleaving and nesting of sequences. We compare against it empirically in the experiments.
{ "cite_N": [ "@cite_14" ], "mid": [ "2274829541" ], "abstract": [ "Recent sequential pattern mining methods have used the minimum description length (MDL) principle to define an encoding scheme which describes an algorithm for mining the most compressing patterns in a database. We present a novel subsequence interleaving model based on a probabilistic model of the sequence database, which allows us to search for the most compressing set of patterns without designing a specific encoding scheme. Our proposed algorithm is able to efficiently mine the most relevant sequential patterns and rank them using an associated measure of interestingness. The efficient inference in our model is a direct result of our use of a structural expectation-maximization framework, in which the expectation-step takes the form of a submodular optimization problem subject to a coverage constraint. We show on both synthetic and real world datasets that our model mines a set of sequential patterns with low spuriousness and redundancy, high interpretability and usefulness in real-world applications. Furthermore, we demonstrate that the quality of the patterns from our approach is comparable to, if not better than, existing state of the art sequential pattern mining algorithms." ] }
1701.07993
2951957138
Virtual Network Functions as a Service (VNFaaS) is currently under attentive study by telecommunications and cloud stakeholders as a promising business and technical direction consisting of providing network functions as a service on a cloud (NFV Infrastructure), instead of delivering standalone network appliances, in order to provide higher scalability and reduce maintenance costs. However, the functioning of the NFVI hosting the VNFs is fundamental for all the services and applications running on top of it, making it necessary to guarantee a high availability level. Indeed, the availability of a VNFaaS relies on the failure rate of its single components, namely the servers, the virtualization software, and the communication network. The proper assignment of the virtual machines implementing network functions to NFVI servers and their protection is essential to guarantee high availability. We model the High Availability Virtual Network Function Placement (HA-VNFP) as the problem of finding the best assignment of virtual machines to servers guaranteeing protection by replication. We propose a probabilistic approach to measure the real availability of a system and design both efficient and effective algorithms that can be used by stakeholders for both online and offline planning.
Although VM and VNF resource placement in cloud systems is a recent area of research (see @cite_29 for a comprehensive high-level study), there already exist orchestrators driven by optimization algorithms for placement, such as @cite_0 . We now present a few works in the literature studying the optimization problems that arise in this context.
{ "cite_N": [ "@cite_0", "@cite_29" ], "mid": [ "2474498303", "2034603054" ], "abstract": [ "Network Functions Virtualization is focused on migrating traditional hardware-based network functions to software-based appliances running on standard high volume servers. There are a variety of challenges facing early adopters of Network Function Virtualization; key among them are resource and service mapping, to support virtual network function orchestration. Service providers need efficient and effective mapping capabilities to optimally deploy network services. This paper describes TeNOR, a micro-service based network function virtualisation orchestrator capable of effectively addressing resource and network service mapping. The functional architecture and data models of TeNOR are described, as well as two proposed approaches to address the resource mapping problem. Key evaluation results are discussed and an assessment of the mapping approaches is performed in terms of the service acceptance ratio and scalability of the proposed approaches.", "Resource management in a cloud environment is a hard problem, due to: the scale of modern data centers; the heterogeneity of resource types and their interdependencies; the variability and unpredictability of the load; as well as the range of objectives of the different actors in a cloud ecosystem. Consequently, both academia and industry began significant research efforts in this area. In this paper, we survey the recent literature, covering 250+ publications, and highlighting key results. We outline a conceptual framework for cloud resource management and use it to structure the state-of-the-art review. Based on our analysis, we identify five challenges for future investigation.
These relate to: providing predictable performance for cloud-hosted applications; achieving global manageability for cloud systems; engineering scalable resource management systems; understanding economic behavior and cloud pricing; and developing solutions for the mobile cloud paradigm." ] }
1701.07993
2951957138
Virtual Network Functions as a Service (VNFaaS) is currently under attentive study by telecommunications and cloud stakeholders as a promising business and technical direction consisting of providing network functions as a service on a cloud (NFV Infrastructure), instead of delivering standalone network appliances, in order to provide higher scalability and reduce maintenance costs. However, the functioning of the NFVI hosting the VNFs is fundamental for all the services and applications running on top of it, making it necessary to guarantee a high availability level. Indeed, the availability of a VNFaaS relies on the failure rate of its single components, namely the servers, the virtualization software, and the communication network. The proper assignment of the virtual machines implementing network functions to NFVI servers and their protection is essential to guarantee high availability. We model the High Availability Virtual Network Function Placement (HA-VNFP) as the problem of finding the best assignment of virtual machines to servers guaranteeing protection by replication. We propose a probabilistic approach to measure the real availability of a system and design both efficient and effective algorithms that can be used by stakeholders for both online and offline planning.
@cite_18 studies the problem of placing VMs in datacenters minimizing the average latency of VM-to-VM communications. Such a problem is @math -hard and falls into the category of . The authors provide a polynomial time heuristic algorithm solving the problem in a fashion. In @cite_20 the authors deal with the problem of placing VMs in geo-distributed clouds minimizing the inter-VM communication delays. They decompose the problem into subproblems that they solve heuristically. They also prove that, under certain conditions, one of the subproblems can be solved to optimality in polynomial time. @cite_1 studies the VM placement problem minimizing the maximum ratio of the demand and the capacity across all cuts in the network, in order to absorb unpredictable traffic bursts. The authors provide two different heuristics to solve the problem in reasonable computing time.
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_20" ], "mid": [ "", "2131400480", "1974099360" ], "abstract": [ "", "Virtual Machine (VM) placement has to carefully consider the aggregated resource consumption of co-located VMs in order to obey service level agreements at lower possible cost. In this paper, we focus on satisfying the traffic demands of the VMs in addition to CPU and memory requirements. This is a much more complex problem both due to its quadratic nature (being the communication between a pair of VMs) and since it involves many factors beyond the physical host, like the network topologies and the routing scheme. Moreover, traffic patterns may vary over time and predicting the resulting effect on the actual available bandwidth between hosts within the data center is extremely difficult. We address this problem by trying to allocate a placement that not only satisfies the predicted communication demand but is also resilient to demand time-variations. This gives rise to a new optimization problem that we call the Min Cut Ratio-aware VM Placement (MCRVMP). The general MCRVMP problem is NP-Hard, hence, we introduce several heuristics to solve it in reasonable time. We present extensive experimental results, associated with both placement computation and run-time performance under time-varying traffic demands, to show that our heuristics provide good results (compared to the optimal solution) for medium size data centers.", "We consider resource allocation algorithms for distributed cloud systems, which deploy cloud-computing resources that are geographically distributed over a large number of locations in a wide-area network. This distribution of cloud-computing resources over many locations in the network may be done for several reasons, such as to locate resources closer to users, to reduce bandwidth costs, to increase availability, etc. 
To get the maximum benefit from a distributed cloud system, we need efficient algorithms for resource allocation which minimize communication costs and latency. In this paper, we develop efficient resource allocation algorithms for use in distributed clouds. Our contributions are as follows: Assuming that users specify their resource needs, such as the number of virtual machines needed for a large computational task, we develop an efficient 2-approximation algorithm for the optimal selection of data centers in the distributed cloud. Our objective is to minimize the maximum distance, or latency, between the selected data centers. Next, we consider use of a similar algorithm to select, within each data center, the racks and servers where the requested virtual machines for the task will be located. Since the network inside a data center is structured and typically a tree, we make use of this structure to develop an optimal algorithm for rack and server selection. Finally, we develop a heuristic for partitioning the requested resources for the task amongst the chosen data centers and racks. We use simulations to evaluate the performance of our algorithms over example distributed cloud systems and find that our algorithms provide significant gains over other simpler allocation algorithms." ] }
1701.07993
2951957138
Virtual Network Functions as a Service (VNFaaS) is currently under attentive study by telecommunications and cloud stakeholders as a promising business and technical direction consisting of providing network functions as a service on a cloud (NFV Infrastructure), instead of delivering standalone network appliances, in order to provide higher scalability and reduce maintenance costs. However, the functioning of such an NFVI hosting the VNFs is fundamental for all the services and applications running on top of it, making it necessary to guarantee a high availability level. Indeed the availability of a VNFaaS relies on the failure rate of its single components, namely the servers, the virtualization software, and the communication network. The proper assignment of the virtual machines implementing network functions to NFVI servers and their protection is essential to guarantee high availability. We model the High Availability Virtual Network Function Placement (HA-VNFP) as the problem of finding the best assignment of virtual machines to servers guaranteeing protection by replication. We propose a probabilistic approach to measure the real availability of a system and design both efficient and effective algorithms that can be used by stakeholders for both online and offline planning.
@cite_13 applies NFV to LTE mobile core gateways, proposing the problem of placing VNFs in datacenters so as to satisfy all client requests and latency constraints while minimizing the overall network load. Instead, in @cite_2 the objective is to minimize the total system cost, comprising the setup and link costs. @cite_9 introduces the VNF orchestration problem of placing VNFs and routing client requests through a chain of VNFs; the authors minimize the setup costs while satisfying all client demands, and propose both an ILP and a heuristic to solve the problem. @cite_10 also considers the VNF orchestration problem, with VNF switching, piece-wise linear latency functions, and bit-rate compression and decompression operations; two different objective functions are studied, one minimizing costs and one balancing the network usage.
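As a toy illustration of the trade-off these placement works optimize (a setup cost for each node that opens a function versus each client's distance to its nearest open function), here is a brute-force sketch on a made-up two-node instance. It is not any cited paper's algorithm; all names, distances, and costs are invented, and enumeration is only feasible for tiny instances.

```python
from itertools import combinations

def best_placement(nodes, clients, dist, setup_cost):
    """Enumerate every subset of nodes that could host the VNF and pick
    the one minimizing total setup cost plus each client's distance to
    its nearest open function (tiny toy instances only)."""
    best = (float("inf"), None)
    for r in range(1, len(nodes) + 1):
        for open_nodes in combinations(nodes, r):
            cost = setup_cost * r + sum(
                min(dist[c][n] for n in open_nodes) for c in clients)
            best = min(best, (cost, open_nodes))
    return best

# Invented instance: two candidate nodes, two clients, symmetric distances.
nodes = ["a", "b"]
clients = ["c1", "c2"]
dist = {"c1": {"a": 1, "b": 5}, "c2": {"a": 5, "b": 1}}
cost, placement = best_placement(nodes, clients, dist, setup_cost=3)
print(cost, placement)  # opening both nodes (cost 8) beats either alone (cost 9)
```

With setup cost 3, opening both nodes costs 6 + 1 + 1 = 8, while a single node costs 3 + 1 + 5 = 9, so the brute force picks the two-node placement.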
{ "cite_N": [ "@cite_10", "@cite_9", "@cite_13", "@cite_2" ], "mid": [ "2138314086", "2105685892", "1967912924", "1578960134" ], "abstract": [ "Network Functions Virtualization (NFV) is incrementally deployed by Internet Service Providers (ISPs) in their carrier networks, by means of Virtual Network Function (VNF) chains, to address customers' demands. The motivation is the increasing manageability, reliability and performance of NFV systems, the gains in energy and space granted by virtualization, at a cost that becomes competitive with respect to legacy physical network function nodes. From a network optimization perspective, the routing of VNF chains across a carrier network implies key novelties making the VNF chain routing problem unique with respect to the state of the art: the bitrate of each demand flow can change along a VNF chain, the VNF processing latency and computing load can be a function of the demands traffic, VNFs can be shared among demands, etc. In this paper, we provide an NFV network model suitable for ISP operations. We define the generic VNF chain routing optimization problem and devise a mixed integer linear programming formulation. By extensive simulation on realistic ISP topologies, we draw conclusions on the trade-offs achievable between legacy Traffic Engineering (TE) ISP goals and novel combined TE-NFV goals.", "Network Function Virtualization (NFV) is a promising network architecture concept, in which virtualization technologies are employed to manage networking functions via software as opposed to having to rely on hardware to handle these functions. By shifting dedicated, hardware-based network function processing to software running on commoditized hardware, NFV has the potential to make the provisioning of network functions more flexible and cost-effective, to mention just a few anticipated benefits. 
Despite consistent initial efforts to make NFV a reality, little has been done towards efficiently placing virtual network functions and deploying service function chains (SFC). With respect to this particular research problem, it is important to make sure resource allocation is carefully performed and orchestrated, preventing over- or under-provisioning of resources and keeping end-to-end delays comparable to those observed in traditional middlebox-based networks. In this paper, we formalize the network function placement and chaining problem and propose an Integer Linear Programming (ILP) model to solve it. Additionally, in order to cope with large infrastructures, we propose a heuristic procedure for efficiently guiding the ILP solver towards feasible, near-optimal solutions. Results show that the proposed model leads to a reduction of up to 25 in end-to-end delays (in comparison to chainings observed in traditional infrastructures) and an acceptable resource over-provisioning limited to 4 . Further, we demonstrate that our heuristic approach is able to find solutions that are very close to optimality while delivering results in a timely manner.", "With the rapid growth of user data, service innovation, and the persistent necessity to reduce costs, today's mobile operators are faced with severe challenges. In networking, two new concepts have emerged aiming at cost reduction, increase of network scalability and service flexibility, namely Network Functions Virtualization (NFV) and Software Defined Networking (SDN). NFV proposes to run the mobile network functions as software instances on commodity servers or datacenters (DC), while SDN supports a decomposition of the mobile network into control-plane and data-plane functions. 
Whereas these new concepts are considered as very promising drivers to design cost efficient mobile network architectures, limited attention has been drawn to the network load and infringed data-plane delay imposed by introducing NFV or SDN. We argue that within a widely-spanned mobile network, there is in fact a high potential to combine both concepts. Taking load and delay into account, there will be areas of the mobile network rather benefiting from an NFV deployment with all functions virtualized, while for other areas, an SDN deployment with functions decomposition is more advantageous. We refer to this problem as the functions placement problem. We propose a model that resolves the functions placement and aims at minimizing the transport network load overhead against several parameters such as data-plane delay, number of potential datacenters and SDN control overhead. We illustrate our proposed concept along with a concrete use case example.", "Network Function Virtualization (NFV) is a new networking paradigm where network functions are executed on commodity servers located in small cloud nodes distributed across the network, and where software defined mechanisms are used to control the network flows. This paradigm is a major turning point in the evolution of networking, as it introduces high expectations for enhanced economical network services, as well as major technical challenges. In this paper, we address one of the main technical challenges in this domain: the actual placement of the virtual functions within the physical network. This placement has a critical impact on the performance of the network, as well as on its reliability and operation cost. We perform a thorough study of the NFV location problem, show that it introduces a new type of optimization problems, and provide near optimal approximation algorithms guaranteeing a placement with theoretically proven performance. 
The performance of the solution is evaluated with respect to two measures: the distance cost between the clients and the virtual functions by which they are served, as well as the setup costs of these functions. We provide bi-criteria solutions reaching constant approximation factors with respect to the overall performance, and adhering to the capacity constraints of the networking infrastructure by a constant factor as well. Finally, using extensive simulations, we show that the proposed algorithms perform well in many realistic scenarios." ] }
1701.07993
2951957138
Virtual Network Functions as a Service (VNFaaS) is currently under attentive study by telecommunications and cloud stakeholders as a promising business and technical direction consisting of providing network functions as a service on a cloud (NFV Infrastructure), instead of delivering standalone network appliances, in order to provide higher scalability and reduce maintenance costs. However, the functioning of such an NFVI hosting the VNFs is fundamental for all the services and applications running on top of it, making it necessary to guarantee a high availability level. Indeed the availability of a VNFaaS relies on the failure rate of its single components, namely the servers, the virtualization software, and the communication network. The proper assignment of the virtual machines implementing network functions to NFVI servers and their protection is essential to guarantee high availability. We model the High Availability Virtual Network Function Placement (HA-VNFP) as the problem of finding the best assignment of virtual machines to servers guaranteeing protection by replication. We propose a probabilistic approach to measure the real availability of a system and design both efficient and effective algorithms that can be used by stakeholders for both online and offline planning.
In @cite_26 VMs are placed with a protection guaranteeing @math -resiliency, that is, at least @math slaves for each VM. The authors propose an integer formulation that they solve by means of constraint programming. In @cite_5 the recovery problem of a cloud system is considered, where slaves are usually turned off to reduce energy consumption but can be turned on in advance to reduce the recovery time. The authors propose a bicriteria approximation algorithm and a greedy heuristic. In @cite_22 the authors solve a problem where links connecting datacenters may fail, and a star connection between VMs must be found minimizing the probability of failure; they propose an exact algorithm and a greedy algorithm to solve small and large instances, respectively. Within disaster-resilient VM placement, @cite_14 proposes a protection scheme in which for each master a slave is selected on a different datacenter, enforcing also path protection. In @cite_8 the authors solve the problem of placing slaves for a given set of master VMs without exceeding either server or link capacities. Their heuristic approach decomposes the problem into two parts: the first allocates slaves, and the second defines protection relationships.
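The "place slaves, then record protection relationships" idea described for @cite_8 can be caricatured with a greedy sketch. This is an illustration only, not the cited heuristic: the instance is invented, link capacities are ignored, and the anti-affinity rule (a backup never lands on its master's server) stands in for the real constraints.

```python
def place_slaves(masters, capacity):
    """masters: dict vm -> host of its master copy.
    capacity: dict host -> free backup slots (mutated in place).
    Greedily give each VM one slave on a different host, preferring the
    host with the most spare capacity; returns dict vm -> backup host."""
    protection = {}
    for vm, host in masters.items():
        # Anti-affinity: exclude the master's own host from the candidates.
        candidates = [h for h in capacity if h != host and capacity[h] > 0]
        if not candidates:
            raise RuntimeError(f"no feasible backup host for {vm}")
        backup = max(candidates, key=lambda h: capacity[h])
        capacity[backup] -= 1
        protection[vm] = backup
    return protection

# Invented instance: two masters on s1, one on s2, three servers with slots.
masters = {"vm1": "s1", "vm2": "s1", "vm3": "s2"}
capacity = {"s1": 1, "s2": 1, "s3": 2}
plan = place_slaves(masters, capacity)
print(plan)  # every backup lands on a host different from its master's
```

A real placement would additionally check server and link capacities jointly, which is exactly why the cited works resort to ILP or multi-phase heuristics.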
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_8", "@cite_5" ], "mid": [ "2098440105", "2169138757", "2096006784", "2399443652", "2136020837" ], "abstract": [ "A key strategy to build disaster-resilient clouds is to employ backups of virtual machines in a geo-distributed infrastructure. Today, the continuous and acknowledged replication of virtual machines in different servers is a service provided by different hypervisors. This strategy guarantees that the virtual machines will have no loss of disk and memory content if a disaster occurs, at a cost of strict bandwidth and latency requirements. Considering this kind of service, in this work, we propose an optimization problem to place servers in a wide area network. The goal is to guarantee that backup machines do not fail at the same time as their primary counterparts. In addition, by using virtualization, we also aim to reduce the amount of backup servers required. The optimal results, achieved in real topologies, reduce the number of backup servers by at least 40 . Moreover, this work highlights several characteristics of the backup service according to the employed network, such as the fulfillment of latency requirements.", "The placement of virtual machines (VMs) on a cluster of hosts under multiple constraints, including administrative (security, regulations) resource-oriented (capacity, energy), and QoS-oriented (performance) is a highly complex task. We define a new high-availability property for a VM, when a VM is marked as k-resilient, as long as there are up to k host failures, it should be guaranteed that it can be relocated to a non-failed host without relocating other VMs. Together with Hardware Predictive Failure Analysis and live migration, which enable VMs to be evacuated from a host before it fails, this property allows the continuous running of VMs on the cluster despite host failures. 
The complexity of the constraints associated with k-resiliency, which are naturally expressed by Second Order logic statements, prevented their integration into the placement computation until now. We present a novel algorithm which enables this integration by transforming the k-resiliency constraints to rules consumable by a generic Constraint Programming engine, prove that it guarantees the required resiliency and describe the implementation. We provide some preliminary results and compare our high availability support with naive solutions.", "In this paper, we study the reliable resource allocation (RRA) problem of allocating virtual machines (VMs) from multiple optically interconnected data centers (DCs) with the objective of minimizing the total failure probability based on the information obtained from the optical network virtulization. We first describe the framework of resource allocation, formulate the RRA problem, and prove that RRA is NP-complete. We provide an algorithm, named Minimum Failure Cover (MFC), to obtain optimal solutions for small scale problems. We then provide a greedy algorithm, named VM-over-Reliability (VOR), to solve large scale problems. Numerical results show that VOR achieves results close to optimal solutions gained by MFC for small scale problems. Numerical results also show that VOR outperforms the resource allocation through random DC selection (RDS).", "In cloud data centers, where hosted applications share the underlying network resources, network-bandwidth guarantees have been shown to improve predictability of application performance and cost. However, recent empirical studies have also shown that often data center devices and links are not all that reliable and that failures may cause service outages, rendering significant revenue loss for the affected tenants, as well as the cloud operator. Accordingly, cloud operators are pressed to offer both reliable and predictable performance for the hosted applications. 
While much work has been done on solving both problems separately, this paper seeks to develop a joint framework by which cloud operators can offer both performance and availability guarantees for the hosted tenants. In particular, this paper considers a simple model to abstract the bandwidth guarantees requirement for the tenant and presents a protection plan design which consists of backup virtual machines placement and bandwidth provisioning to optimize the internal data center traffic. We show through solid motivational examples that finding the optimal protection plan design is highly perplexing, and encompasses several constituent challenges. Owing to its complexity, we decompose it into two subproblems, and solve them separately. First, we invoke a placement subproblem of the minimum number of backup VMs and then we attempt to find the most efficient correspondence between backup and primary VMs (i.e., protection plan) which minimizes the bandwidth redundancy. Our numerical evaluation shows that our two-step method is both scalable and accurate; further, it performs much better than a baseline method where placement of backup VMs is done at random.", "Maintaining high availability of IaaS services at a reasonable cost is a challenging task that received recent attention due to the growing popularity of Cloud computing as a preferred means of affordable IT outsourcing. In large data-centers faults are prone to happen and thus the only reasonable cost-effective method of providing high availability of services is an SLA aware recovery plan; that is, a mapping of the service VMs onto backup machines where they can be executed in case of a failure. The recovery process may benefit from powering on some of these machines in advance, since redeployment on powered machines is much faster. However, this comes with an additional maintenance cost, so the real problem is how to balance between the expected recovery time improvement and the cost of machines activation. 
We model this problem as an offline optimization problem and present a bicriteria approximation algorithm for it. While this is the first performance guaranteed algorithm for this problem, it is somewhat complex to implement in practice. Thus, we further present a much simpler and practical heuristic based on a greedy approach. We evaluate the performance of this heuristic over real data-center data, and show that it performs well in terms of scale, hierarchical faults and variant costs. Our results indicate that our scheme can reduce the overall recovery costs by 10-15 when compared to currently used approaches. We also show that fault recovery cost aware VM placement may farther help reducing the expected recovery costs, as it can reduce the backup machine activations costs." ] }
1701.08215
2953198080
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
The local estimates for parabolic kinetic equations with rough coefficients play an important role in this work. Local @math estimates were obtained in @cite_19 using Moser iteration, and local Hölder estimates were proven in @cite_6 @cite_18 using a weak Poincaré inequality. A new proof was given in @cite_22 using a version of De Giorgi's method.
{ "cite_N": [ "@cite_19", "@cite_18", "@cite_22", "@cite_6" ], "mid": [ "1983529651", "", "2498247430", "2092830636" ], "abstract": [ "We adapt the iterative scheme by Moser, to prove that the weak solutions to an ultraparabolic equation, with measurable coefficients, are locally bounded functions. Due to the strong degeneracy of the equation, our method differs from the classical one in that it is based on some ad hoc Sobolev type inequalities for solutions.", "", "We extend the De Giorgi--Nash--Moser theory to a class of kinetic Fokker-Planck equations and deduce new results on the Landau-Coulomb equation. More precisely, we first study the Hölder regularity and establish a Harnack inequality for solutions to a general linear equation of Fokker-Planck type whose coefficients are merely measurable and essentially bounded, i.e. assuming no regularity on the coefficients in order to later derive results for non-linear problems. This general equation has the formal structure of the hypoelliptic equations \"of type II\" , sometimes also called ultraparabolic equations of Kolmogorov type, but with rough coefficients: it combines a first-order skew-symmetric operator with a second-order elliptic operator involving derivatives along only part of the coordinates and with rough coefficients. These general results are then applied to the non-negative essentially bounded weak solutions of the Landau equation with inverse-power law @math @math [--d, 1] whose mass, energy and entropy density are bounded and mass is bounded away from 0, and we deduce the Hölder regularity of these solutions.", "We obtain the Cα regularity for weak solutions of a class of non-homogeneous ultraparabolic equation, with measurable coefficients. The result generalizes our recent Cα regularity results of homogeneous ultraparabolic equations." ] }
1701.08215
2953198080
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
Classical solutions for have so far only been constructed in a close-to-equilibrium setting: see the work of Guo @cite_7 and Mouhot-Neumann @cite_1 . A suitable notion of weak solution, for general initial data, was constructed by Alexandre-Villani @cite_15 @cite_14 .
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_1", "@cite_7" ], "mid": [ "1989465898", "", "1990233096", "2039481916" ], "abstract": [ "Abstract This paper studies the approximation of the Boltzmann equation by the Landau equation in a regime when grazing collisions prevail. While all previous results in the subject were limited to the spatially homogeneous case, here we manage to cover the general, space-dependent situation, assuming only basic physical estimates of finite mass, energy, entropy and entropy production. The proofs are based on the recent results and methods introduced previously in [R. Alexandre, C. Villani, Comm. Pure Appl. Math. 55 (1) (2002) 30–70] by both authors, and the entropy production smoothing effects established in [R. , Arch. Rational Mech. Anal. 152 (4) (2000) 327–355]. We are able to treat realistic singularities of Coulomb type, and approximations of the Debye cut. However, our method only works for finite-time intervals, while the Landau equation is supposed to describe long-time corrections to the Vlasov–Poisson equation. If the mean-field interaction is neglected, then our results apply to physically relevant situations after a time rescaling.", "", "For a general class of linear collisional kinetic models in the torus, including in particular the linearized Boltzmann equation for hard spheres, the linearized Landau equation with hard and moderately soft potentials and the semi-classical linearized fermionic and bosonic relaxation models, we prove explicit coercivity estimates on the associated integro-differential operator for some modified Sobolev norms. We deduce the existence of classical solutions near equilibrium for the full nonlinear models associated with explicit regularity bounds, and we obtain explicit estimates on the rate of exponential convergence towards equilibrium in this perturbative setting. 
The proof is based on a linear energy method which combines the coercivity property of the collision operator in the velocity space with transport effects, in order to deduce coercivity estimates in the whole phase space.", "The Landau equation, which was proposed by Landau in 1936, is a fundamental equation to describe collisions among charged particles interacting with their Coulombic force. In this article, global in time classical solutions near Maxwellians are constructed for the Landau equation in a periodic box. Our result also covers a class of generalized Landau equations, which describes grazing collisions in a dilute gas." ] }
1701.08215
2953198080
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
The global @math estimate we prove in Theorem is similar to an estimate in @cite_16 for the Boltzmann equation. The techniques in the proof are completely different. The propagation of Gaussian bounds that we give in Theorem is reminiscent of the result in @cite_10 . That result is for the space-homogeneous Boltzmann equation with cut-off, which is in some sense the opposite of the Landau equation in terms of the angular singularity in the cross section.
{ "cite_N": [ "@cite_16", "@cite_10" ], "mid": [ "230142061", "2018379915" ], "abstract": [ "We apply recent results on regularity for general integro-differential equations to derive a priori estimates in Holder spaces for the space homogeneous Boltzmann equation in the non cut-off case. We also show an a priori estimate in ( L^ ) which applies in the space inhomogeneous case as well, provided that the macroscopic quantities remain bounded.", "For the spatially homogeneous Boltzmann equation with cutoff hard potentials, it is shown that solutions remain bounded from above uniformly in time by a Maxwellian distribution, provided the initial data have a Maxwellian upper bound. The main technique is based on a comparison principle that uses a certain dissipative property of the linear Boltzmann equation. Implications of the technique to propagation of upper Maxwellian bounds in the spatially-inhomogeneous case are discussed." ] }
1701.08215
2953198080
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
In order to keep track of the constants for parabolic regularization estimates (as in @cite_22 ) for large velocities, we describe a change of variables in Lemma . This change of variables may be useful in other contexts. It is related to one mentioned in the appendix of @cite_17 for the Boltzmann equation.
{ "cite_N": [ "@cite_22", "@cite_17" ], "mid": [ "2498247430", "2513014888" ], "abstract": [ "We extend the De Giorgi--Nash--Moser theory to a class of kinetic Fokker-Planck equations and deduce new results on the Landau-Coulomb equation. More precisely, we first study the Hölder regularity and establish a Harnack inequality for solutions to a general linear equation of Fokker-Planck type whose coefficients are merely measurable and essentially bounded, i.e. assuming no regularity on the coefficients in order to later derive results for non-linear problems. This general equation has the formal structure of the hypoelliptic equations \"of type II\" , sometimes also called ultraparabolic equations of Kolmogorov type, but with rough coefficients: it combines a first-order skew-symmetric operator with a second-order elliptic operator involving derivatives along only part of the coordinates and with rough coefficients. These general results are then applied to the non-negative essentially bounded weak solutions of the Landau equation with inverse-power law @math @math [--d, 1] whose mass, energy and entropy density are bounded and mass is bounded away from 0, and we deduce the Hölder regularity of these solutions.", "We obtain the weak Harnack inequality and Hölder estimates for a large class of kinetic integro-differential equations. We prove that the Boltzmann equation without cutoff can be written in this form and satisfies our assumptions provided that the mass density is bounded away from vacuum and mass, energy and entropy densities are bounded above. As a consequence, we derive a local Hölder estimate and a quantitative lower bound for solutions of the (inhomogeneous) Boltzmann equation without cutoff." ] }
1701.08215
2953198080
We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.
For the homogeneous Landau equation, which arises when @math is assumed to be independent of @math in , the theory is more developed. The @math smoothing is established for hard potentials in @cite_3 and for Maxwell molecules in @cite_5 , under the assumption that the initial data has finite mass and energy. Propagation of @math estimates in the case of moderately soft potentials was shown in @cite_0 and @cite_4 . Global upper bounds in a weighted @math space were established in @cite_20 , even for @math , as a consequence of entropy dissipation. Global @math bounds that do not depend on @math and that do not degenerate as @math were derived in @cite_21 for moderately soft potentials, and this result also implies @math smoothing by standard parabolic regularity theory. Note that in the space homogeneous case our assumptions , and hold for all @math provided that the initial data has finite mass, energy and entropy. Both Theorems and are new results even in the space homogeneous case. The previous results for soft potentials do not address the decay of the solution for large velocities.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_3", "@cite_0", "@cite_5", "@cite_20" ], "mid": [ "2964294246", "2964023345", "2066588103", "2135329241", "2062484908", "2963550302" ], "abstract": [ "This paper is devoted to some a priori estimates for the homogeneous Landau equation with soft potentials. Using coercivity properties of the Landau operator for soft potentials, we prove that the global in time a priori estimates of weak solutions in @math space hold true for moderately soft potential cases @math without any smallness assumption on the initial data. For very soft potential cases @math , which cover in particular the Coulomb case @math , we get local in time estimates of weak solutions in @math . In the proofs of these estimates, global ones for the special case @math and local ones for very soft potential cases @math , the control on time integral of some weighted Fisher information is required, which is an additional a priori estimate given by the entropy dissipation inequality.", "Abstract We consider a parabolic equation in nondivergence form, defined in the full space [0, ∞) × R^d, with a power nonlinearity as the right-hand side. We obtain an upper bound for the solution in terms of a weighted control in L^p. This upper bound is applied to the homogeneous Landau equation with moderately soft potentials. We obtain an estimate in L^∞(R^d) for the solution of the Landau equation, for positive time, which depends only on the mass, energy and entropy of the initial data.", "We study the Cauchy problem for the homogeneous Landau equation of kinetic theory, in the case of hard potentials. We prove that for a large class of initial data, there exists a unique weak solution to this problem, which becomes immediately smooth and rapidly decaying at infinity.", "Abstract This paper deals with some global in time a priori estimates of the spatially homogeneous Landau equation for soft potentials γ ∈ [−2, 0).
For the first result, we obtain the estimate of weak solutions in L_t^α L_v^{3−e} for α = 2(3−e)/(3(2−e)) and 0 < e < 1, which is an improvement over the estimates by Fournier and Guerin [10]. For the second result, we have the estimate of weak solutions in L_t^∞ L_v^p, p > 1, which extends part of the results by Fournier and Guerin [10] and Alexandre, Liao and Lin [1]. As an application, we deduce some global well-posedness results for γ ∈ [−2, 0). Our estimates include the case γ = −2, which is the key point in this paper.", "We establish a simplified form for the Landau equation with Maxwellian-type molecules. We study in detail the Cauchy problem associated to this equation, and some qualitative features of the solution. Explicit solutions are given.", "Abstract We present in this paper an estimate which bounds from below the entropy dissipation D(f) of the Landau operator with Coulomb interaction by a weighted H^1 norm of the square root of f. As a consequence, we get a weighted L_t^1(L_v^3) estimate for the solutions of the spatially homogeneous Landau equation with Coulomb interaction, and the propagation of L^1 moments of any order for this equation. We also present an application of our estimate to the Landau equation with (moderately) soft potentials, providing thus a new proof of some recent results of [30]." ] }
1701.08071
2583743457
In this paper, the task of emotion recognition from speech is considered. The proposed approach uses a deep recurrent neural network trained on a sequence of acoustic features calculated over small speech intervals. At the same time, a special probabilistic CTC loss function makes it possible to handle long utterances containing both emotional and neutral parts. The effectiveness of this approach is shown in two ways. Firstly, a comparison with recent advances in this field is carried out. Secondly, human performance on the same task is measured. Both criteria show the high quality of the proposed method.
Before the deep learning era, people came up with many different methods, most of which extract complex low-level handcrafted features from the initial audio recording of the utterance and then apply conventional classification algorithms. One approach is to use generative models such as Hidden Markov Models or Gaussian Mixture Models to learn the underlying probability distribution of the features and then to train a Bayesian classifier using the maximum likelihood principle. Variations of this method were introduced in 2003 in @cite_44 and in 2004 in @cite_24 . Another common approach is to gather global statistics over local low-level features computed over parts of the signal and apply a classification model. This approach was used in 2009 @cite_4 and in 2011 @cite_33 with a Support Vector Machine as the classification model. In 2011, @cite_26 used Decision Trees, and in 2013, @cite_8 utilized K Nearest Neighbours instead of SVM. People have also tried to adapt popular speech recognition methods to the task of emotion recognition: for more information, see the works from 2007 @cite_29 and 2013 @cite_9 .
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_33", "@cite_8", "@cite_29", "@cite_9", "@cite_44", "@cite_24" ], "mid": [ "2153720647", "2167854178", "1677182931", "2093174546", "1972280480", "1522301498", "2110052520", "1668904664" ], "abstract": [ "Automated emotion state tracking is a crucial element in the computational study of human communication behaviors. It is important to design robust and reliable emotion recognition systems that are suitable for real-world applications both to enhance analytical abilities to support human decision making and to design human-machine interfaces that facilitate efficient communication. We introduce a hierarchical computational structure to recognize emotions. The proposed structure maps an input speech utterance into one of the multiple emotion classes through subsequent layers of binary classifications. The key idea is that the levels in the tree are designed to solve the easiest classification tasks first, allowing us to mitigate error propagation. We evaluated the classification framework on two different emotional databases using acoustic features, the AIBO database and the USC IEMOCAP database. In the case of the AIBO database, we obtain a balanced recall on each of the individual emotion classes using this hierarchical structure. The performance measure of the average unweighted recall on the evaluation data set improves by 3.37 absolute (8.82 relative) over a Support Vector Machine baseline model. In the USC IEMOCAP database, we obtain an absolute improvement of 7.44 (14.58 ) over a baseline Support Vector Machine modeling. The results demonstrate that the presented hierarchical approach is effective for classifying emotional utterances in multiple database contexts.", "Various open-source toolkits exist for speech recognition and speech processing. These toolkits have brought a great benefit to the research community, i.e. speeding up research. 
Yet, no such freely available toolkit exists for automatic affect recognition from speech. We herein introduce a novel open-source affect and emotion recognition engine, which integrates all necessary components in one highly efficient software package. The components include audio recording and audio file reading, state-of-the-art paralinguistic feature extraction and plugable classification modules. In this paper we introduce the engine and extensive baseline results. Pre-trained models for four affect recognition tasks are included in the openEAR distribution. The engine is tailored for multi-threaded, incremental on-line processing of live input in real-time, however it can also be used for batch processing of databases.", "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94 top-5 test error on the ImageNet 2012 classification dataset. This is a 26 relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66 [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1 , [26]) on this dataset.", "Human emotion changes continuously and sequentially. This results in dynamics intrinsic to affective communication. One of the goals of automatic emotion recognition research is to computationally represent and analyze these dynamic patterns. 
In this work, we focus on the global utterance-level dynamics. We are motivated by the hypothesis that global dynamics have emotion-specific variations that can be used to differentiate between emotion classes. Consequently, classification systems that focus on these patterns will be able to make accurate emotional assessments. We quantitatively represent emotion flow within an utterance by estimating short-time affective characteristics. We compare time-series estimates of these characteristics using Dynamic Time Warping, a time-series similarity measure. We demonstrate that this similarity can effectively recognize the affective label of the utterance. The similarity-based pattern modeling outperforms both a feature-based baseline and static modeling. It also provides insight into typical high-level patterns of emotion. We visualize these dynamic patterns and the similarities between the patterns to gain insight into the nature of emotion expression.", "Speech emotion recognition is a challenging yet important speech technology. In this paper, the GMM supervector based SVM is applied to this field with spectral features. A GMM is trained for each emotional utterance, and the corresponding GMM supervector is used as the input feature for SVM. Experimental results on an emotional speech database demonstrate that the GMM supervector based SVM outperforms standard GMM on speech emotion recognition.", "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. 
The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.", "In this contribution we introduce speech emotion recognition by use of continuous hidden Markov models. Two methods are propagated and compared throughout the paper. Within the first method a global statistics framework of an utterance is classified by Gaussian mixture models using derived features of the raw pitch and energy contour of the speech signal. A second method introduces increased temporal complexity applying continuous hidden Markov models considering several states using low-level instantaneous features instead of global statistics. The paper addresses the design of working recognition engines and results achieved with respect to the alluded alternatives. A speech corpus consisting of acted and spontaneous emotion samples in German and English language is described in detail. Both engines have been tested and trained using this equivalent speech corpus. Results in recognition of seven discrete emotions exceeded 86 recognition rate. As a basis of comparison the similar judgment of human deciders classifying the same corpus at 79.8 recognition rate was analyzed.", "Recognizing human emotions attitudes from speech cues has gained increased attention recently. Most previous work has focused primarily on suprasegmental prosodic features calculated at the utterance level for this purpose. 
Notably, not much attention is paid to details at the segmental phoneme level in the modeling. Based on the hypothesis that different emotions have varying effects on the properties of the different speech sounds, this paper investigates the usefulness of phoneme-level modeling for the classification of emotional states from speech. Hidden Markov models (HMM) based on short-term spectral features are used for this purpose using data obtained from a recording of an actress expressing 4 different emotional states: anger, happiness, neutral, and sadness. We designed and compared two sets of HMM classifiers: a generic set of “emotional speech” HMMs (one for each emotion) and a set of broad phonetic-class based HMMs for each emotion type considered. Five broad phonetic classes were used to explore the effect of emotional coloring on different phoneme classes, and it was found that (spectral properties of) vowel sounds were the best indicator of emotions in terms of the classification performance. The experiments also showed that the best performance can be obtained by using phoneme-class classifiers over the generic “emotional” HMM classifier and classifiers based on global prosodic features. To see the complementary effect of the prosodic and spectral features, two classifiers were combined at the decision level. The improvement was 0.55 in absolute compared with the result from the phoneme-class based HMM classifier." ] }
1701.08071
2583743457
In this paper the task of emotion recognition from speech is considered. Proposed approach uses deep recurrent neural network trained on a sequence of acoustic features calculated over small speech intervals. At the same time special probabilistic-nature CTC loss function allows to consider long utterances containing both emotional and neutral parts. The effectiveness of such an approach is shown in two ways. Firstly, the comparison with recent advances in this field is carried out. Secondly, human performance on the same task is measured. Both criteria show the high quality of the proposed method.
One of the first deep learning end-to-end approaches was presented in 2014 in @cite_38 . The idea is to split each utterance into frames and calculate low-level features as a first step. The authors then used a densely connected neural network with three hidden layers to transform this sequence of features into a sequence of probability distributions over the target emotion labels. These probabilities are then aggregated into utterance-level features using simple statistics like maximum, minimum, average, percentiles, etc. After that, an Extreme Learning Machine (ELM) @cite_34 is trained to classify utterances by emotional state.
{ "cite_N": [ "@cite_38", "@cite_34" ], "mid": [ "2295001676", "2111072639" ], "abstract": [ "Abstract Speech emotion recognition is a challenging problem partly because it is unclear what features are effective for the task. In this paper we propose to utilize deep neural networks (DNNs) to extract high level features from raw data and show that they are effective for speech emotion recognition. We first produce an emotion state probability distribution for each speech segment using DNNs. We then construct utterance-level features from segment-level probability distributions. These utterance-level features are then fed into an extreme learning machine (ELM), a special simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. The experimental results demonstrate that the proposed approach effectively learns emotional information from low-level features and leads to 20% relative accuracy improvement compared to the state-of-the-art approaches. Index Terms: Emotion recognition, Deep neural networks, Extreme learning machine 1. Introduction Despite the great progress made in artificial intelligence, we are still far from being able to naturally interact with machines, partly because machines do not understand our emotion states. Recently, speech emotion recognition, which aims to recognize emotion states from speech signals, has been drawing increasing attention. Speech emotion recognition is a very challenging task of which extracting effective emotional features is an open question [1, 2]. A deep neural network (DNN) is a feed-forward neural network that has more than one hidden layer between its inputs and outputs. It is capable of learning high-level representation from the raw features and effectively classifying data [3, 4]. With sufficient training data and appropriate training strategies, DNNs perform very well in many machine learning tasks (e.g., speech recognition [5]). Feature analysis in emotion recognition is much less studied than that in speech recognition. Most previous studies empirically chose features for emotion classification. In this study, a DNN takes as input the conventional acoustic features within a speech segment and produces segment-level emotion state probability distributions, from which utterance-level features are constructed and used to determine the utterance-level emotion state. Since the segment-level outputs already provide considerable emotional information and the utterance-level classifica-", "Abstract It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: (1) the slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results based on a few artificial and real benchmark function approximation and classification problems including very large complex applications show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks." ] }
1701.08071
2583743457
In this paper, the task of emotion recognition from speech is considered. The proposed approach uses a deep recurrent neural network trained on a sequence of acoustic features calculated over small speech intervals. At the same time, a special probabilistic CTC loss function makes it possible to handle long utterances containing both emotional and neutral parts. The effectiveness of this approach is shown in two ways. Firstly, a comparison with recent advances in this field is carried out. Secondly, human performance on the same task is measured. Both criteria show the high quality of the proposed method.
Continuing this line of work, Lee and Tashev presented their paper @cite_11 in 2015. They used the same idea and approach as @cite_38 . The main contribution is that they replaced the simple densely connected network with a recurrent neural network (RNN) with Long Short-Term Memory (LSTM) units. Lee and Tashev also introduced a probabilistic approach to learning which is in some points similar to the approach presented in the current paper. However, they continued to aggregate local probabilities into a global feature vector and to train an ELM on top of it.
{ "cite_N": [ "@cite_38", "@cite_11" ], "mid": [ "2295001676", "2408520939" ], "abstract": [ "Abstract Speech emotion recognition is a challenging problem partly because it is unclear what features are effective for the task. In this paper we propose to utilize deep neural networks (DNNs) to extract high level features from raw data and show that they are effective for speech emotion recognition. We first produce an emotion state probability distribution for each speech segment using DNNs. We then construct utterance-level features from segment-level probability distributions. These utterance-level features are then fed into an extreme learning machine (ELM), a special simple and efficient single-hidden-layer neural network, to identify utterance-level emotions. The experimental results demonstrate that the proposed approach effectively learns emotional information from low-level features and leads to 20% relative accuracy improvement compared to the state-of-the-art approaches. Index Terms: Emotion recognition, Deep neural networks, Extreme learning machine 1. Introduction Despite the great progress made in artificial intelligence, we are still far from being able to naturally interact with machines, partly because machines do not understand our emotion states. Recently, speech emotion recognition, which aims to recognize emotion states from speech signals, has been drawing increasing attention. Speech emotion recognition is a very challenging task of which extracting effective emotional features is an open question [1, 2]. A deep neural network (DNN) is a feed-forward neural network that has more than one hidden layer between its inputs and outputs. It is capable of learning high-level representation from the raw features and effectively classifying data [3, 4]. With sufficient training data and appropriate training strategies, DNNs perform very well in many machine learning tasks (e.g., speech recognition [5]). Feature analysis in emotion recognition is much less studied than that in speech recognition. Most previous studies empirically chose features for emotion classification. In this study, a DNN takes as input the conventional acoustic features within a speech segment and produces segment-level emotion state probability distributions, from which utterance-level features are constructed and used to determine the utterance-level emotion state. Since the segment-level outputs already provide considerable emotional information and the utterance-level classifica-", "This paper presents a speech emotion recognition system using a recurrent neural network (RNN) model trained by an efficient learning algorithm. The proposed system takes into account the long-range context effect and the uncertainty of emotional label expressions. To extract high-level representation of emotional states with regard to its temporal dynamics, a powerful learning method with a bidirectional long short-term memory (BLSTM) model is adopted. To overcome the uncertainty of emotional labels, such that all frames in the same utterance are mapped into the same emotional label, it is assumed that the label of each frame is regarded as a sequence of random variables. Then, the sequences are trained by the proposed learning algorithm. The weighted accuracy of the proposed emotion recognition system is improved up to 12% compared to the DNN-ELM based emotion recognition system used as a baseline." ] }
1701.08071
2583743457
In this paper, the task of emotion recognition from speech is considered. The proposed approach uses a deep recurrent neural network trained on a sequence of acoustic features calculated over small speech intervals. At the same time, a special probabilistic CTC loss function makes it possible to handle long utterances containing both emotional and neutral parts. The effectiveness of this approach is shown in two ways. Firstly, a comparison with recent advances in this field is carried out. Secondly, human performance on the same task is measured. Both criteria show the high quality of the proposed method.
After that, several purely deep learning, end-to-end approaches based on modern architectures have arisen. Neumann and Vu in their 2017 paper @cite_32 used the currently popular attentive architecture. Attention is a mechanism that was first introduced in 2015 in @cite_31 and is now state-of-the-art in the field of machine translation @cite_17 . A 2017 work @cite_42 used a slightly different approach based on Deep Belief Networks (DBN) and a continuous problem statement in the 2D Valence-Arousal space. Each utterance can be assessed on an ordinal scale and then embedded into a multidimensional space; regions in this space are associated with different emotions, and the task is to learn how to embed the utterances in this space. One of the most recent and interesting works was presented in 2018 in @cite_21 . Its authors suggested transfer learning from the usual speech recognition task to emotion recognition. One might anticipate this method to work well because the speech corpora for speech recognition are far better developed: they are bigger and better annotated. The authors performed fine-tuning of a DeepSpeech @cite_5 style network trained on LibriSpeech @cite_23 .
{ "cite_N": [ "@cite_42", "@cite_21", "@cite_32", "@cite_23", "@cite_5", "@cite_31", "@cite_17" ], "mid": [ "2343758848", "2774085128", "2620836355", "1494198834", "2949640717", "2133564696", "2626778328" ], "abstract": [ "Dimensional models have been proposed in psychology studies to represent complex human emotional expressions. Activation and valence are two common dimensions in such models. They can be used to describe certain emotions. For example, anger is one type of emotion with a low valence and high activation value; neutral has both a medium level valence and activation value. In this work, we propose to apply multi-task learning to leverage activation and valence information for acoustic emotion recognition based on the deep belief network (DBN) framework. We treat the categorical emotion recognition task as the major task. For the secondary task, we leverage activation and valence labels in two different ways, category level based classification and continuous level based regression. The combination of the loss functions from the major and secondary tasks is used as the objective function in the multi-task learning framework. After iterative optimization, the values from the last hidden layer in the DBN are used as new features and fed into a support vector machine classifier for emotion recognition. Our experimental results on the Interactive Emotional Dyadic Motion Capture and Sustained Emotionally Colored Machine-Human Interaction Using Nonverbal Expression databases show significant improvements on unweighted accuracy, illustrating the benefit of utilizing additional information in a multi-task learning setup for emotion recognition.", "Acoustic emotion recognition aims to categorize the affective state of the speaker and is still a difficult task for machine learning models. 
The difficulties come from the scarcity of training data, general subjectivity in emotion perception resulting in low annotator agreement, and the uncertainty about which features are the most relevant and robust ones for classification. In this paper, we will tackle the latter problem. Inspired by the recent success of transfer learning methods we propose a set of architectures which utilize neural representations inferred by training on large speech databases for the acoustic emotion recognition task. Our experiments on the IEMOCAP dataset show 10% relative improvements in the accuracy and F1-score over the baseline recurrent neural network which is trained end-to-end for emotion recognition.", "Speech emotion recognition is an important and challenging task in the realm of human-computer interaction. Prior work proposed a variety of models and feature sets for training a system. In this work, we conduct extensive experiments using an attentive convolutional neural network with multi-view learning objective function. We compare system performance using different lengths of the input signal, different types of acoustic features and different types of emotion speech (improvised vs. scripted). Our experimental results on the Interactive Emotional Motion Capture (IEMOCAP) database reveal that the recognition performance strongly depends on the type of speech data independent of the choice of input features. Furthermore, we achieved state-of-the-art results on the improvised speech data of IEMOCAP.", "This paper introduces a new corpus of read English speech, suitable for training and evaluating speech recognition systems. The LibriSpeech corpus is derived from audiobooks that are part of the LibriVox project, and contains 1000 hours of speech sampled at 16 kHz. We have made the corpus freely available for download, along with separately prepared language-model training data and pre-built language models.
We show that acoustic models trained on LibriSpeech give lower error rate on the Wall Street Journal (WSJ) test sets than models trained on WSJ itself. We are also releasing Kaldi scripts that make it easy to build these systems.", "We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech--two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, resulting in a 7x speedup over our previous system. Because of this efficiency, experiments that previously took weeks now run in days. This enables us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. 
In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." ] }
1701.07254
2949674250
Micro Aerial Vehicles (MAVs) are limited in their operation outdoors near obstacles by their ability to withstand wind gusts. Currently widespread position control methods such as Proportional Integral Derivative control do not perform well under the influence of gusts. Incremental Nonlinear Dynamic Inversion (INDI) is a sensor-based control technique that can control nonlinear systems subject to disturbances. It was developed for the attitude control of manned aircraft or MAVs. In this paper we generalize this method to the outer loop control of MAVs under severe gust loads. Significant improvements over a traditional Proportional Integral Derivative (PID) controller are demonstrated in an experiment where the quadrotor flies in and out of a windtunnel exhaust at 10 m s. The control method does not rely on frequent position updates, as is demonstrated in an outside experiment using a standard GPS module. Finally, we investigate the effect of using a linearization to calculate thrust vector increments, compared to a nonlinear calculation. The method requires little modeling and is computationally efficient.
@cite_4 developed an altitude controller that utilizes the vertical acceleration measurement. However, they fed the acceleration back multiplied by a gain, without utilizing the physical relation between thrust and acceleration. In a different paper, they state that their PID position control implementation has little ability to reject disturbances from wind and translational velocity effects @cite_17 . A vertical controller using the INDI principle was developed for a traditional helicopter in simulation in @cite_18 . Only very limited sensor noise was taken into account, which did not require any filtering. Moreover, in both of these papers, coupling can be expected because the vertical axis is treated separately from the lateral axes. We show that by inverting the control effectiveness for all axes, accelerations in each of these axes can be controlled.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_17" ], "mid": [ "", "2165771902", "2141666765" ], "abstract": [ "", "Abstract Quadrotor helicopters continue to grow in popularity for unmanned aerial vehicle applications. However, accurate dynamic models for deriving controllers for moderate to high speeds have been lacking. This work presents theoretical models of quadrotor aerodynamics with non-zero free-stream velocities based on helicopter momentum and blade element theory, validated with static tests and flight data. Controllers are derived using these models and implemented on the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), demonstrating significant improvements over existing methods. The design of the STARMAC platform is described, and flight results are presented demonstrating improved accuracy over commercially available quadrotors.", "Quadrotor helicopters are emerging as a popular platform for unmanned aerial vehicle (UAV) research, due to the simplicity of their construction and maintenance, their ability to hover, and their vertical take off and landing (VTOL) capability. Current designs have often considered only nominal operating conditions for vehicle control design. This work seeks to address issues that arise when deviating significantly from the hover flight regime. Aided by well established research for helicopter flight control, three separate aerodynamic effects are investigated as they pertain to quadrotor flight, due to vehicular velocity, angle of attack, and airframe design. They cause moments that affect attitude control, and thrust variation that affects altitude control. Where possible, a theoretical development is first presented, and is then validated through both thrust test stand measurements and vehicle flight tests using the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) quadrotor helicopter. The results enabled improved controller performance." ] }
1701.07681
2581867724
Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to the smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods to time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes. In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both fast and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it out-of-the-box achieves almost the same accuracy as highly tuned, domain-specific methods.
In contrast, classifiers rely on comparing features generated from substructures of TS. The most successful approaches can be grouped as using either shapelets or bag-of-patterns (BOP). Shapelets are defined as TS subsequences that are maximally representative of a class. In @cite_8 , a decision tree is built on the distance to a set of shapelets. The Shapelet Transform (ST) @cite_41 @cite_7 , which is the most accurate shapelet approach according to a recent evaluation @cite_40 , uses the distances to the shapelets as input features for an ensemble of different classification methods. In the Learning Shapelets (LS) approach @cite_26 , optimal shapelets are synthetically generated. The drawback of shapelet methods is their high computational complexity, which results in rather long training and classification times.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_8", "@cite_41", "@cite_40" ], "mid": [ "1978371851", "1900747440", "", "2468738844", "2555077524" ], "abstract": [ "Shapelets are discriminative sub-sequences of time series that best predict the target variable. For this reason, shapelet discovery has recently attracted considerable interest within the time-series research community. Currently shapelets are found by evaluating the prediction qualities of numerous candidates extracted from the series segments. In contrast to the state-of-the-art, this paper proposes a novel perspective in terms of learning shapelets. A new mathematical formalization of the task via a classification objective function is proposed and a tailored stochastic gradient learning algorithm is applied. The proposed method enables learning near-to-optimal shapelets directly without the need to try out lots of candidates. Furthermore, our method can learn true top-K shapelets by capturing their interaction. Extensive experimentation demonstrates statistically significant improvement in terms of wins and ranks against 13 baselines over 28 time-series datasets.", "Shapelets have recently been proposed as a new primitive for time series classification. Shapelets are subseries of series that best split the data into its classes. In the original research, shapelets were found recursively within a decision tree through enumeration of the search space. Subsequent research indicated that using shapelets as the basis for transforming datasets leads to more accurate classifiers.", "", "Shapelets are discriminative subsequences of time series, usually embedded in shapelet-based decision trees. 
The enumeration of time series shapelets is, however, computationally costly, which in addition to the inherent difficulty of the decision tree learning algorithm to effectively handle high-dimensional data, severely limits the applicability of shapelet-based decision tree learning from large (multivariate) time series databases. This paper introduces a novel tree-based ensemble method for univariate and multivariate time series classification using shapelets, called the generalized random shapelet forest algorithm. The algorithm generates a set of shapelet-based decision trees, where both the choice of instances used for building a tree and the choice of shapelets are randomized. For univariate time series, it is demonstrated through an extensive empirical investigation that the proposed algorithm yields predictive performance comparable to the current state-of-the-art and significantly outperforms several alternative algorithms, while being at least an order of magnitude faster. Similarly for multivariate time series, it is shown that the algorithm is significantly less computationally costly and more accurate than the current state-of-the-art.", "In the last 5 years there have been a large number of new time series classification algorithms proposed in the literature. These algorithms have been evaluated on subsets of the 47 data sets in the University of California, Riverside time series classification archive. The archive has recently been expanded to 85 data sets, over half of which have been donated by researchers at the University of East Anglia. Aspects of previous evaluations have made comparisons between algorithms difficult. For example, several different programming languages have been used, experiments involved a single train test split and some used normalised data whilst others did not. The relaunch of the archive provides a timely opportunity to thoroughly evaluate algorithms on a larger number of datasets. 
We have implemented 18 recently proposed algorithms in a common Java framework and compared them against two standard benchmark classifiers (and each other) by performing 100 resampling experiments on each of the 85 datasets. We use these results to test several hypotheses relating to whether the algorithms are significantly more accurate than the benchmarks and each other. Our results indicate that only nine of these algorithms are significantly more accurate than both benchmarks and that one classifier, the collective of transformation ensembles, is significantly more accurate than all of the others. All of our experiments and results are reproducible: we release all of our code, results and experimental details and we hope these experiments form the basis for more robust testing of new algorithms in the future." ] }
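The core primitive behind the shapelet methods discussed in this record — the distance from a series to a shapelet, used as a classification feature — is simple enough to sketch. This is an illustrative sketch of the general idea, not the cited implementations; names and the toy series are ours:

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and any
    equal-length subsequence of the series (a brute-force sliding
    window; real implementations prune and normalize)."""
    m = len(shapelet)
    return min(
        float(np.linalg.norm(series[i:i + m] - shapelet))
        for i in range(len(series) - m + 1)
    )

def shapelet_transform(series, shapelets):
    """Map a series to a vector of distances to each shapelet; the
    vector can then feed any standard classifier, as in ST."""
    return [shapelet_distance(series, s) for s in shapelets]

ts = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
feats = shapelet_transform(ts, [np.array([1.0, 2.0, 1.0])])
# the shapelet occurs exactly inside the series, so its distance is 0
```

The brute-force scan over all candidate subsequences is also why the text notes the high computational complexity of shapelet discovery.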
1701.07576
2583240304
We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting and lattice alignment.
Lattice coding based on nested lattices is shown to achieve the capacity of the single-user Gaussian channel in @cite_18 @cite_25 . The idea of lattice-based interference alignment by decoding the sum of lattice codewords appeared in the conference version of @cite_1 . This lattice alignment technique is used to derive capacity bounds for the three-user interference channel in @cite_20 @cite_14 . The idea of decoding the sum of lattice codewords is also used in @cite_17 @cite_8 @cite_23 to derive the approximate capacity of the two-way relay channel. An extended approach, compute-and-forward @cite_10 @cite_0 , first decodes linear combinations of lattice codewords and then solves the lattice equations to recover the desired messages. This approach is also used in @cite_21 to characterize the approximate sum-rate capacity of the fully connected @math -user interference channel.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_8", "@cite_21", "@cite_1", "@cite_0", "@cite_23", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "", "2101431202", "2057929374", "2145886843", "2151027523", "1494099594", "1505858209", "2005196269", "", "", "2147355810" ], "abstract": [ "", "The paper studies a class of three user Gaussian interference channels. A new layered lattice coding scheme is introduced as a transmission strategy. The use of lattice codes allows for an ldquoalignmentrdquo of the interference observed at each receiver. The layered lattice coding is shown to achieve more than one degree of freedom for a class of interference channels and also achieves rates which are better than the rates obtained using the Han-Kobayashi coding scheme.", "In this paper, a Gaussian two-way relay channel, where two source nodes exchange messages with each other through a relay, is considered. We assume that all nodes operate in full-duplex mode and there is no direct channel between the source nodes. We propose an achievable scheme composed of nested lattice codes for the uplink and structured binning for the downlink. Unlike conventional nested lattice codes, our codes utilize two different shaping lattices for source nodes based on a three-stage lattice partition chain, which is a key ingredient for producing the best gap-to-capacity results to date. Specifically, for all channel parameters, the achievable rate region of our scheme is within 1 2 bit from the capacity region for each user and its sum rate is within log3 2 bit from the sum capacity.", "Interference alignment has emerged as a powerful tool in the analysis of multiuser networks. Despite considerable recent progress, the capacity region of the Gaussian @math -user interference channel is still unknown in general, in part due to the challenges associated with alignment on the signal scale using lattice codes. 
This paper develops a new framework for lattice interference alignment, based on the compute-and-forward approach. Within this framework, each receiver decodes by first recovering two or more linear combinations of the transmitted codewords with integer-valued coefficients and then solving these linear combinations for its desired codeword. For the special case of symmetric channel gains, this framework is used to derive the approximate sum capacity of the Gaussian interference channel, up to an explicitly defined outage set of the channel gains. The key contributions are the capacity lower bounds for the weak through strong interference regimes, where each receiver should jointly decode its own codeword along with part of the interfering codewords. As part of the analysis, it is shown that decoding @math linear combinations of the codewords can approach the sum capacity of the @math -user Gaussian multiple-access channel up to a gap of no more than @math bits.", "Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit s Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level.", "This comprehensive treatment of network information theory and its applications provides the first unified coverage of both classical and recent results. 
With an approach that balances the introduction of new models and new coding techniques, readers are guided through Shannon's point-to-point information theory, single-hop networks, multihop networks, and extensions to distributed computing, secrecy, wireless communication, and networking. Elementary mathematical tools and techniques are used throughout, requiring only basic knowledge of probability, whilst unified proofs of coding theorems are based on a few simple lemmas, making the text accessible to newcomers. Key topics covered include successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying. Also covered are feedback and interactive communication, capacity approximations and scaling laws, and asynchronous and random access channels. This book is ideal for use in the classroom, for self-study, and as a reference for researchers and engineers in industry and academia.", "In this paper, we consider a class of single-source multicast relay networks. We assume that all outgoing channels of a node in the network to its neighbors are orthogonal while the incoming signals from its neighbors can interfere with each other. We first focus on Gaussian relay networks with interference and find an achievable rate using a lattice coding scheme. We show that the achievable rate of our scheme is within a constant bit gap from the information theoretic cut-set bound, where the constant depends only on the network topology, but not on the transmit power, noise variance, and channel gains. This is similar to a recent result by Avestimehr, Diggavi, and Tse, who showed an approximate capacity characterization for general Gaussian relay networks. However, our achievability uses a structured code instead of a random one. 
Using the idea used in the Gaussian case, we also consider a linear finite-field symmetric network with interference and characterize its capacity using a linear coding scheme.", "Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.", "", "", "We consider a communication system where two transmitters wish to exchange information through a central relay. The transmitter and relay nodes exchange data over synchronized, average power constrained additive white Gaussian noise channels with a real input with signal-to-noise ratio (SNR) of snr. An upper bound on the capacity is 1 2 log(1 + snr) bits per transmitter per use of the multiple access phase and broadcast phase of the bidirectional relay channel. We show that, using lattice codes and lattice decoding, we can obtain a rate of 1 2 log(1 2 + snr) bits per transmitter, which is essentially optimal at high SNR. The main idea is to decode the sum of the codewords modulo a lattice at the relay followed by a broadcast phase which performs Slepian-Wolf coding. 
We also show that if the two transmitters use identical lattices with minimum angle decoding, we can achieve the same rate of 1 2 log(1 2 + snr). The proposed scheme can be thought of as a joint physical-layer network-layer code which outperforms other recently proposed analog network coding schemes." ] }
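The compute-and-forward step described in this record can be summarized in a short math sketch (notation ours, stated from the standard real-channel formulation rather than from the cited derivations): a relay observing @math decodes an integer combination of the lattice codewords,

```latex
\hat{t} = \Big[ \sum_{l} a_l t_l \Big] \bmod \Lambda,
  \qquad a_l \in \mathbb{Z},
```

which is achievable at the computation rate

```latex
R(\mathbf{h}, \mathbf{a}) =
  \frac{1}{2}\log^{+}\!\left(
    \left( \|\mathbf{a}\|^2
      - \frac{P\,|\mathbf{h}^{\mathsf T}\mathbf{a}|^2}
             {1 + P\|\mathbf{h}\|^2} \right)^{-1}
  \right).
```

Given enough linearly independent combinations, the destination solves the resulting integer system to recover the desired messages — the "solve the lattice equation" step mentioned above.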
1701.07576
2583240304
We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting and lattice alignment.
The idea of sending multiple copies of the same sub-message at different signal levels, so-called Zigzag decoding, appeared in @cite_13 , where receivers collect side information and use it for interference cancellation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2108016639" ], "abstract": [ "We characterize the generalized degrees of freedom of the K user symmetric Gaussian interference channel where all desired links have the same signal-to-noise ratio (SNR) and all undesired links carrying interference have the same interference-to-noise ratio, INR = SNRα. We find that the number of generalized degrees of freedom per user, d(α), does not depend on the number of users, so that the characterization is identical to the 2 user interference channel with the exception of a singularity at α = 1 where d(1) = 1 K. The achievable schemes use multilevel coding with a nested lattice structure that opens the possibility that the sum of interfering signals can be decoded at a receiver even though the messages carried by the interfering signals are not decodable." ] }
1701.07576
2583240304
We derive inner and outer bounds on the capacity region for a class of three-user partially connected interference channels. We focus on the impact of topology, interference alignment, and interplay between interference and noise. The representative channels we consider are the ones that have clear interference alignment gain. For these channels, Z-channel type outer bounds are tight to within a constant gap from capacity. We present near-optimal achievable schemes based on rate-splitting and lattice alignment.
The @math -user cyclic Gaussian interference channel is considered in @cite_6 where an approximate capacity for the weak interference regime ( @math for all @math ) and the exact capacity for the strong interference regime ( @math for all @math ) are derived. Our type 4 and 5 channels are @math cases in interference regimes, which were not considered in @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "2568383335" ], "abstract": [ "This paper studies the capacity region of a K-user cyclic Gaussian interference channel, where the kth user interferes with only the (k-1)th user (mod K ) in the network. Inspired by the work of Etkin, Tse, and Wang, who derived a capacity region outer bound for the two-user Gaussian interference channel and proved that a simple Han-Kobayashi power-splitting scheme can achieve to within one bit of the capacity region for all values of channel parameters, this paper shows that a similar strategy also achieves the capacity region of the K-user cyclic interference channel to within a constant gap in the weak interference regime. Specifically, for the K-user cyclic Gaussian interference channel, a compact representation of the Han-Kobayashi achievable rate region using Fourier-Motzkin elimination is first derived; a capacity region outer bound is then established. It is shown that the Etkin-Tse-Wang power-splitting strategy gives a constant gap of at most 2 bits in the weak interference regime. For the special three-user case, this gap can be sharpened to 1 ½ bits by time-sharing of several different strategies. The capacity result of the K-user cyclic Gaussian interference channel in the strong interference regime is also given. Further, based on the capacity results, this paper studies the generalized degrees of freedom (GDoF) of the symmetric cyclic interference channel. It is shown that the GDoF of the symmetric capacity is the same as that of the classic two-user interference channel, no matter how many users are in the network." ] }
1701.07368
2952441726
We investigate the problem of representing an entire video using CNN features for human action recognition. Currently, limited by GPU memory, we have not been able to feed a whole video into CNN RNNs for end-to-end learning. A common practice is to use sampled frames as inputs and video labels as supervision. One major problem of this popular approach is that the local samples may not contain the information indicated by global labels. To deal with this problem, we propose to treat the deep networks trained on local inputs as local feature extractors. After extracting local features, we aggregate them into global features and train another mapping function on the same training data to map the global features into global labels. We study a set of problems regarding this new type of local features such as how to aggregate them into global features. Experimental results on HMDB51 and UCF101 datasets show that, for these new local features, a simple maximum pooling on the sparsely sampled features lead to significant performance improvement.
In traditional video representations, trajectory-based approaches @cite_22 @cite_5 , especially Dense Trajectories (DT) and Improved Dense Trajectories (IDT) @cite_18 @cite_0 , form the basis of the current state-of-the-art hand-crafted algorithms. These trajectory-based methods are designed to address the flaws of image-extended video features, and their superior performance validates the need for a dedicated representation of motion features. Owing to the popularity of IDT, there have been many studies attempting to improve it. @cite_15 enhanced the performance of IDT by increasing codebook sizes and fusing multiple coding methods. @cite_7 explored ways to sub-sample and generate vocabularies for DT features. Hoai & Zisserman @cite_12 achieved superior performance on several action recognition datasets by using three techniques: data augmentation, modeling the score distribution over video subsequences, and capturing the relationships among action classes. @cite_6 modeled the evolution of appearance in the video and achieved state-of-the-art results on the Hollywood2 dataset. @cite_4 proposed to extract features from videos at multiple playback speeds to achieve speed invariance. However, with the rise of deep neural networks, these traditional methods have gradually fallen out of favor.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_7", "@cite_6", "@cite_0", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2126574503", "1920196880", "", "160239212", "1926645898", "", "1211924006", "2951552696", "2176316098" ], "abstract": [ "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.", "Most state-of-the-art action feature extractors involve differential operators, which act as highpass filters and tend to attenuate low frequency action information. This attenuation introduces bias to the resulting features and generates ill-conditioned feature matrices. 
The Gaussian Pyramid has been used as a feature enhancing technique that encodes scale-invariant characteristics into the feature space in an attempt to deal with this attenuation. However, at the core of the Gaussian Pyramid is a convolutional smoothing operation, which makes it incapable of generating new features at coarse scales. In order to address this problem, we propose a novel feature enhancing technique called Multi-skIp Feature Stacking (MIFS), which stacks features extracted using a family of differential filters parameterized with multiple time skips and encodes shift-invariance into the frequency space. MIFS compensates for information lost from using differential operators by recapturing information at coarse scales. This recaptured information allows us to match actions at different speeds and ranges of motion. We prove that MIFS enhances the learnability of differential-based features exponentially. The resulting feature matrices from MIFS have much smaller conditional numbers and variances than those from conventional methods. Experimental results show significantly improved performance on challenging action recognition and event detection tasks. Specifically, our method exceeds the state-of-the-arts on Hollywood2, UCF101 and UCF50 datasets and is comparable to state-of-the-arts on HMDB51 and Olympics Sports datasets. MIFS can also be used as a speedup strategy for feature extraction with minimal or no accuracy cost.", "", "The recent trend in action recognition is towards larger datasets, an increasing number of action classes and larger visual vocabularies. State-of-the-art human action classification in challenging video data is currently based on a bag-of-visual-words pipeline in which space-time features are aggregated globally to form a histogram. The strategies chosen to sample features and construct a visual vocabulary are critical to performance, in fact often dominating performance. 
In this work we provide a critical evaluation of various approaches to building a vocabulary and show that good practises do have a significant impact. By subsampling and partitioning features strategically, we are able to achieve state-of-the-art results on 5 major action recognition datasets using relatively small visual vocabularies.", "In this paper we present a method to capture video-wide temporal information for action recognition. We postulate that a function capable of ordering the frames of a video temporally (based on the appearance) captures well the evolution of the appearance within the video. We learn such ranking functions per video via a ranking machine and use the parameters of these as a new video representation. The proposed method is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We perform a large number of evaluations on datasets for generic action recognition (Hollywood2 and HMDB51), fine-grained actions (MPII- cooking activities) and gestures (Chalearn). Results show that the proposed method brings an absolute improvement of 7–10 , while being compatible with and complementary to further improvements in appearance and local motion based methods.", "", "Human action recognition in videos is a challenging problem with wide applications. State-of-the-art approaches often adopt the popular bag-of-features representation based on isolated local patches or temporal patch trajectories, where motion patterns like object relationships are mostly discarded. This paper proposes a simple representation specifically aimed at the modeling of such motion relationships. We adopt global and local reference points to characterize motion information, so that the final representation can be robust to camera movement. 
Our approach operates on top of visual codewords derived from local patch trajectories, and therefore does not require accurate foreground-background separation, which is typically a necessary step to model object relationships. Through an extensive experimental evaluation, we show that the proposed representation offers very competitive performance on challenging benchmark datasets, and combining it with the bag-of-features representation leads to substantial improvement. On Hollywood2, Olympic Sports, and HMDB51 datasets, we obtain 59.5 , 80.6 and 40.7 respectively, which are the best reported results to date.", "Video based action recognition is one of the important and challenging problems in computer vision research. Bag of Visual Words model (BoVW) with local features has become the most popular method and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from a set of local features, which is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Many efforts have been made in each step independently in different scenarios and their effect on action recognition is still unknown. Meanwhile, video data exhibits different views of visual pattern, such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Many feature fusion methods have been developed in other areas and their influence on action recognition has never been investigated before. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practice to produce a state-of-the-art action recognition system. 
Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid representation, by exploring the complementarity of different BoVW frameworks and local descriptors. Using this representation, we obtain the state-of-the-art on the three challenging datasets: HMDB51 (61.1 ), UCF50 (92.3 ), and UCF101 (87.9 ).", "We propose two complementary techniques to improve the performance of action recognition systems. The first technique addresses the temporal interval ambiguity of actions by learning a classifier score distribution over video subsequences. A classifier based on this score distribution is shown to be more effective than using the maximum or average scores. The second technique learns a classifier for the relative values of action scores, capturing the correlation and exclusion between action classes. Both techniques are simple and have efficient implementations using a Least-Squares SVM. We demonstrate that taken together the techniques exceed the state-of-the-art performance by a wide margin on challenging benchmarks for human actions." ] }
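The bag-of-visual-words pipeline that this record decomposes into five steps (feature extraction, pre-processing, codebook generation, encoding, pooling/normalization) can be sketched minimally. This is a generic illustration with a tiny hand-rolled k-means and random stand-in descriptors, not the cited systems:

```python
import numpy as np

def build_codebook(descriptors, k, iters=10, seed=0):
    """Tiny k-means to build a visual vocabulary (codebook step)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(descriptors, centers):
    """Quantize each local descriptor to its nearest visual word and
    pool into a normalized histogram (encoding + pooling steps)."""
    d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1),
                       minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 8))        # stand-in local descriptors
centers = build_codebook(train, k=4)
h = bovw_histogram(rng.normal(size=(50, 8)), centers)
```

In the cited systems the descriptors would be per-trajectory HOG/HOF/MBH features and the encoding step would typically be Fisher Vectors rather than hard assignment, but the pipeline shape is the same.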
1701.07368
2952441726
We investigate the problem of representing an entire video using CNN features for human action recognition. Currently, limited by GPU memory, we have not been able to feed a whole video into CNN RNNs for end-to-end learning. A common practice is to use sampled frames as inputs and video labels as supervision. One major problem of this popular approach is that the local samples may not contain the information indicated by global labels. To deal with this problem, we propose to treat the deep networks trained on local inputs as local feature extractors. After extracting local features, we aggregate them into global features and train another mapping function on the same training data to map the global features into global labels. We study a set of problems regarding this new type of local features such as how to aggregate them into global features. Experimental results on HMDB51 and UCF101 datasets show that, for these new local features, a simple maximum pooling on the sparsely sampled features lead to significant performance improvement.
At the time we wrote this paper, two similar works @cite_21 @cite_16 had been published on arXiv. Both propose a new feature aggregation method to pool the local neural network features into global video features. @cite_21 proposes a bilinear model to pool together the outputs of the last convolutional layers of the pre-trained networks and achieves state-of-the-art results on both the HMDB51 and UCF101 datasets. @cite_2 proposes a new quantization method that is similar to FV and achieves performance similar to @cite_21 . However, neither of them provides a detailed analysis of the local neural network features used. In this paper, we perform a more extensive analysis and show that a simple max pooling can achieve similar or better results compared to the much more complex feature aggregation methods in @cite_21 @cite_16 .
{ "cite_N": [ "@cite_21", "@cite_16", "@cite_2" ], "mid": [ "2950554226", "", "2951789542" ], "abstract": [ "The CNN-encoding of features from entire videos for the representation of human actions has rarely been addressed. Instead, CNN work has focused on approaches to fuse spatial and temporal networks, but these were typically limited to processing shorter sequences. We present a new video representation, called temporal linear encoding (TLE) and embedded inside of CNNs as a new layer, which captures the appearance and motion throughout entire videos. It encodes this aggregated information into a robust video feature representation, via end-to-end learning. Advantages of TLEs are: (a) they encode the entire video into a compact feature representation, learning the semantics and a discriminative feature space; (b) they are applicable to all kinds of networks like 2D and 3D CNNs for video classification; and (c) they model feature interactions in a more expressive way and without loss of information. We conduct experiments on two challenging human action datasets: HMDB51 and UCF101. The experiments show that TLE outperforms current state-of-the-art methods on both datasets.", "", "Deep convolutional neural networks (CNNs) have proven highly effective for visual recognition, where learning a universal representation from activations of convolutional layer plays a fundamental problem. In this paper, we present Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep architecture that quantizes the local activations of convolutional layer in a deep generative model, by training them in an end-to-end manner. To incorporate FV encoding strategy into deep generative models, we introduce Variational Auto-Encoder model, which steers a variational inference and learning in a neural network which can be straightforwardly optimized using standard stochastic gradient method. 
Different from the FV characterized by conventional generative models (e.g., Gaussian Mixture Model), which parsimoniously fit a discrete mixture model to the data distribution, the proposed FV-VAE is more flexible to represent the natural property of data for better generalization. Extensive experiments are conducted on three public datasets, i.e., UCF101, ActivityNet, and CUB-200-2011, in the context of video action recognition and fine-grained image classification, respectively. Superior results are reported when compared to state-of-the-art representations. Most remarkably, our proposed FV-VAE achieves to-date the best published accuracy of 94.2% on UCF101." ] }
1701.07393
2584325720
Acquiring the 3D geometry of real-world objects has various applications in 3D digitization, such as navigation and content generation in virtual environments. Images remain one of the most popular media for such visual tasks due to their simplicity of acquisition. Traditional image-based 3D reconstruction approaches heavily exploit point-to-point correspondence among multiple images to estimate camera motion and 3D geometry. Establishing point-to-point correspondence lies at the center of the 3D reconstruction pipeline, which, however, is easily prone to errors. In this paper, we propose an optimization framework which traces image points using a novel structure-guided dynamic tracking algorithm and estimates both the camera motion and a 3D structure model by enforcing a set of planar constraints. The key to our method is a structure model represented as a set of planes and their arrangements. Constraints derived from the structure model are used both in the correspondence establishment stage and the bundle adjustment stage of our reconstruction pipeline. Experiments show that our algorithm can effectively localize structure correspondence across dense image frames while faithfully reconstructing the camera motion and the underlying structured 3D model.
Many structure-based modeling approaches assume the presence of structural regularities, such as the Manhattan-world assumption @cite_18 , the cuboid assumption @cite_22 , CSG representations @cite_4 , symmetry @cite_12 @cite_16 @cite_7 , and repetitions @cite_27 , which are exploited to regularize the reconstruction of the 3D object and to better interpret the scene.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_7", "@cite_27", "@cite_16", "@cite_12" ], "mid": [ "", "2007492000", "2005496605", "2106551213", "2072637632", "", "2110833274" ], "abstract": [ "", "Virtual exploration tools for large indoor environments (e.g. museums) have so far been limited to either blueprint-style 2D maps that lack photo-realistic views of scenes, or ground-level image-to-image transitions, which are immersive but ill-suited for navigation. On the other hand, photorealistic aerial maps would be a useful navigational guide for large indoor environments, but it is impossible to directly acquire photographs covering a large indoor environment from aerial viewpoints. This paper presents a 3D reconstruction and visualization system for automatically producing clean and well-regularized texture-mapped 3D models for large indoor scenes, from ground-level photographs and 3D laser points. The key component is a new algorithm called \"inverse constructive solid geometry (CSG)\" for reconstructing a scene with a CSG representation consisting of volumetric primitives, which imposes powerful regularization constraints. We also propose several novel techniques to adjust the 3D model to make it suitable for rendering the 3D maps from aerial viewpoints. The visualization system enables users to easily browse a large-scale indoor environment from a bird's-eye view, locate specific room interiors, fly into a place of interest, view immersive ground-level panorama views, and zoom out again, all with seamless 3D transitions. We demonstrate our system on various museums, including the Metropolitan Museum of Art in New York City--one of the largest art galleries in the world.", "Objects occupy physical space and obey physical laws. To truly understand a scene, we must reason about the space that objects in it occupy, and how each object is supported stably by the others.
In other words, we seek to understand which objects would, if moved, cause other objects to fall. This 3D volumetric reasoning is important for many scene understanding tasks, ranging from segmentation of objects to perception of a rich 3D, physically well-founded interpretation of the scene. In this paper, we propose a new algorithm to parse a single RGB-D image with 3D block units while jointly reasoning about the segments, volumes, supporting relationships, and object stability. Our algorithm is based on the intuition that a good 3D representation of the scene is one that fits the depth data well, and is a stable, self-supporting arrangement of objects (i.e., one that does not topple). We design an energy function for representing the quality of the block representation based on these properties. Our algorithm fits 3D blocks to the depth values corresponding to image segments, and iteratively optimizes the energy function. Our proposed algorithm is the first to consider stability of objects in complex arrangements for reasoning about the underlying structure of the scene. Experimental results show that our stability-reasoning framework improves RGB-D segmentation and scene volumetric representation.", "We present a method to recover a 3D texture-mapped architecture model from a single image. Both single image based modeling and architecture modeling are challenging problems. We handle these difficulties by employing constraints derived from shape symmetries, which are prevalent in architecture. We first present a novel algorithm to calibrate the camera from a single image by exploiting symmetry. Then a set of 3D points is recovered according to the calibration and the underlying symmetry. With these reconstructed points, the user interactively marks out components of the architecture structure, whose shapes and positions are automatically determined according to the 3D points.
Lastly, we texture the 3D model according to the input image, and we enhance the texture quality at those foreshortened and occluded regions according to their symmetric counterparts. The modeling process requires only a few minutes interaction. Multiple examples are provided to demonstrate the presented method.", "Repeated structures are ubiquitous in urban facades. Such repetitions lead to ambiguity in establishing correspondences across sets of unordered images. A decoupled structure-from-motion reconstruction followed by symmetry detection often produces errors: outputs are either noisy and incomplete, or even worse, appear to be valid but actually have a wrong number of repeated elements. We present an optimization framework for extracting repeated elements in images of urban facades, while simultaneously calibrating the input images and recovering the 3D scene geometry using a graph-based global analysis. We evaluate the robustness of the proposed scheme on a range of challenging examples containing widespread repetitions and nondistinctive features. These image sets are common but cannot be handled well with state-of-the-art methods. We show that the recovered symmetry information along with the 3D geometry enables a range of novel image editing operations that maintain consistency across the images.", "", "This paper proposes a new approach to 3D reconstruction of piecewise planar objects based on two image regularities, connectivity and perspective symmetry. First, we formulate the whole shape of the objects in an image as a shape vector consisting of the normals of all the faces of the objects. Then, we impose several linear constraints on the shape vector using connectivity and perspective symmetry of the objects. Finally, we obtain a closed-form solution to the 3D reconstruction problem. We also develop an efficient algorithm to detect a face of perspective symmetry. 
Experimental results on real images are shown to demonstrate the effectiveness of our approach." ] }
1701.07393
2584325720
Acquiring the 3D geometry of real-world objects has various applications in 3D digitization, such as navigation and content generation in virtual environments. Images remain one of the most popular media for such visual tasks due to their simplicity of acquisition. Traditional image-based 3D reconstruction approaches heavily exploit point-to-point correspondence among multiple images to estimate camera motion and 3D geometry. Establishing point-to-point correspondence lies at the center of the 3D reconstruction pipeline, which, however, is easily prone to errors. In this paper, we propose an optimization framework which traces image points using a novel structure-guided dynamic tracking algorithm and estimates both the camera motion and a 3D structure model by enforcing a set of planar constraints. The key to our method is a structure model represented as a set of planes and their arrangements. Constraints derived from the structure model are used both in the correspondence establishment stage and the bundle adjustment stage of our reconstruction pipeline. Experiments show that our algorithm can effectively localize structure correspondence across dense image frames while faithfully reconstructing the camera motion and the underlying structured 3D model.
Given known constraints on the perspective projection, 3D information can be recovered from a single image @cite_13 @cite_7 @cite_8 by calculating surface normals. However, a single image has a very limited field of view and cannot handle occlusion without an additional symmetry assumption @cite_7 @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_13", "@cite_7", "@cite_8" ], "mid": [ "", "2011639889", "2106551213", "2040939933" ], "abstract": [ "", "Recovering 3D geometry from a single view of an object is an important and challenging problem in computer vision. Previous methods mainly focus on one specific class of objects without large topological changes, such as cars, faces, or human bodies. In this paper, we propose a novel single view reconstruction algorithm for symmetric piece-wise planar objects that are not restricted to some object classes. Symmetry is ubiquitous in manmade and natural objects and provides rich information for 3D reconstruction. Given a single view of a symmetric piecewise planar object, we first find out all the symmetric line pairs. The geometric properties of symmetric objects are used to narrow down the searching space. Then, based on the symmetric lines, a depth map is recovered through a Markov random field. Experimental results show that our algorithm can efficiently recover the 3D shapes of different objects with significant topological variations.", "We present a method to recover a 3D texture-mapped architecture model from a single image. Both single image based modeling and architecture modeling are challenging problems. We handle these difficulties by employing constraints derived from shape symmetries, which are prevalent in architecture. We first present a novel algorithm to calibrate the camera from a single image by exploiting symmetry. Then a set of 3D points is recovered according to the calibration and the underlying symmetry. With these reconstructed points, the user interactively marks out components of the architecture structure, whose shapes and positions are automatically determined according to the 3D points. Lastly, we texture the 3D model according to the input image, and we enhance the texture quality at those foreshortened and occluded regions according to their symmetric counterparts. 
The modeling process requires only a few minutes of interaction. Multiple examples are provided to demonstrate the presented method.", "We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided." ] }
1701.07393
2584325720
Acquiring the 3D geometry of real-world objects has various applications in 3D digitization, such as navigation and content generation in virtual environments. Images remain one of the most popular media for such visual tasks due to their simplicity of acquisition. Traditional image-based 3D reconstruction approaches heavily exploit point-to-point correspondence among multiple images to estimate camera motion and 3D geometry. Establishing point-to-point correspondence lies at the center of the 3D reconstruction pipeline, which, however, is easily prone to errors. In this paper, we propose an optimization framework which traces image points using a novel structure-guided dynamic tracking algorithm and estimates both the camera motion and a 3D structure model by enforcing a set of planar constraints. The key to our method is a structure model represented as a set of planes and their arrangements. Constraints derived from the structure model are used both in the correspondence establishment stage and the bundle adjustment stage of our reconstruction pipeline. Experiments show that our algorithm can effectively localize structure correspondence across dense image frames while faithfully reconstructing the camera motion and the underlying structured 3D model.
Mura et al. @cite_2 use clustered 3D range scans to create structured 3D models of typical interior environments, recognizing the structure of individual rooms and corridors.
{ "cite_N": [ "@cite_2" ], "mid": [ "2069693438" ], "abstract": [ "We present a robust approach for reconstructing the main architectural structure of complex indoor environments given a set of cluttered 3D input range scans. Our method uses an efficient occlusion-aware process to extract planar patches as candidate walls, separating them from clutter and coping with missing data, and automatically extracts the individual rooms that compose the environment by applying a diffusion process on the space partitioning induced by the candidate walls. This diffusion process, which has a natural interpretation in terms of heat propagation, makes our method robust to artifacts and other imperfections that occur in typical scanned data of interiors. For each room, our algorithm reconstructs an accurate polyhedral model by applying methods from robust statistics. We demonstrate the validity of our approach by evaluating it on both synthetic models and real-world 3D scans of indoor environments. Highlights: We reconstruct an architectural model from a laser-scanned indoor environment. Our algorithm can handle complex and highly concave room arrangements. It automatically detects all rooms without knowing the number of rooms in advance. Our pipeline can cope with occlusions and clutter using a robust heat diffusion process. An evaluation on artificial and real-world data shows the accuracy of the method." ] }
1701.07393
2584325720
Acquiring the 3D geometry of real-world objects has various applications in 3D digitization, such as navigation and content generation in virtual environments. Images remain one of the most popular media for such visual tasks due to their simplicity of acquisition. Traditional image-based 3D reconstruction approaches heavily exploit point-to-point correspondence among multiple images to estimate camera motion and 3D geometry. Establishing point-to-point correspondence lies at the center of the 3D reconstruction pipeline, which, however, is easily prone to errors. In this paper, we propose an optimization framework which traces image points using a novel structure-guided dynamic tracking algorithm and estimates both the camera motion and a 3D structure model by enforcing a set of planar constraints. The key to our method is a structure model represented as a set of planes and their arrangements. Constraints derived from the structure model are used both in the correspondence establishment stage and the bundle adjustment stage of our reconstruction pipeline. Experiments show that our algorithm can effectively localize structure correspondence across dense image frames while faithfully reconstructing the camera motion and the underlying structured 3D model.
By learning the unique features of different types of surfaces and the contextual relationships between them, @cite_1 propose a method to automatically convert 3D point data from a laser scanner into a compact, semantically rich information model. From panorama RGBD images, @cite_28 use a graph to represent the internal structures and reconstruct an indoor scene as a structured model.
{ "cite_N": [ "@cite_28", "@cite_1" ], "mid": [ "2201056710", "2033552406" ], "abstract": [ "This paper presents a novel 3D modeling framework that reconstructs an indoor scene as a structured model from panorama RGBD images. A scene geometry is represented as a graph, where nodes correspond to structural elements such as rooms, walls, and objects. The approach devises a structure grammar that defines how a scene graph can be manipulated. The grammar then drives a principled new reconstruction algorithm, where the grammar rules are sequentially applied to recover a structured model. The paper also proposes a new room segmentation algorithm and an offset-map reconstruction algorithm that are used in the framework and can enforce architectural shape priors far beyond existing state-of-the-art. The structured scene representation enables a variety of novel applications, ranging from indoor scene visualization, automated floorplan generation, Inverse-CAD, and more. We have tested our framework and algorithms on six synthetic and five real datasets with qualitative and quantitative evaluations. The source code and the data are available at the project website [15].", "In the Architecture, Engineering, and Construction (AEC) domain, semantically rich 3D information models are increasingly used throughout a facility's life cycle for diverse applications, such as planning renovations, space usage planning, and managing building maintenance. These models, which are known as building information models (BIMs), are often constructed using dense, three dimensional (3D) point measurements obtained from laser scanners. Laser scanners can rapidly capture the “as-is” conditions of a facility, which may differ significantly from the design drawings. Currently, the conversion from laser scan data to BIM is primarily a manual operation, and it is labor-intensive and can be error-prone.
This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a facility into a compact, semantically rich information model. Our algorithm is capable of identifying and modeling the main visible structural components of an indoor environment (walls, floors, ceilings, windows, and doorways) despite the presence of significant clutter and occlusion, which occur frequently in natural indoor environments. Our method begins by extracting planar patches from a voxelized version of the input point cloud. The algorithm learns the unique features of different types of surfaces and the contextual relationships between them and uses this knowledge to automatically label patches as walls, ceilings, or floors. Then, we perform a detailed analysis of the recognized surfaces to locate openings, such as windows and doorways. This process uses visibility reasoning to fuse measurements from different scan locations and to identify occluded regions and holes in the surface. Next, we use a learning algorithm to intelligently estimate the shape of window and doorway openings even when partially occluded. Finally, occluded surface regions are filled in using a 3D inpainting algorithm. We evaluated the method on a large, highly cluttered data set of a building with forty separate rooms." ] }