id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1202.5599 | On the Ingleton-Violations in Finite Groups | Given $n$ discrete random variables, their entropy vector is the $2^n-1$ dimensional vector obtained from the joint entropies of all non-empty subsets of the random variables. It is well known that there is a one-to-one correspondence between such an entropy vector and a certain group-characterizable vector obtained from a finite group and $n$ of its subgroups [3]. This correspondence may be useful for characterizing the space of entropic vectors and for designing network codes. If one restricts attention to abelian groups then not all entropy vectors can be obtained. This is an explanation for the fact shown by Dougherty et al [4] that linear network codes cannot achieve capacity in general network coding problems. All abelian group-characterizable vectors, and by fiat all entropy vectors generated by linear network codes, satisfy a linear inequality called the Ingleton inequality. It is therefore of interest to identify groups that violate the Ingleton inequality. In this paper, we study the problem of finding nonabelian finite groups that yield characterizable vectors which violate the Ingleton inequality. Using a refined computer search, we find the symmetric group $S_5$ to be the smallest group that violates the Ingleton inequality. Careful study of the structure of this group, and its subgroups, reveals that it belongs to the Ingleton-violating family $PGL(2,q)$ with a prime power $q \geq 5$, i.e., the projective group of $2\times 2$ nonsingular matrices with entries in $\mathbb{F}_q$. We further interpret this family using the theory of group actions. We also extend the construction to more general groups such as $PGL(n,q)$ and $GL(n,q)$. The families of groups identified here are therefore good candidates for constructing network codes more powerful than linear network codes, and we discuss some considerations for constructing such group network codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 14,568 |
2302.08631 | Practical Contextual Bandits with Feedback Graphs | While contextual bandits have a mature theory, effectively leveraging different feedback patterns to enhance the pace of learning remains unclear. Bandits with feedback graphs, which interpolate between the full information and bandit regimes, provide a promising framework to mitigate the statistical complexity of learning. In this paper, we propose and analyze an approach to contextual bandits with feedback graphs based upon reduction to regression. The resulting algorithms are computationally practical and achieve established minimax rates, thereby reducing the statistical complexity in real-world applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 346,113 |
2407.20095 | Crafting Generative Art through Genetic Improvement: Managing Creative Outputs in Diverse Fitness Landscapes | Generative art is a rules-driven approach to creating artistic outputs in various mediums. For example, a fluid simulation can govern the flow of colored pixels across a digital display, or a rectangle placement algorithm can yield a Mondrian-style painting. Previously, we investigated how genetic improvement, a sub-field of genetic programming, can automatically create and optimize generative art drawing programs. One challenge of applying genetic improvement to generative art is defining fitness functions and their interaction in a many-objective evolutionary algorithm such as Lexicase selection. Here, we assess the impact of each fitness function in terms of their individual effects on generated images, characteristics of generated programs, and the impact of bloat on this specific domain. Furthermore, we have added an additional fitness function that uses a classifier for mimicking a human's assessment as to whether an output is considered as "art." This classifier is trained on a dataset of input images resembling the glitch art aesthetic that we aim to create. Our experimental results show that with few fitness functions, individual generative techniques sweep across populations. Moreover, we found that compositions tended to be driven by one technique with our current fitness functions. Lastly, we show that our classifier is best suited for filtering out noisy images, ideally leading towards more outputs relevant to user preference. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 477,044 |
2008.04481 | Transformer with Bidirectional Decoder for Speech Recognition | Attention-based models have made tremendous progress on end-to-end automatic speech recognition (ASR) recently. However, the conventional transformer-based approaches usually generate the sequence results token by token from left to right, leaving the right-to-left contexts unexploited. In this work, we introduce a bidirectional speech transformer to utilize the different directional contexts simultaneously. Specifically, the outputs of our proposed transformer include a left-to-right target and a right-to-left target. In the inference stage, we use the introduced bidirectional beam search method, which can not only generate left-to-right candidates but also generate right-to-left candidates, and determine the best hypothesis by the score. To demonstrate our proposed speech transformer with a bidirectional decoder (STBD), we conduct extensive experiments on the AISHELL-1 dataset. The results of experiments show that STBD achieves a 3.6\% relative CER reduction (CERR) over the unidirectional speech transformer baseline. Besides, the strongest model in this paper, called STBD-Big, can achieve 6.64\% CER on the test set, without language model rescoring and any extra data augmentation strategies. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 191,239 |
2501.07069 | Hierarchical Superpixel Segmentation via Structural Information Theory | Superpixel segmentation is a foundation for many higher-level computer vision tasks, such as image segmentation, object recognition, and scene understanding. Existing graph-based superpixel segmentation methods typically concentrate on the relationships between a given pixel and its directly adjacent pixels while overlooking the influence of non-adjacent pixels. These approaches do not fully leverage the global information in the graph, leading to suboptimal segmentation quality. To address this limitation, we present SIT-HSS, a hierarchical superpixel segmentation method based on structural information theory. Specifically, we first design a novel graph construction strategy that incrementally explores the pixel neighborhood to add edges based on 1-dimensional structural entropy (1D SE). This strategy maximizes the retention of graph information while avoiding an overly complex graph structure. Then, we design a new 2D SE-guided hierarchical graph partitioning method, which iteratively merges pixel clusters layer by layer to reduce the graph's 2D SE until a predefined segmentation scale is achieved. Experimental results on three benchmark datasets demonstrate that the SIT-HSS performs better than state-of-the-art unsupervised superpixel segmentation algorithms. The source code is available at \url{https://github.com/SELGroup/SIT-HSS}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 524,252 |
1310.7048 | Scaling SVM and Least Absolute Deviations via Exact Data Reduction | The support vector machine (SVM) is a widely used method for classification. Although many efforts have been devoted to develop efficient solvers, it remains challenging to apply SVM to large-scale problems. A nice property of SVM is that the non-support vectors have no effect on the resulting classifier. Motivated by this observation, we present fast and efficient screening rules to discard non-support vectors by analyzing the dual problem of SVM via variational inequalities (DVI). As a result, the number of data instances to be entered into the optimization can be substantially reduced. Some appealing features of our screening method are: (1) DVI is safe in the sense that the vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data set needs to be scanned only once to run the screening, whose computational cost is negligible compared to that of solving the SVM problem; (3) DVI is independent of the solvers and can be integrated with any existing efficient solvers. We also show that the DVI technique can be extended to detect non-support vectors in the least absolute deviations regression (LAD). To the best of our knowledge, there are currently no screening methods for LAD. We have evaluated DVI on both synthetic and real data sets. Experiments indicate that DVI significantly outperforms the existing state-of-the-art screening rules for SVM, and is very effective in discarding non-support vectors for LAD. The speedup gained by DVI rules can be up to two orders of magnitude. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 28,004 |
2209.05139 | Automated MIMO Motion Feedforward Control: Efficient Learning through Data-Driven Gradients via Adjoint Experiments and Stochastic Approximation | Parameterized feedforward control is at the basis of many successful control applications with varying references. The aim of this paper is to develop an efficient data-driven approach to learn the feedforward parameters for MIMO systems. To this end, a cost criterion is minimized using a stochastic gradient descent algorithm, in which both the search direction and step size are determined through system experiments. In particular, the search direction is chosen as an unbiased estimate of the gradient which is obtained from a single experiment, regardless of the size of the MIMO system. The approach is illustrated using a simulation example, in which it is shown to be superior to a deterministic method in terms of convergence speed and thus experimental cost. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 317,006 |
2108.05722 | MT-ORL: Multi-Task Occlusion Relationship Learning | Retrieving occlusion relations among objects in a single image is challenging due to the sparsity of boundaries in the image. We observe two key issues in existing works: firstly, the lack of an architecture which can exploit the limited amount of coupling in the decoder stage between the two subtasks, namely occlusion boundary extraction and occlusion orientation prediction, and secondly, the improper representation of occlusion orientation. In this paper, we propose a novel architecture called Occlusion-shared and Path-separated Network (OPNet), which solves the first issue by exploiting rich occlusion cues in shared high-level features and structured spatial information in task-specific low-level features. We then design a simple but effective orthogonal occlusion representation (OOR) to tackle the second issue. Our method surpasses the state-of-the-art methods by 6.1%/8.3% Boundary-AP and 6.5%/10% Orientation-AP on standard PIOD/BSDS ownership datasets. Code is available at https://github.com/fengpanhe/MT-ORL. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 250,396 |
2103.02768 | Learning to Predict with Supporting Evidence: Applications to Clinical Risk Prediction | The impact of machine learning models on healthcare will depend on the degree of trust that healthcare professionals place in the predictions made by these models. In this paper, we present a method to provide people with clinical expertise with domain-relevant evidence about why a prediction should be trusted. We first design a probabilistic model that relates meaningful latent concepts to prediction targets and observed data. Inference of latent variables in this model corresponds to both making a prediction and providing supporting evidence for that prediction. We present a two-step process to efficiently approximate inference: (i) estimating model parameters using variational learning, and (ii) approximating maximum a posteriori estimation of latent variables in the model using a neural network, trained with an objective derived from the probabilistic model. We demonstrate the method on the task of predicting mortality risk for patients with cardiovascular disease. Specifically, using electrocardiogram and tabular data as input, we show that our approach provides appropriate domain-relevant supporting evidence for accurate predictions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 223,059 |
2410.02511 | Choices are More Important than Efforts: LLM Enables Efficient Multi-Agent Exploration | With expansive state-action spaces, efficient multi-agent exploration remains a longstanding challenge in reinforcement learning. Although pursuing novelty, diversity, or uncertainty attracts increasing attention, the redundant effort brought by exploration without proper guidance poses a practical issue for the community. This paper introduces a systematic approach, termed LEMAE, choosing to channel informative task-relevant guidance from a knowledgeable Large Language Model (LLM) for Efficient Multi-Agent Exploration. Specifically, we ground linguistic knowledge from the LLM into symbolic key states that are critical for task fulfillment, in a discriminative manner at low LLM inference costs. To unleash the power of key states, we design a Subspace-based Hindsight Intrinsic Reward (SHIR) to guide agents toward key states by increasing reward density. Additionally, we build the Key State Memory Tree (KSMT) to track transitions between key states in a specific task for organized exploration. Benefiting from diminishing redundant explorations, LEMAE outperforms existing SOTA approaches on challenging benchmarks (e.g., SMAC and MPE) by a large margin, achieving a 10x acceleration in certain scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 494,308 |
2404.14322 | A Novel Approach to Chest X-ray Lung Segmentation Using U-net and Modified Convolutional Block Attention Module | Lung segmentation in chest X-ray images is of paramount importance as it plays a crucial role in the diagnosis and treatment of various lung diseases. This paper presents a novel approach for lung segmentation in chest X-ray images by integrating U-net with attention mechanisms. The proposed method enhances the U-net architecture by incorporating a Convolutional Block Attention Module (CBAM), which unifies three distinct attention mechanisms: channel attention, spatial attention, and pixel attention. The channel attention mechanism enables the model to concentrate on the most informative features across various channels. The spatial attention mechanism enhances the model's precision in localization by focusing on significant spatial locations. Lastly, the pixel attention mechanism empowers the model to focus on individual pixels, further refining the model's focus and thereby improving the accuracy of segmentation. The adoption of the proposed CBAM in conjunction with the U-net architecture marks a significant advancement in the field of medical imaging, with potential implications for improving diagnostic precision and patient outcomes. The efficacy of this method is validated against contemporary state-of-the-art techniques, showcasing its superiority in segmentation performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 448,636 |
1710.08590 | Iterative Receivers for Downlink MIMO-SCMA: Message Passing and Distributed Cooperative Detection | The rapid development of mobile communications requires ever higher spectral efficiency. Non-orthogonal multiple access (NOMA) has emerged as a promising technology to further increase the access efficiency of wireless networks. Amongst several NOMA schemes, sparse code multiple access (SCMA) has been shown to achieve better performance. In this paper, we consider a downlink MIMO-SCMA system over frequency selective fading channels. For optimal detection, the complexity increases exponentially with the product of the number of users, the number of antennas and the channel length. To tackle this challenge, we propose near optimal low-complexity iterative receivers based on factor graphs. By introducing auxiliary variables, a stretched factor graph is constructed and a hybrid belief propagation (BP) and expectation propagation (EP) receiver, named `Stretch-BP-EP', is proposed. Considering the convergence problem of the BP algorithm on loopy factor graphs, we convexify the Bethe free energy and propose a convergence-guaranteed BP-EP receiver, named `Conv-BP-EP'. We further consider cooperative networks and propose two distributed cooperative detection schemes to exploit the diversity gain, namely, a belief consensus-based algorithm and a Bregman alternating direction method of multipliers (ADMM)-based method. Simulation results verify the superior performance of the proposed Conv-BP-EP receiver compared with other methods. The two proposed distributed cooperative detection schemes can improve the bit error rate performance by exploiting the diversity gain. Moreover, the Bregman ADMM method outperforms the belief consensus-based algorithm in noisy inter-user links. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 83,100 |
2404.11665 | Exploring DNN Robustness Against Adversarial Attacks Using Approximate Multipliers | Deep Neural Networks (DNNs) have advanced in many real-world applications, such as healthcare and autonomous driving. However, their high computational complexity and vulnerability to adversarial attacks are ongoing challenges. In this letter, approximate multipliers are used to explore DNN robustness improvement against adversarial attacks. By uniformly replacing accurate multipliers with state-of-the-art approximate ones in DNN layer models, we explore DNN robustness against various adversarial attacks in a feasible time. Results show up to a 7% accuracy drop due to approximations when no attack is present, while robust accuracy improves by up to 10% when attacks are applied. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 447,564 |
2210.01662 | DGORL: Distributed Graph Optimization based Relative Localization of Multi-Robot Systems | An optimization problem is at the heart of many robotics estimation, planning, and optimal control problems. Several attempts have been made at model-based multi-robot localization, and few have formulated the multi-robot collaborative localization problem as a factor graph problem to solve through graph optimization. Here, the optimization objective is to minimize the errors of estimating the relative location estimates in a distributed manner. Our novel graph-theoretic approach to solving this problem consists of three major components: (connectivity) graph formation, expansion through a transition model, and optimization of relative poses. First, we estimate the relative pose-connectivity graph using the received signal strength between the connected robots, indicating relative ranges between them. Then, we apply a motion model to formulate graph expansion and optimize them using g$^2$o graph optimization as a distributed solver over dynamic networks. Finally, we theoretically analyze the algorithm and numerically validate its optimality and performance through extensive simulations. The results demonstrate the practicality of the proposed solution compared to a state-of-the-art algorithm for collaborative localization in multi-robot systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 321,345 |
1811.08021 | CM Sequence based Trajectory Modeling with Destination | In some problems there is information about the destination of a moving object. An example is an airliner flying from an origin to a destination. Such problems have three main components: an origin, a destination, and motion in between. To emphasize that the motion trajectories end up at the destination, we call them \textit{destination-directed trajectories}. The Markov sequence is not flexible enough to model such trajectories. Given an initial density and an evolution law, the future of a Markov sequence is determined probabilistically. One class of conditionally Markov (CM) sequences, called the $CM_L$ sequence (including the Markov sequence as a special case), has the following main components: a joint endpoint density (i.e., an initial density and a final density conditioned on the initial) and a Markov-like evolution law. This paper proposes using the $CM_L$ sequence for modeling destination-directed trajectories. It is demonstrated how the $CM_L$ sequence enjoys several desirable properties for destination-directed trajectory modeling. Some simulations of trajectory modeling and prediction are presented for illustration. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 113,926 |
1202.3735 | Noisy-OR Models with Latent Confounding | Given a set of experiments in which varying subsets of observed variables are subject to intervention, we consider the problem of identifiability of causal models exhibiting latent confounding. While identifiability is trivial when each experiment intervenes on a large number of variables, the situation is more complicated when only one or a few variables are subject to intervention per experiment. For linear causal models with latent variables Hyttinen et al. (2010) gave precise conditions for when such data are sufficient to identify the full model. While their result cannot be extended to discrete-valued variables with arbitrary cause-effect relationships, we show that a similar result can be obtained for the class of causal models whose conditional probability distributions are restricted to a `noisy-OR' parameterization. We further show that identification is preserved under an extension of the model that allows for negative influences, and present learning algorithms that we test for accuracy, scalability and robustness. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 14,407 |
1712.09376 | Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors | We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we rely on a result showing that data-dependent priors obtained by stochastic gradient Langevin dynamics (SGLD) yield valid PAC-Bayes bounds provided the target distribution of SGLD is {\epsilon}-differentially private. We observe that test error on MNIST and CIFAR10 falls within the (empirically nonvacuous) risk bounds computed under the assumption that SGLD reaches stationarity. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 87,342 |
2402.04420 | Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways | As machine learning applications proliferate, we need an understanding of their potential for harm. However, current fairness metrics are rarely grounded in human psychological experiences of harm. Drawing on the social psychology of stereotypes, we use a case study of gender stereotypes in image search to examine how people react to machine learning errors. First, we use survey studies to show that not all machine learning errors reflect stereotypes nor are equally harmful. Then, in experimental studies we randomly expose participants to stereotype-reinforcing, -violating, and -neutral machine learning errors. We find stereotype-reinforcing errors induce more experientially (i.e., subjectively) harmful experiences, while having minimal changes to cognitive beliefs, attitudes, or behaviors. This experiential harm impacts women more than men. However, certain stereotype-violating errors are more experientially harmful for men, potentially due to perceived threats to masculinity. We conclude that harm cannot be the sole guide in fairness mitigation, and propose a nuanced perspective depending on who is experiencing what harm and why. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 427,443 |
2205.14224 | Will Bilevel Optimizers Benefit from Loops | Bilevel optimization has arisen as a powerful tool for solving a variety of machine learning problems. Two current popular bilevel optimizers AID-BiO and ITD-BiO naturally involve solving one or two sub-problems, and consequently, whether we solve these problems with loops (that take many iterations) or without loops (that take only a few iterations) can significantly affect the overall computational efficiency. Existing studies in the literature cover only some of those implementation choices, and the complexity bounds available are not refined enough to enable rigorous comparison among different implementations. In this paper, we first establish unified convergence analysis for both AID-BiO and ITD-BiO that are applicable to all implementation choices of loops. We then specialize our results to characterize the computational complexity for all implementations, which enable an explicit comparison among them. Our result indicates that for AID-BiO, the loop for estimating the optimal point of the inner function is beneficial for overall efficiency, although it causes higher complexity for each update step, and the loop for approximating the outer-level Hessian-inverse-vector product reduces the gradient complexity. For ITD-BiO, the two loops always coexist, and our convergence upper and lower bounds show that such loops are necessary to guarantee a vanishing convergence error, whereas the no-loop scheme suffers from an unavoidable non-vanishing convergence error. Our numerical experiments further corroborate our theoretical results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 299,260 |
2303.02604 | Two-Stage Grasping: A New Bin Picking Framework for Small Objects | This paper proposes a novel bin picking framework, two-stage grasping, aiming at precise grasping of cluttered small objects. Object density estimation and rough grasping are conducted in the first stage. Fine segmentation, detection, grasping, and pushing are performed in the second stage. A small object bin picking system has been realized to exhibit the concept of two-stage grasping. Experiments have shown the effectiveness of the proposed framework. Unlike traditional bin picking methods focusing on vision-based grasping planning using classic frameworks, the challenges of picking cluttered small objects can be solved by the proposed new framework with simple vision detection and planning. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 349,422 |
1907.00431 | Influence measures in subnetworks using vertex centrality | This work deals with the issue of assessing the influence of a node in the entire network and in the subnetwork to which it belongs as well, adapting the classical idea of vertex centrality. We provide a general definition of relative vertex centrality measure with respect to the classical one, referred to the whole network. Specifically, we give a decomposition of the relative centrality measure by including also the relative influence of the single node with respect to a given subgraph containing it. The proposed measure of relative centrality is tested in the empirical networks generated by collecting assets of the $S\&P$ 100, focusing on two specific centrality indices: betweenness and eigenvector centrality. The analysis is performed in a time perspective, capturing the assets influence, with respect to the characteristics of the analysed measures, in both the entire network and the specific sectors to which the assets belong. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 137,039 |
0811.0726 | Improved Capacity Scaling in Wireless Networks With Infrastructure | This paper analyzes the impact and benefits of infrastructure support in improving the throughput scaling in networks of $n$ randomly located wireless nodes. The infrastructure uses multi-antenna base stations (BSs), in which the number of BSs and the number of antennas at each BS can scale at arbitrary rates relative to $n$. Under the model, capacity scaling laws are analyzed for both dense and extended networks. Two BS-based routing schemes are first introduced in this study: an infrastructure-supported single-hop (ISH) routing protocol with multiple-access uplink and broadcast downlink and an infrastructure-supported multi-hop (IMH) routing protocol. Then, their achievable throughput scalings are analyzed. These schemes are compared against two conventional schemes without BSs: the multi-hop (MH) transmission and hierarchical cooperation (HC) schemes. It is shown that a linear throughput scaling is achieved in dense networks, as in the case without help of BSs. In contrast, the proposed BS-based routing schemes can, under realistic network conditions, improve the throughput scaling significantly in extended networks. The gain comes from the following advantages of these BS-based protocols. First, more nodes can transmit simultaneously in the proposed scheme than in the MH scheme if the number of BSs and the number of antennas are large enough. Second, by improving the long-distance signal-to-noise ratio (SNR), the received signal power can be larger than that of the HC, enabling a better throughput scaling under extended networks. Furthermore, by deriving the corresponding information-theoretic cut-set upper bounds, it is shown under extended networks that a combination of four schemes IMH, ISH, MH, and HC is order-optimal in all operating regimes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,631 |
2404.17023 | Out-of-Distribution Detection using Maximum Entropy Coding | Given a default distribution $P$ and a set of test data $x^M=\{x_1,x_2,\ldots,x_M\}$ this paper seeks to answer the question if it was likely that $x^M$ was generated by $P$. For discrete distributions, the definitive answer is in principle given by Kolmogorov-Martin-L\"{o}f randomness. In this paper we seek to generalize this to continuous distributions. We consider a set of statistics $T_1(x^M),T_2(x^M),\ldots$. To each statistic we associate its maximum entropy distribution and with this a universal source coder. The maximum entropy distributions are subsequently combined to give a total codelength, which is compared with $-\log P(x^M)$. We show that this approach satisfies a number of theoretical properties. For real world data $P$ usually is unknown. We transform data into a standard distribution in the latent space using a bidirectional generative network and use maximum entropy coding there. We compare the resulting method to other methods that also use generative neural networks to detect anomalies. In most cases, our results show better performance. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 449,704
2112.13099 | Fine-Tuning Data Structures for Analytical Query Processing | We introduce a framework for automatically choosing data structures to support efficient computation of analytical workloads. Our contributions are twofold. First, we introduce a novel low-level intermediate language that can express the algorithms behind various query processing paradigms such as classical joins, groupjoin, and in-database machine learning engines. This language is designed around the notion of dictionaries, and allows for a more fine-grained choice of its low-level implementation. Second, the cost model for alternative implementations is automatically inferred by combining machine learning and program reasoning. The dictionary cost model is learned using a regression model trained over the profiling dataset of dictionary operations on a given hardware architecture. The program cost model is inferred using static program analysis. Our experimental results show the effectiveness of the trained cost model on micro benchmarks. Furthermore, we show that the performance of the code generated by our framework either outperforms or is on par with the state-of-the-art analytical query engines and a recent in-database machine learning framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 273,139 |
1912.05945 | Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples | We address the problem of adversarial examples in machine learning where an adversary tries to misguide a classifier by making functionality-preserving modifications to original samples. We assume a black-box scenario where the adversary has access to only the feature set, and the final hard-decision output of the classifier. We propose a method to generate adversarial examples using the minimum description length (MDL) principle. Our final aim is to improve the robustness of the classifier by considering generated examples in rebuilding the classifier. We evaluate our method for the application of static malware detection in portable executable (PE) files. We consider API calls of PE files as their distinguishing features where the feature vector is a binary vector representing the presence-absence of API calls. In our method, we first create a dataset of benign samples by querying the target classifier. We next construct a code table of frequent patterns for the compression of this dataset using the MDL principle. We finally generate an adversarial example corresponding to a malware sample by selecting and adding a pattern from the benign code table to the malware sample. The selected pattern is the one that minimizes the length of the compressed adversarial example given the code table. This modification preserves the functionalities of the original malware sample as all original API calls are kept, and only some new API calls are added. Considering a neural network, we show that the evasion rate is 78.24 percent for adversarial examples compared to 8.16 percent for original malware samples. This shows the effectiveness of our method in generating examples that need to be considered in rebuilding the classifier. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 157,234
1512.08814 | Combined statistical and model based texture features for improved image classification | This paper aims to improve the accuracy of texture classification based on extracting texture features using five different texture methods and classifying the patterns using a naive Bayesian classifier. Three statistical-based and two model-based methods are used to extract texture features from eight different texture images, then their accuracy is ranked after using each method individually and in pairs. The accuracy improved up to 97.01% when the model-based methods - Gaussian Markov random field (GMRF) and fractional Brownian motion (fBm) - were used together for classification, compared to the highest accuracy achieved using each of the five different methods alone; these proved to be better at classifying than the statistical methods. Also, using GMRF with statistical-based methods, such as gray level co-occurrence (GLCM) and run-length (RLM) matrices, improved the overall accuracy to 96.94% and 96.55%, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 50,547
1903.07833 | Fisher Discriminative Least Squares Regression for Image Classification | Discriminative least squares regression (DLSR) has been shown to achieve promising performance in multi-class image classification tasks. Its key idea is to force the regression labels of different classes to move in opposite directions through the joint use of the $\epsilon$-draggings technique, which yields a discriminative regression model with wider margins, and the Fisher criterion. However, the $\epsilon$-draggings technique ignores an important problem: its non-negative relaxation matrix is dynamically updated during optimization, which means the dragging values can also cause labels from the same class to become uncorrelated. In order to learn a more powerful discriminative projection, as well as regression labels, we propose a Fisher regularized DLSR (FDLSR) framework that constrains the relaxed labels using the Fisher criterion. On one hand, the Fisher criterion improves the intra-class compactness of the relaxed labels during relaxation learning. On the other hand, it is expected to further enhance the inter-class separability of the $\epsilon$-draggings technique. FDLSR is, to our knowledge, the first attempt to integrate the Fisher discriminant criterion and the $\epsilon$-draggings technique into one unified model, as the two are complementary in learning a discriminative projection. Extensive experiments on various datasets demonstrate that the proposed FDLSR method achieves performance superior to other state-of-the-art classification methods. The Matlab codes of this paper are available at https://github.com/chenzhe207/FDLSR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 124,714
1901.02399 | Service Rate Region of Content Access from Erasure Coded Storage | We consider storage systems in which $K$ files are stored over $N$ nodes. A node may be systematic for a particular file in the sense that access to it gives access to the file. Alternatively, a node may be coded, meaning that it gives access to a particular file only when combined with other nodes (which may be coded or systematic). Requests for file $f_k$ arrive at rate $\lambda_k$, and we are interested in the rate that can be served by a particular system. In this paper, we determine the set of request arrival rates for a $3$-file coded storage system. We also provide an algorithm to maximize the rate of requests served for file $K$ given $\lambda_1,\dots, \lambda_{K-1}$ in a general $K$-file case. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 118,194
0906.2820 | Equalization for Non-Coherent UWB Systems with Approximate Semi-Definite Programming | In this paper, we propose an approximate semi-definite programming framework for demodulation and equalization of non-coherent ultra-wide-band communication systems with inter-symbol-interference. It is assumed that the communication systems follow non-linear second-order Volterra models. We formulate the demodulation and equalization problems as semi-definite programming problems. We propose an approximate algorithm for solving the formulated semi-definite programming problems. Compared with the existing non-linear equalization approaches, the proposed semi-definite programming formulation and approximate solving algorithm have low computational complexity and storage requirements. We show that the proposed algorithm has satisfactory error probability performance by simulation results. The proposed non-linear equalization approach can be adopted for a wide spectrum of non-coherent ultra-wide-band systems, due to the fact that most non-coherent ultra-wide-band systems with inter-symbol-interference follow non-linear second-order Volterra signal models. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,889
2112.12592 | Flow and Transport in Three-Dimensional Discrete Fracture Matrix Models using Mimetic Finite Difference on a Conforming Multi-Dimensional Mesh | We present a comprehensive workflow to simulate single-phase flow and transport in fractured porous media using the discrete fracture matrix approach. The workflow has three primary parts: (1) a method for conforming mesh generation of and around a three-dimensional fracture network, (2) the discretization of the governing equations using a second-order mimetic finite difference method, and (3) implementation of numerical methods for high-performance computing environments. A method to create a conforming Delaunay tetrahedralization of the volume surrounding the fracture network, where the triangular cells of the fracture mesh are faces in the volume mesh, that addresses pathological cases which commonly arise and degrade mesh quality is also provided. Our open-source subsurface simulator uses a hierarchy of process kernels (one kernel per physical process) that allows for both strong and weak coupling of the fracture and matrix domains. We provide verification tests based on analytic solutions for flow and transport, as well as numerical convergence. We also provide multiple expositions of the method in complex fracture networks. In the first example, we demonstrate that the method is robust by considering two scenarios where the fracture network acts as a barrier to flow, as the primary pathway, or offers the same resistance as the surrounding matrix. In the second test, flow and transport through a three-dimensional stochastically generated network containing 257 fractures is presented. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 273,010
2009.05527 | On Multitask Loss Function for Audio Event Detection and Localization | Audio event localization and detection (SELD) have been commonly tackled using multitask models. Such a model usually consists of a multi-label event classification branch with sigmoid cross-entropy loss for event activity detection and a regression branch with mean squared error loss for direction-of-arrival estimation. In this work, we propose a multitask regression model, in which both (multi-label) event detection and localization are formulated as regression problems and use the mean squared error loss homogeneously for model training. We show that the common combination of heterogeneous loss functions causes the network to underfit the data whereas the homogeneous mean squared error loss leads to better convergence and performance. Experiments on the development and validation sets of the DCASE 2020 SELD task demonstrate that the proposed system also outperforms the DCASE 2020 SELD baseline across all the detection and localization metrics, reducing the overall SELD error (the combined metric) by approximately 10% absolute. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 195,346 |
2303.03470 | Partial-Information, Longitudinal Cyber Attacks on LiDAR in Autonomous Vehicles | What happens to an autonomous vehicle (AV) if its data are adversarially compromised? Prior security studies have addressed this question through mostly unrealistic threat models, with limited practical relevance, such as white-box adversarial learning or nanometer-scale laser aiming and spoofing. With growing evidence that cyber threats pose real, imminent danger to AVs and cyber-physical systems (CPS) in general, we present and evaluate a novel AV threat model: a cyber-level attacker capable of disrupting sensor data but lacking any situational awareness. We demonstrate that even though the attacker has minimal knowledge and only access to raw data from a single sensor (i.e., LiDAR), she can design several attacks that critically compromise perception and tracking in multi-sensor AVs. To mitigate vulnerabilities and advance secure architectures in AVs, we introduce two improvements for security-aware fusion: a probabilistic data-asymmetry monitor and a scalable track-to-track fusion of 3D LiDAR and monocular detections (T2T-3DLM); we demonstrate that the approaches significantly reduce attack effectiveness. To support objective safety and security evaluations in AVs, we release our security evaluation platform, AVsec, which is built on security-relevant metrics to benchmark AVs on gold-standard longitudinal AV datasets and AV simulators. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 349,738
2310.20145 | Efficient Robust Bayesian Optimization for Arbitrary Uncertain Inputs | Bayesian Optimization (BO) is a sample-efficient optimization algorithm widely employed across various applications. In some challenging BO tasks, input uncertainty arises due to the inevitable randomness in the optimization process, such as machining errors, execution noise, or contextual variability. This uncertainty deviates the input from the intended value before evaluation, resulting in significant performance fluctuations in the final result. In this paper, we introduce a novel robust Bayesian Optimization algorithm, AIRBO, which can effectively identify a robust optimum that performs consistently well under arbitrary input uncertainty. Our method directly models the uncertain inputs of arbitrary distributions by empowering the Gaussian Process with the Maximum Mean Discrepancy (MMD) and further accelerates the posterior inference via Nystrom approximation. Rigorous theoretical regret bound is established under MMD estimation error and extensive experiments on synthetic functions and real problems demonstrate that our approach can handle various input uncertainties and achieve state-of-the-art performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,271 |
2107.12078 | 6DCNN with roto-translational convolution filters for volumetric data processing | In this work, we introduce 6D Convolutional Neural Network (6DCNN) designed to tackle the problem of detecting relative positions and orientations of local patterns when processing three-dimensional volumetric data. 6DCNN also includes SE(3)-equivariant message-passing and nonlinear activation operations constructed in the Fourier space. Working in the Fourier space allows significantly reducing the computational complexity of our operations. We demonstrate the properties of the 6D convolution and its efficiency in the recognition of spatial patterns. We also assess the 6DCNN model on several datasets from the recent CASP protein structure prediction challenges. Here, 6DCNN improves over the baseline architecture and also outperforms the state of the art. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 247,795
2010.09235 | Ensemble Chinese End-to-End Spoken Language Understanding for Abnormal Event Detection from audio stream | Conventional spoken language understanding (SLU) consists of two stages: the first stage maps speech to text by automatic speech recognition (ASR), and the second stage maps text to intent by natural language understanding (NLU). End-to-end SLU maps speech directly to intent through a single deep learning model. Previous end-to-end SLU models have primarily been used for English environments, due to the lack of a large-scale SLU dataset in Chinese, and use only one ASR model to extract features from speech. With the help of Kuaishou technology, a large-scale SLU dataset in Chinese was collected to detect abnormal events in their live audio stream. Based on this dataset, this paper proposes an ensemble end-to-end SLU model for the Chinese environment. This ensemble SLU model extracts hierarchical features using multiple pre-trained ASR models, leading to better representation of phoneme-level and word-level information. The proposed approach achieves a 9.7% increase in accuracy compared to the previous end-to-end SLU model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 201,457
1702.07772 | Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment | Purpose: Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS based surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). Methods: We conduct the largest study, to the best of our knowledge, for basic surgical skills assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy based" features - Approximate Entropy (ApEn) and Cross-Approximate Entropy (XApEn), which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to existing methods of Sequential Motion Texture (SMT), Discrete Cosine Transform (DCT) and Discrete Fourier Transform (DFT), for surgical skills assessment. Results: We report average performance of different features across all applicable OSATS criteria for suturing and knot tying tasks. Our analysis shows that the proposed entropy based features out-perform previous state-of-the-art methods using video data. For accelerometer data, our method performs better for suturing only. We also show that fusion of video and acceleration features can improve overall performance with the proposed entropy features achieving highest accuracy. Conclusions: Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 68,835
0704.2509 | Signal Set Design for Full-Diversity Low-Decoding-Complexity Differential Scaled-Unitary STBCs | The problem of designing high rate, full diversity noncoherent space-time block codes (STBCs) with low encoding and decoding complexity is addressed. First, the notion of $g$-group encodable and $g$-group decodable linear STBCs is introduced. Then for a known class of rate-1 linear designs, an explicit construction of fully-diverse signal sets that lead to four-group encodable and four-group decodable differential scaled unitary STBCs for any power of two number of antennas is provided. Previous works on differential STBCs either sacrifice decoding complexity for higher rate or sacrifice rate for lower decoding complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 64
1904.08668 | An Efficient Approximate kNN Graph Method for Diffusion on Image Retrieval | The application of the diffusion in many computer vision and artificial intelligence projects has been shown to give excellent improvements in performance. One of the main bottlenecks of this technique is the quadratic growth of the kNN graph size due to the high-quantity of new connections between nodes in the graph, resulting in long computation times. Several strategies have been proposed to address this, but none are effective and efficient. Our novel technique, based on LSH projections, obtains the same performance as the exact kNN graph after diffusion, but in less time (approximately 18 times faster on a dataset of a hundred thousand images). The proposed method was validated and compared with other state-of-the-art on several public image datasets, including Oxford5k, Paris6k, and Oxford105k. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,145
2407.19196 | Why Misinformation is Created? Detecting them by Integrating Intent Features | Various social media platforms, e.g., Twitter and Reddit, allow people to disseminate a plethora of information more efficiently and conveniently. However, they are inevitably full of misinformation, causing damage to diverse aspects of our daily lives. To reduce the negative impact, timely identification of misinformation, namely Misinformation Detection (MD), has become an active research topic receiving widespread attention. As a complex phenomenon, the veracity of an article is influenced by various aspects. In this paper, we are inspired by the opposition of intents between misinformation and real information. Accordingly, we propose to reason the intent of articles and form the corresponding intent features to promote the veracity discrimination of article features. To achieve this, we build a hierarchy of a set of intents for both misinformation and real information by referring to the existing psychological theories, and we apply it to reason the intent of articles by progressively generating binary answers with an encoder-decoder structure. We form the corresponding intent features and integrate them with the token features to achieve more discriminative article features for MD. Upon these ideas, we suggest a novel MD method, namely Detecting Misinformation by Integrating Intent featuRes (DM-INTER). To evaluate the performance of DM-INTER, we conduct extensive experiments on benchmark MD datasets. The experimental results validate that DM-INTER can outperform the existing baseline MD methods. | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 476,684
1201.0676 | Knowledge epidemics and population dynamics models for describing idea diffusion | The diffusion of ideas is often closely connected to the creation and diffusion of knowledge and to the technological evolution of society. Because of this, knowledge creation, exchange and its subsequent transformation into innovations for improved welfare and economic growth is briefly described from a historical point of view. Next, three approaches are discussed for modeling the diffusion of ideas in the areas of science and technology, through (i) deterministic, (ii) stochastic, and (iii) statistical approaches. These are illustrated through their corresponding population dynamics and epidemic models relative to the spreading of ideas, knowledge and innovations. The deterministic dynamical models are considered to be appropriate for analyzing the evolution of large and small societal, scientific and technological systems when the influence of fluctuations is insignificant. Stochastic models are appropriate when the system of interest is small but when the fluctuations become significant for its evolution. Finally statistical approaches and models based on the laws and distributions of Lotka, Bradford, Yule, Zipf-Mandelbrot, and others, provide much useful information for the analysis of the evolution of systems in which development is closely connected to the process of idea diffusion. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,672
1210.7102 | 3D Face Recognition using Significant Point based SULD Descriptor | In this work, we present a new 3D face recognition method based on the Speeded-Up Local Descriptor (SULD) of significant points extracted from the range images of faces. The proposed model consists of a method for extracting distinctive invariant features from range images of faces that can be used to perform reliable matching between different poses of range images of faces. For a given 3D face scan, range images are computed and the potential interest points are identified by searching at all scales. Based on the stability of the interest point, significant points are extracted. For each significant point we compute the SULD descriptor, which consists of a vector of values from the convolved Haar wavelet responses located on concentric circles centred on the significant point, where the amount of Gaussian smoothing is proportional to the radii of the circles. Experimental results show that the newly proposed method provides a higher recognition rate compared to other existing contemporary models developed for 3D face recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 19,416
2410.01105 | M2P2: A Multi-Modal Passive Perception Dataset for Off-Road Mobility in Extreme Low-Light Conditions | Long-duration, off-road, autonomous missions require robots to continuously perceive their surroundings regardless of the ambient lighting conditions. Most existing autonomy systems heavily rely on active sensing, e.g., LiDAR, RADAR, and Time-of-Flight sensors, or use (stereo) visible light imaging sensors, e.g., color cameras, to perceive environment geometry and semantics. In scenarios where fully passive perception is required and lighting conditions are degraded to an extent that visible light cameras fail to perceive, most downstream mobility tasks such as obstacle avoidance become impossible. To address such a challenge, this paper presents a Multi-Modal Passive Perception dataset, M2P2, to enable off-road mobility in low-light to no-light conditions. We design a multi-modal sensor suite including thermal, event, and stereo RGB cameras, GPS, two Inertial Measurement Units (IMUs), as well as a high-resolution LiDAR for ground truth, with a novel multi-sensor calibration procedure that can efficiently transform multi-modal perceptual streams into a common coordinate system. Our 10-hour, 32 km dataset also includes mobility data such as robot odometry and actions and covers well-lit, low-light, and no-light conditions, along with paved, on-trail, and off-trail terrain. Our results demonstrate that off-road mobility is possible through only passive perception in extreme low-light conditions using end-to-end learning and classical planning. The project website can be found at https://cs.gmu.edu/~xiao/Research/M2P2/ | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 493,614
1510.04493 | Sparsity-aware Possibilistic Clustering Algorithms | In this paper two novel possibilistic clustering algorithms are presented, which utilize the concept of sparsity. The first one, called sparse possibilistic c-means, exploits sparsity and can deal well with closely located clusters that may also be of significantly different densities. The second one, called sparse adaptive possibilistic c-means, is an extension of the first, where now the involved parameters are dynamically adapted. The latter can deal well with even more challenging cases, where, in addition to the above, clusters may be of significantly different variances. More specifically, it provides improved estimates of the cluster representatives, while, in addition, it has the ability to estimate the actual number of clusters, given an overestimate of it. Extensive experimental results on both synthetic and real data sets support the previous statements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 47,922 |
2112.03203 | A New Sentence Extraction Strategy for Unsupervised Extractive Summarization Methods | In recent years, text summarization methods have attracted much attention again thanks to research on neural network models. Most of the current text summarization methods based on neural network models are supervised methods which need large-scale datasets. However, large-scale datasets are difficult to obtain in practical applications. In this paper, we model the task of extractive text summarization from the perspective of Information Theory, and then describe the unsupervised extractive methods with a uniform framework. To improve the feature distribution and to decrease the mutual information of summarization sentences, we propose a new sentence extraction strategy which can be applied to existing unsupervised extractive methods. Experiments are carried out on different datasets, and results show that our strategy is indeed effective and in line with expectations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 270,114
2211.17104 | Agent-Cells with DNA Programming: A Dynamic Decentralized System | This paper introduces a new concept. We intend to give life to a software agent. A software agent is a computer program that acts on a user's behalf. We put a DNA inside the agent. DNA is a simple text, a whole roadmap of a network of agents or a system with details. A Dynamic Numerical Abstract of a multiagent system. It is also a reproductive part for an \emph{agent} that makes the agent take actions and decide independently and reproduce coworkers. By defining different DNA structures, one can establish new agents and different nets for different usages. We initiate such thinking as \emph{DNA programming}. This strategy leads to a new field of programming. This type of programming can help us manage large systems with various elements with an incredibly organized customizable structure. An agent can reproduce another agent. We put one or a few agents around a given network, and the agents will reproduce themselves till they can reach others and pervade the whole network. An agent's position or other environmental or geographical characteristics make it possible for an agent to know its active set of \emph{genes} on its DNA. The active set of genes specifies its duties. There is a database that includes a list of functions s.t. each one is an implementation of what a \emph{gene} represents. To utilize a decentralized database, we may use a blockchain-based structure. This design can adapt to a system that manages many static and dynamic networks. This network could be a distributed system, a decentralized system, a telecommunication network such as a 5G monitoring system, an IoT management system, or even an energy management system. The final system is the combination of all the agents and the overlay net that connects the agents. We denote the final net as the \emph{body} of the system. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | true | 333,862
1501.04370 | Structure Learning in Bayesian Networks of Moderate Size by Efficient Sampling | We study the Bayesian model averaging approach to learning Bayesian network structures (DAGs) from data. We develop new algorithms including the first algorithm that is able to efficiently sample DAGs according to the exact structure posterior. The DAG samples can then be used to construct estimators for the posterior of any feature. We theoretically prove good properties of our estimators and empirically show that our estimators considerably outperform the estimators from the previous state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 39,369
2005.14330 | Bipartite Distance for Shape-Aware Landmark Detection in Spinal X-Ray Images | Scoliosis is a congenital disease that causes lateral curvature in the spine. Its assessment relies on the identification and localization of vertebrae in spinal X-ray images, conventionally via tedious and time-consuming manual radiographic procedures that are prone to subjectivity and observational variability. Reliability can be improved through the automatic detection and localization of spinal landmarks. To guide a CNN in the learning of spinal shape while detecting landmarks in X-ray images, we propose a novel loss based on a bipartite distance (BPD) measure, and show that it consistently improves landmark detection performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 179,238
1908.03684 | Bayesian Loss for Crowd Count Estimation with Point Supervision | In crowd counting datasets, each person is annotated by a point, usually the center of the head, and the task is to estimate the total count in a crowd scene. Most state-of-the-art methods are based on density map estimation: they convert the sparse point annotations into a "ground truth" density map through a Gaussian kernel, and then use it as the learning target to train a density map estimator. However, such a "ground-truth" density map is imperfect due to occlusions, perspective effects, variations in object shapes, etc. In contrast, we propose \emph{Bayesian loss}, a novel loss function which constructs a density contribution probability model from the point annotations. Instead of constraining the value at every pixel in the density map, the proposed training loss adopts a more reliable supervision on the count expectation at each annotated point. Without bells and whistles, the loss function makes substantial improvements over the baseline loss on all tested datasets. Moreover, our proposed loss function, equipped with a standard backbone network and without using any external detectors or multi-scale architectures, performs favorably against the state of the art. Our method outperforms previous best approaches by a large margin on the latest and largest UCF-QNRF dataset. The source code is available at \url{https://github.com/ZhihengCV/Baysian-Crowd-Counting}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 141,295
2411.00109 | Prospective Learning: Learning for a Dynamic Future | In real-world applications, the distribution of the data, and our goals, evolve over time. The prevailing theoretical framework for studying machine learning, namely probably approximately correct (PAC) learning, largely ignores time. As a consequence, existing strategies to address the dynamic nature of data and goals exhibit poor real-world performance. This paper develops a theoretical framework called "Prospective Learning" that is tailored for situations when the optimal hypothesis changes over time. In PAC learning, empirical risk minimization (ERM) is known to be consistent. We develop a learner called Prospective ERM, which returns a sequence of predictors that make predictions on future data. We prove that the risk of prospective ERM converges to the Bayes risk under certain assumptions on the stochastic process generating the data. Prospective ERM, roughly speaking, incorporates time as an input in addition to the data. We show that standard ERM as done in PAC learning, without incorporating time, can result in failure to learn when distributions are dynamic. Numerical experiments illustrate that prospective ERM can learn synthetic and visual recognition problems constructed from MNIST and CIFAR-10. Code at https://github.com/neurodata/prolearn. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 504,434 |
2105.00201 | Designing Games for Enabling Co-creation with Social Agents | Digital tools have long been used for supporting children's creativity. Digital games that allow children to create artifacts and express themselves in a playful environment serve as efficient Creativity Support Tools (or CSTs). Creativity is also scaffolded by social interactions with others in their environment. In our work, we explore the use of game-based interactions with a social agent to scaffold children's creative expression as game players. We designed three collaborative games and play-tested them with 146 children aged 5-10, who played with the social robot Jibo; the games afford three different kinds of creativity: verbal creativity, figural creativity, and divergent thinking during creative problem solving. In this paper, we reflect on game mechanic practices that we incorporated to design for stimulating creativity in children. These strategies may be valuable to game designers and HCI researchers designing games and social agents for supporting children's creativity. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 233,127
2305.10572 | Tensor Products and Hyperdimensional Computing | Following up on a previous analysis of graph embeddings, we generalize and expand some results to the general setting of vector symbolic architectures (VSA) and hyperdimensional computing (HDC). Importantly, we explore the mathematical relationship between superposition, orthogonality, and tensor product. We establish the tensor product representation as the central representation, with a suite of unique properties. These include it being the most general and expressive representation, as well as being the most compressed representation that has errorless unbinding and detection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 365,130
1708.09492 | Automatically Generating Commit Messages from Diffs using Neural Machine Translation | Commit messages are a valuable resource in comprehension of software evolution, since they provide a record of changes such as feature additions and bug repairs. Unfortunately, programmers often neglect to write good commit messages. Different techniques have been proposed to help programmers by automatically writing these messages. These techniques are effective at describing what changed, but are often verbose and lack context for understanding the rationale behind a change. In contrast, humans write messages that are short and summarize the high level rationale. In this paper, we adapt Neural Machine Translation (NMT) to automatically "translate" diffs into commit messages. We trained an NMT algorithm using a corpus of diffs and human-written commit messages from the top 1k Github projects. We designed a filter to help ensure that we only trained the algorithm on higher-quality commit messages. Our evaluation uncovered a pattern in which the messages we generate tend to be either very high or very low quality. Therefore, we created a quality-assurance filter to detect cases in which we are unable to produce good messages, and return a warning instead. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 79,794
1302.4948 | Testing Identifiability of Causal Effects | This paper concerns the probabilistic evaluation of the effects of actions in the presence of unmeasured variables. We show that the identification of causal effect between a singleton variable X and a set of variables Y can be accomplished systematically, in time polynomial in the number of variables in the graph. When the causal effect is identifiable, a closed-form expression can be obtained for the probability that the action will achieve a specified goal, or a set of goals. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,222 |
2302.00192 | Density peak clustering using tensor network | Tensor networks, which have been traditionally used to simulate many-body physics, have recently gained significant attention in the field of machine learning due to their powerful representation capabilities. In this work, we propose a density-based clustering algorithm inspired by tensor networks. We encode classical data into tensor network states on an extended Hilbert space and train the tensor network states to capture the features of the clusters. Here, we define density and related concepts in terms of fidelity, rather than using a classical distance measure. We evaluate the performance of our algorithm on six synthetic data sets, four real world data sets, and three commonly used computer vision data sets. The results demonstrate that our method provides state-of-the-art performance on several synthetic data sets and real world data sets, even when the number of clusters is unknown. Additionally, our algorithm performs competitively with state-of-the-art algorithms on the MNIST, USPS, and Fashion-MNIST image data sets. These findings reveal the great potential of tensor networks for machine learning applications. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 343,133 |
1910.04992 | A sub-Riemannian model of the visual cortex with frequency and phase | In this paper we present a novel model of the primary visual cortex (V1) based on orientation, frequency and phase selective behavior of the V1 simple cells. We start from the first level mechanisms of visual perception: receptive profiles. The model interprets V1 as a fiber bundle over the 2-dimensional retinal plane by introducing orientation, frequency and phase as intrinsic variables. Each receptive profile on the fiber is mathematically interpreted as a rotated, frequency modulated and phase shifted Gabor function. We start from the Gabor function and show that it induces in a natural way the model geometry and the associated horizontal connectivity modeling the neural connectivity patterns in V1. We provide an image enhancement algorithm employing the model framework. The algorithm is capable of exploiting not only orientation but also frequency and phase information existing intrinsically in a 2-dimensional input image. We provide the experimental results corresponding to the enhancement algorithm. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 148,940 |
1811.05826 | Char2char Generation with Reranking for the E2E NLG Challenge | This paper describes our submission to the E2E NLG Challenge. Recently, neural seq2seq approaches have become mainstream in NLG, often resorting to pre- (respectively post-) processing delexicalization (relexicalization) steps at the word-level to handle rare words. By contrast, we train a simple character level seq2seq model, which requires no pre/post-processing (delexicalization, tokenization or even lowercasing), with surprisingly good results. For further improvement, we explore two re-ranking approaches for scoring candidates. We also introduce a synthetic dataset creation procedure, which opens up a new way of creating artificial datasets for Natural Language Generation. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 113,394 |
2309.01156 | Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics | Sampling from known probability distributions is a ubiquitous task in computational science, underlying calculations in domains from linguistics to biology and physics. Generative machine-learning (ML) models have emerged as a promising tool in this space, building on the success of this approach in applications such as image, text, and audio generation. Often, however, generative tasks in scientific domains have unique structures and features -- such as complex symmetries and the requirement of exactness guarantees -- that present both challenges and opportunities for ML. This Perspective outlines the advances in ML-based sampling motivated by lattice quantum field theory, in particular for the theory of quantum chromodynamics. Enabling calculations of the structure and interactions of matter from our most fundamental understanding of particle physics, lattice quantum chromodynamics is one of the main consumers of open-science supercomputing worldwide. The design of ML algorithms for this application faces profound challenges, including the necessity of scaling custom ML architectures to the largest supercomputers, but also promises immense benefits, and is spurring a wave of development in ML-based sampling more broadly. In lattice field theory, if this approach can realize its early promise it will be a transformative step towards first-principles physics calculations in particle, nuclear and condensed matter physics that are intractable with traditional approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 389,569
2408.11948 | Topological Representational Similarity Analysis in Brains and Beyond | Understanding how the brain represents and processes information is crucial for advancing neuroscience and artificial intelligence. Representational similarity analysis (RSA) has been instrumental in characterizing neural representations, but traditional RSA relies solely on geometric properties, overlooking crucial topological information. This thesis introduces Topological RSA (tRSA), a novel framework combining geometric and topological properties of neural representations. tRSA applies nonlinear monotonic transforms to representational dissimilarities, emphasizing local topology while retaining intermediate-scale geometry. The resulting geo-topological matrices enable model comparisons robust to noise and individual idiosyncrasies. This thesis introduces several key methodological advances: (1) Topological RSA (tRSA) for identifying computational signatures and testing topological hypotheses; (2) Adaptive Geo-Topological Dependence Measure (AGTDM) for detecting complex multivariate relationships; (3) Procrustes-aligned Multidimensional Scaling (pMDS) for revealing neural computation stages; (4) Temporal Topological Data Analysis (tTDA) for uncovering developmental trajectories; and (5) Single-cell Topological Simplicial Analysis (scTSA) for characterizing cell population complexity. Through analyses of neural recordings, biological data, and neural network simulations, this thesis demonstrates the power and versatility of these methods in understanding brains, computational models, and complex biological systems. They not only offer robust approaches for adjudicating among competing models but also reveal novel theoretical insights into the nature of neural computation. This work lays the foundation for future investigations at the intersection of topology, neuroscience, and time series analysis, paving the way for more nuanced understanding of brain function and dysfunction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 482,517
2404.07904 | HGRN2: Gated Linear RNNs with State Expansion | Hierarchically gated linear RNN (HGRN, \citealt{HGRN}) has demonstrated competitive training speed and performance in language modeling while offering efficient inference. However, the recurrent state size of HGRN remains relatively small, limiting its expressiveness. To address this issue, we introduce a simple outer product-based state expansion mechanism, which significantly enlarges the recurrent state size without introducing any additional parameters. This enhancement also provides a linear attention interpretation for HGRN2, enabling hardware-efficient training. Our extensive experiments verify the advantage of HGRN2 over HGRN consistently across different settings and competitive with other recurrent models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 446,015 |
2301.09025 | Nichtverbales Verhalten sozialer Roboter: Bewegungen, deren Bedeutung und die Technik dahinter | Nichtverbale Signale sind ein elementarer Bestandteil der menschlichen Kommunikation. Sie erf\"ullen eine Vielzahl von Funktionen bei der Kl\"arung von Mehrdeutigkeiten, der subtilen Aushandlung von Rollen oder dem Ausdruck dessen, was im Inneren der Gespr\"achspartner vorgeht. Viele Studien mit sozial-interaktiven Robotern zeigen, dass vom Menschen inspirierte Bewegungsmuster \"ahnlich interpretiert werden wie die von realen Personen. Dieses Kapitel erl\"autert daher die wichtigsten Funktionen, welche die jeweiligen Bewegungsmuster in der Kommunikation erf\"ullen, und gibt einen \"Uberblick dar\"uber, wie sie auf Roboter \"ubertragen werden k\"onnen. -- Non-verbal signals are a fundamental part of human communication. They serve a variety of functions in clarifying ambiguities, subtly negotiating roles, or expressing what is going on inside the interlocutors. Many studies with socially-interactive robots show that human-inspired movement patterns are interpreted similarly to those of real people. This chapter therefore explains the most important functions that the respective movement patterns fulfill in communication and gives an overview of how they can be transferred to robots. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 341,376
2202.01856 | Data-Driven Optimal Control via Linear Transfer Operators: A Convex Approach | This paper is concerned with data-driven optimal control of nonlinear systems. We present a convex formulation to the optimal control problem (OCP) with a discounted cost function. We consider OCP with both positive and negative discount factor. The convex approach relies on lifting nonlinear system dynamics in the space of densities using the linear Perron-Frobenius (P-F) operator. This lifting leads to an infinite-dimensional convex optimization formulation of the optimal control problem. The data-driven approximation of the optimization problem relies on the approximation of the Koopman operator using the polynomial basis function. We write the approximate finite-dimensional optimization problem as a polynomial optimization which is then solved efficiently using a sum-of-squares-based optimization framework. Simulation results are presented to demonstrate the efficacy of the developed data-driven optimal control framework. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 278,606
2410.12130 | Iter-AHMCL: Alleviate Hallucination for Large Language Model via Iterative Model-level Contrastive Learning | The development of Large Language Models (LLMs) has significantly advanced various AI applications in commercial and scientific research fields, such as scientific literature summarization, writing assistance, and knowledge graph construction. However, a significant challenge is the high risk of hallucination during LLM inference, which can lead to security concerns like factual inaccuracies, inconsistent information, and fabricated content. To tackle this issue, it is essential to develop effective methods for reducing hallucination while maintaining the original capabilities of the LLM. This paper introduces a novel approach called Iterative Model-level Contrastive Learning (Iter-AHMCL) to address hallucination. This method modifies the representation layers of pre-trained LLMs by using contrastive `positive' and `negative' models, trained on data with and without hallucinations. By leveraging the differences between these two models, we create a more straightforward pathway to eliminate hallucinations, and the iterative nature of contrastive learning further enhances performance. Experimental validation on four pre-trained foundation LLMs (LLaMA2, Alpaca, LLaMA3, and Qwen) finetuning with a specially designed dataset shows that our approach achieves an average improvement of 10.1 points on the TruthfulQA benchmark. Comprehensive experiments demonstrate the effectiveness of Iter-AHMCL in reducing hallucination while maintaining the general capabilities of LLMs. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,864
1503.00604 | Robust Group Linkage | We study the problem of group linkage: linking records that refer to entities in the same group. Applications for group linkage include finding businesses in the same chain, finding conference attendees from the same affiliation, finding players from the same team, etc. Group linkage faces challenges not present for traditional record linkage. First, although different members in the same group can share some similar global values of an attribute, they represent different entities so can also have distinct local values for the same or different attributes, requiring a high tolerance for value diversity. Second, groups can be huge (with tens of thousands of records), requiring high scalability even after using good blocking strategies. We present a two-stage algorithm: the first stage identifies cores containing records that are very likely to belong to the same group, while being robust to possible erroneous values; the second stage collects strong evidence from the cores and leverages it for merging more records into the same group, while being tolerant to differences in local values of an attribute. Experimental results show the high effectiveness and efficiency of our algorithm on various real-world data sets. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 40,725 |
2202.03140 | OPP-Miner: Order-preserving sequential pattern mining | A time series is a collection of measurements in chronological order. Discovering patterns from time series is useful in many domains, such as stock analysis, disease detection, and weather forecast. To discover patterns, existing methods often convert time series data into another form, such as nominal/symbolic format, to reduce dimensionality, which inevitably distorts the data values. Moreover, existing methods mainly neglect the order relationships between time series values. To tackle these issues, inspired by order-preserving matching, this paper proposes an Order-Preserving sequential Pattern (OPP) mining method, which represents patterns based on the order relationships of the time series data. An inherent advantage of such representation is that the trend of a time series can be represented by the relative order of the values underneath the time series data. To obtain frequent trends in time series, we propose the OPP-Miner algorithm to mine patterns with the same trend (sub-sequences with the same relative order). OPP-Miner employs the filtration and verification strategies to calculate the support and uses a pattern fusion strategy to generate candidate patterns. To compress the result set, we also study finding the maximal OPPs. Experiments validate that OPP-Miner is not only efficient and scalable but can also discover similar sub-sequences in time series. In addition, case studies show that our algorithms have high utility in analyzing the COVID-19 epidemic by identifying critical trends and improving the clustering performance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 279,087
2201.05000 | Automated Reinforcement Learning: An Overview | Reinforcement Learning and recently Deep Reinforcement Learning are popular methods for solving sequential decision making problems modeled as Markov Decision Processes. RL modeling of a problem and selecting algorithms and hyper-parameters require careful considerations as different configurations may entail completely different performances. These considerations are mainly the task of RL experts; however, RL is progressively becoming popular in other fields where the researchers and system designers are not RL experts. Besides, many modeling decisions, such as defining state and action space, size of batches and frequency of batch updating, and number of timesteps are typically made manually. For these reasons, automating different components of RL framework is of great importance and it has attracted much attention in recent years. Automated RL provides a framework in which different components of RL including MDP modeling, algorithm selection and hyper-parameter optimization are modeled and defined automatically. In this article, we explore the literature and present recent work that can be used in automated RL. Moreover, we discuss the challenges, open questions and research directions in AutoRL. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,251 |
2006.01563 | Exploring Cross-sentence Contexts for Named Entity Recognition with BERT | Named entity recognition (NER) is frequently addressed as a sequence classification task where each input consists of one sentence of text. It is nevertheless clear that useful information for the task can often be found outside of the scope of a single-sentence context. Recently proposed self-attention models such as BERT can both efficiently capture long-distance relationships in input as well as represent inputs consisting of several sentences, creating new opportunities for approaches that incorporate cross-sentence information in natural language processing tasks. In this paper, we present a systematic study exploring the use of cross-sentence information for NER using BERT models in five languages. We find that adding context in the form of additional sentences to BERT input systematically increases NER performance on all of the tested languages and models. Including multiple sentences in each input also allows us to study the predictions of the same sentences in different contexts. We propose a straightforward method, Contextual Majority Voting (CMV), to combine different predictions for sentences and demonstrate this to further increase NER performance with BERT. Our approach does not require any changes to the underlying BERT architecture, relying instead on restructuring examples for training and prediction. Evaluation on established datasets, including the CoNLL'02 and CoNLL'03 NER benchmarks, demonstrates that our proposed approach can improve on the state-of-the-art NER results on English, Dutch, and Finnish, achieves the best reported BERT-based results on German, and is on par with performance reported with other BERT-based approaches in Spanish. We release all methods implemented in this work under open licenses. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 179,810
1706.08317 | Handling PDDL3.0 State Trajectory Constraints with Temporal Landmarks | Temporal landmarks have proved to be a helpful mechanism for dealing with temporal planning problems, specifically for improving planners' performance and handling problems with deadline constraints. In this paper, we show the strength of using temporal landmarks to handle the state trajectory constraints of PDDL3.0. We analyze the formalism of TempLM, a temporal planner particularly aimed at solving planning problems with deadlines, and we present a detailed study that exploits the underlying temporal landmark-based mechanism of TempLM for representing and reasoning with trajectory constraints. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 75,976
1908.01146 | Developing an Unsupervised Real-time Anomaly Detection Scheme for Time Series with Multi-seasonality | On-line detection of anomalies in time series is a key technique used in various event-sensitive scenarios such as robotic system monitoring, smart sensor networks and data center security. However, the increasing diversity of data sources and the variety of demands make this task more challenging than ever. Firstly, the rapid increase in unlabeled data means supervised learning is becoming less suitable in many cases. Secondly, a large portion of time series data have complex seasonality features. Thirdly, on-line anomaly detection needs to be fast and reliable. In light of this, we have developed a prediction-driven, unsupervised anomaly detection scheme, which adopts a backbone model combining the decomposition and the inference of time series data. Further, we propose a novel metric, Local Trend Inconsistency (LTI), and an efficient detection algorithm that computes LTI in a real-time manner and scores each data point robustly in terms of its probability of being anomalous. We have conducted extensive experimentation to evaluate our algorithm with several datasets from both public repositories and production environments. The experimental results show that our scheme outperforms existing representative anomaly detection algorithms in terms of the commonly used metric, Area Under Curve (AUC), while achieving the desired efficiency. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 140,688
2404.15347 | Advanced Neural Network Architecture for Enhanced Multi-Lead ECG Arrhythmia Detection through Optimized Feature Extraction | Cardiovascular diseases are a pervasive global health concern, contributing significantly to morbidity and mortality rates worldwide. Among these conditions, arrhythmia, characterized by irregular heart rhythms, presents formidable diagnostic challenges. This study introduces an innovative approach utilizing deep learning techniques, specifically Convolutional Neural Networks (CNNs), to address the complexities of arrhythmia classification. Leveraging multi-lead Electrocardiogram (ECG) data, our CNN model, comprising six layers with a residual block, demonstrates promising outcomes in identifying five distinct heartbeat types: Left Bundle Branch Block (LBBB), Right Bundle Branch Block (RBBB), Atrial Premature Contraction (APC), Premature Ventricular Contraction (PVC), and Normal Beat. Through rigorous experimentation, we highlight the transformative potential of our methodology in enhancing diagnostic accuracy for cardiovascular arrhythmias. Arrhythmia diagnosis remains a critical challenge in cardiovascular care, often relying on manual interpretation of ECG signals, which can be time-consuming and prone to subjectivity. To address these limitations, we propose a novel approach that leverages deep learning algorithms to automate arrhythmia classification. By employing advanced CNN architectures and multi-lead ECG data, our methodology offers a robust solution for precise and efficient arrhythmia detection. Through comprehensive evaluation, we demonstrate the effectiveness of our approach in facilitating more accurate clinical decision-making, thereby improving patient outcomes in managing cardiovascular arrhythmias. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 449,058
2402.17533 | Black-box Adversarial Attacks Against Image Quality Assessment Models | The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation. To put the NR-IQA models into practice, it is essential to study their potential loopholes for model refinement. This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models. Specifically, we first formulate the attack problem as maximizing the deviation between the estimated quality scores of original and perturbed images, while restricting the perturbed image distortions for visual quality preservation. Under such formulation, we then design a Bi-directional loss function to mislead the estimated quality scores of adversarial examples towards an opposite direction with maximum deviation. On this basis, we finally develop an efficient and effective black-box attack method against NR-IQA models. Extensive experiments reveal that all the evaluated NR-IQA models are vulnerable to the proposed attack method. And the generated perturbations are not transferable, enabling them to serve the investigation of specialities of disparate IQA models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 433,039 |
1905.02176 | Computation of Circular Area and Spherical Volume Invariants via
Boundary Integrals | We show how to compute the circular area invariant of planar curves, and the spherical volume invariant of surfaces, in terms of line and surface integrals, respectively. We use the Divergence Theorem to express the area and volume integrals as line and surface integrals, respectively, against particular kernels; our results also extend to higher dimensional hypersurfaces. The resulting surface integrals are computable analytically on a triangulated mesh. This gives a simple computational algorithm for computing the spherical volume invariant for triangulated surfaces that does not involve discretizing the ambient space. We discuss potential applications to feature detection on broken bone fragments of interest in anthropology. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 129,909 |
2408.15823 | Benchmarking foundation models as feature extractors for
weakly-supervised computational pathology | Advancements in artificial intelligence have driven the development of numerous pathology foundation models capable of extracting clinically relevant information. However, there is currently limited literature independently evaluating these foundation models on truly external cohorts and clinically-relevant tasks to uncover adjustments for future improvements. In this study, we benchmarked 19 histopathology foundation models on 13 patient cohorts with 6,818 patients and 9,528 slides from lung, colorectal, gastric, and breast cancers. The models were evaluated on weakly-supervised tasks related to biomarkers, morphological properties, and prognostic outcomes. We show that a vision-language foundation model, CONCH, yielded the highest performance when compared to vision-only foundation models, with Virchow2 as close second. The experiments reveal that foundation models trained on distinct cohorts learn complementary features to predict the same label, and can be fused to outperform the current state of the art. An ensemble combining CONCH and Virchow2 predictions outperformed individual models in 55% of tasks, leveraging their complementary strengths in classification scenarios. Moreover, our findings suggest that data diversity outweighs data volume for foundation models. Our work highlights actionable adjustments to improve pathology foundation models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 484,091 |
2307.11138 | Accurate error estimation for model reduction of nonlinear dynamical
systems via data-enhanced error closure | Accurate error estimation is crucial in model order reduction, both to obtain small reduced-order models and to certify their accuracy when deployed in downstream applications such as digital twins. In existing a posteriori error estimation approaches, knowledge about the time integration scheme is mandatory, e.g., the residual-based error estimators proposed for the reduced basis method. This poses a challenge when automatic ordinary differential equation solver libraries are used to perform the time integration. To address this, we present a data-enhanced approach for a posteriori error estimation. Our new formulation enables residual-based error estimators to be independent of any time integration method. To achieve this, we introduce a corrected reduced-order model which takes into account a data-driven closure term for improved accuracy. The closure term, subject to mild assumptions, is related to the local truncation error of the corresponding time integration scheme. We propose efficient computational schemes for approximating the closure term, at the cost of a modest amount of training data. Furthermore, the new error estimator is incorporated within a greedy process to obtain parametric reduced-order models. Numerical results on three different systems show the accuracy of the proposed error estimation approach and its ability to produce ROMs that generalize well. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 380,815 |
2112.05313 | Building Autocorrelation-Aware Representations for Fine-Scale
Spatiotemporal Prediction | Many scientific prediction problems have spatiotemporal data- and modeling-related challenges in handling complex variations in space and time using only sparse and unevenly distributed observations. This paper presents a novel deep learning architecture, Deep learning predictions for LocATion-dependent Time-sEries data (DeepLATTE), that explicitly incorporates theories of spatial statistics into neural networks to address these challenges. In addition to a feature selection module and a spatiotemporal learning module, DeepLATTE contains an autocorrelation-guided semi-supervised learning strategy to enforce both local autocorrelation patterns and global autocorrelation trends of the predictions in the learned spatiotemporal embedding space to be consistent with the observed data, overcoming the limitation of sparse and unevenly distributed observations. During the training process, both supervised and semi-supervised losses guide the updates of the entire network to: 1) prevent overfitting, 2) refine feature selection, 3) learn useful spatiotemporal representations, and 4) improve overall prediction. We conduct a demonstration of DeepLATTE using publicly available data for an important public health topic, air quality prediction, in a well-studied, complex physical environment - Los Angeles. The experiment demonstrates that the proposed approach provides accurate fine-spatial-scale air quality predictions and reveals the critical environmental factors affecting the results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,804 |
1803.00389 | Poisson Image Denoising Using Best Linear Prediction: A Post-processing
Framework | In this paper, we address the problem of denoising images degraded by Poisson noise. We propose a new patch-based approach based on best linear prediction to estimate the underlying clean image. A simplified prediction formula is derived for Poisson observations, which requires the covariance matrix of the underlying clean patch. We use the assumption that similar patches in a neighborhood share the same covariance matrix, and we use off-the-shelf Poisson denoising methods in order to obtain an initial estimate of the covariance matrices. Our method can be seen as a post-processing step for Poisson denoising methods and the results show that it improves upon several Poisson denoising methods by relevant margins. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 91,665 |
1904.01730 | Sequencing and Scheduling for Multi-User Machine-Type Communication | In this paper, we propose joint sequencing and scheduling optimization for uplink machine-type communication (MTC). We consider multiple energy-constrained MTC devices that transmit data to a base station following the time division multiple access (TDMA) protocol. Conventionally, the energy efficiency performance in TDMA is optimized through multi-user scheduling, i.e., changing the transmission block length allocated to different devices. In such a system, the sequence of devices for transmission, i.e., who transmits first and who transmits second, etc., has not been considered as it does not have any impact on the energy efficiency. In this work, we consider that data compression is performed before transmission and show that the multi-user sequencing is indeed important. We apply three popular energy-minimization system objectives, which differ in terms of the overall system performance and fairness among the devices. We jointly optimize both multi-user sequencing and scheduling along with the compression and transmission rate control. Our results show that multi-user sequence optimization significantly improves the energy efficiency performance of the system. Notably, it makes the TDMA-based multi-user transmissions more likely to be feasible in the lower latency regime, and the performance gain is larger when the delay bound is stringent. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 126,228 |
1801.07357 | CHALET: Cornell House Agent Learning Environment | We present CHALET, a 3D house simulator with support for navigation and manipulation. CHALET includes 58 rooms and 10 house configurations, and allows users to easily create new house and room layouts. CHALET supports a range of common household activities, including moving objects, toggling appliances, and placing objects inside closeable containers. The environment and actions available are designed to create a challenging domain to train and evaluate autonomous agents, including for tasks that combine language, vision, and planning in a dynamic environment. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 88,771
cs/9506102 | Induction of First-Order Decision Lists: Results on Learning the Past
Tense of English Verbs | This paper presents a method for inducing logic programs from examples that learns a new class of concepts called first-order decision lists, defined as ordered lists of clauses each ending in a cut. The method, called FOIDL, is based on FOIL (Quinlan, 1990) but employs intensional background knowledge and avoids the need for explicit negative examples. It is particularly useful for problems that involve rules with specific exceptions, such as learning the past-tense of English verbs, a task widely studied in the context of the symbolic/connectionist debate. FOIDL is able to learn concise, accurate programs for this problem from significantly fewer examples than previous methods (both connectionist and symbolic). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,312 |
2006.00467 | End-to-End Change Detection for High Resolution Drone Images with GAN
Architecture | Monitoring large areas is presently feasible with high resolution drone cameras, as opposed to time-consuming and expensive ground surveys. In this work we reveal, for the first time, the potential of using a state-of-the-art GAN-based change detection algorithm with high resolution drone images for infrastructure inspection. We demonstrate this concept on solar panel installation. A data-driven deep learning algorithm for identifying changes was proposed. We use the Conditional Adversarial Network approach to present a framework for change detection in images. The proposed network architecture is based on the pix2pix GAN framework. Extensive experimental results have shown that our proposed approach outperforms the other state-of-the-art change detection methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 179,480
1504.04103 | Faster Algorithms for Testing under Conditional Sampling | There has been considerable recent interest in distribution-tests whose run-time and sample requirements are sublinear in the domain-size $k$. We study two of the most important tests under the conditional-sampling model where each query specifies a subset $S$ of the domain, and the response is a sample drawn from $S$ according to the underlying distribution. For identity testing, which asks whether the underlying distribution equals a specific given distribution or $\epsilon$-differs from it, we reduce the known time and sample complexities from $\tilde{\mathcal{O}}(\epsilon^{-4})$ to $\tilde{\mathcal{O}}(\epsilon^{-2})$, thereby matching the information theoretic lower bound. For closeness testing, which asks whether two distributions underlying observed data sets are equal or different, we reduce existing complexity from $\tilde{\mathcal{O}}(\epsilon^{-4} \log^5 k)$ to an even sub-logarithmic $\tilde{\mathcal{O}}(\epsilon^{-5} \log \log k)$ thus providing a better bound to an open problem in Bertinoro Workshop on Sublinear Algorithms [Fisher, 2004]. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 42,103 |
1309.6851 | Treedy: A Heuristic for Counting and Sampling Subsets | Consider a collection of weighted subsets of a ground set N. Given a query subset Q of N, how fast can one (1) find the weighted sum over all subsets of Q, and (2) sample a subset of Q proportionally to the weights? We present a tree-based greedy heuristic, Treedy, that for a given positive tolerance d answers such counting and sampling queries to within a guaranteed relative error d and total variation distance d, respectively. Experimental results on artificial instances and in application to Bayesian structure discovery in Bayesian networks show that approximations yield dramatic savings in running time compared to exact computation, and that Treedy typically outperforms a previously proposed sorting-based heuristic. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 27,313 |
2201.08559 | Individual Treatment Effect Estimation Through Controlled Neural Network
Training in Two Stages | We develop a Causal-Deep Neural Network (CDNN) model trained in two stages to infer causal impact estimates at an individual unit level. Using only the pre-treatment features in stage 1 in the absence of any treatment information, we learn an encoding for the covariates that best represents the outcome. In the $2^{nd}$ stage we further seek to predict the unexplained outcome from stage 1, by introducing the treatment indicator variables alongside the encoded covariates. We prove that even without explicitly computing the treatment residual, our method still satisfies the desirable local Neyman orthogonality, making it robust to small perturbations in the nuisance parameters. Furthermore, by establishing connections with the representation learning approaches, we create a framework from which multiple variants of our algorithm can be derived. We perform initial experiments on the publicly available data sets to compare these variants and get guidance in selecting the best variant of our CDNN method. On evaluating CDNN against the state-of-the-art approaches on three benchmarking datasets, we observe that CDNN is highly competitive and often yields the most accurate individual treatment effect estimates. We highlight the strong merits of CDNN in terms of its extensibility to multiple use cases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 276,383 |
1909.01587 | Topological Coding and Topological Matrices Toward Network Overall
Security | A mathematical topology with a matrix is a natural representation of a coding relational structure that is found in many fields of the world. Matrices are very important in computations for real applications, since matrices are easily saved in computers and run quickly, and matrices are convenient for dealing with communities of current networks, such as Laplacian matrices and adjacency matrices in graph theory. Motivated by the convenient, useful and powerful matrices used in the computation and investigation of today's networks, we have introduced Topcode-matrices, which are matrices of order $3\times q$ and differ from popular matrices applied in linear algebra and computer science. Topcode-matrices can use numbers, letters, Chinese characters, sets, graphs, algebraic groups \emph{etc.} as their elements. One important thing is that Topcode-matrices of numbers can easily derive number strings, since number strings are text-based passwords used in information security. Topcode-matrices can be used to describe topological graphic passwords (Topsnut-gpws) used in information security and graph connected properties for solving some problems arising in the investigation of Graph Networks and Graph Neural Networks proposed by GoogleBrain and DeepMind. Our topics, in this article, are: Topsnut-matrices, Topcode-matrices, Hanzi-matrices, adjacency ve-value matrices and pan-Topcode-matrices, and some connections between these Topcode-matrices will be proven. We will discuss algebraic groups obtained from the above matrices, graph groups, graph networking groups and number string groups for encrypting different communities of dynamic networks. The operations and results on our matrices help us to set up our overall security mechanism to protect networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 143,949
2407.12331 | I2AM: Interpreting Image-to-Image Latent Diffusion Models via
Attribution Maps | Large-scale diffusion models have made significant advancements in the field of image generation, especially through the use of cross-attention mechanisms that guide image formation based on textual descriptions. While the analysis of text-guided cross-attention in diffusion models has been extensively studied in recent years, its application in image-to-image diffusion models remains underexplored. This paper introduces the Image-to-Image Attribution Maps (I2AM) method, which aggregates patch-level cross-attention scores to enhance the interpretability of latent diffusion models across time steps, heads, and attention layers. I2AM facilitates detailed image-to-image attribution analysis, enabling observation of how diffusion models prioritize key features over time steps and heads during the image generation process from reference images. Through extensive experiments, we first visualize the attribution maps of both generated and reference images, verifying that critical information from the reference image is effectively incorporated into the generated image, and vice versa. To further assess our understanding, we introduce a new evaluation metric tailored for reference-based image inpainting tasks. This metric, measuring the consistency between the attribution maps of generated and reference images, shows a strong correlation with established performance metrics for inpainting tasks, validating the potential use of I2AM in future research endeavors. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,875
1905.10158 | Preventing wind turbine tower natural frequency excitation with a
quasi-LPV model predictive control scheme | With the ever increasing power rates of wind turbines, more advanced control techniques are needed to facilitate tall towers that are low-weight and cost effective, but in effect more flexible. Such soft-soft tower configurations generally have their fundamental side-side frequency in the below-rated operational domain. Because the turbine rotor practically has or develops a mass imbalance over time, a periodic and rotor-speed dependent side-side excitation is present during below-rated operation. Persistent operation at the coinciding tower and rotational frequency degrades the expected structural life span. To reduce this effect, earlier work has shown the effectiveness of active tower damping control strategies using collective pitch control. A more passive approach is frequency skipping by inclusion of speed exclusion zones, which avoids prolonged operation near the critical frequency. However, neither of the methods incorporate a convenient way of performing a trade-off between energy maximization and fatigue load minimization. Therefore, this paper introduces a quasi-linear parameter varying model predictive control (qLPV-MPC) scheme, exploiting the beneficial (convex) properties of a qLPV system description. The qLPV model is obtained by a demodulation transformation, and is subsequently augmented with a simple wind turbine model. Results show the effectiveness of the algorithm in synthetic and realistic simulations using the NREL 5-MW reference wind turbine in high-fidelity simulation code. Prolonged rotor speed operation at the tower side-side natural frequency is prevented, whereas when the trade-off is in favor of energy production, the algorithm decides to rapidly pass over the natural frequency to attain higher rotor speeds and power productions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 131,980 |
2405.14365 | JiuZhang3.0: Efficiently Improving Mathematical Reasoning by Training
Small Data Synthesis Models | Mathematical reasoning is an important capability of large language models (LLMs) for real-world applications. To enhance this capability, existing work either collects large-scale math-related texts for pre-training, or relies on stronger LLMs (e.g., GPT-4) to synthesize massive math problems. Both types of work generally lead to large costs in training or synthesis. To reduce the cost, based on open-source available texts, we propose an efficient way that trains a small LLM for math problem synthesis, to efficiently generate sufficient high-quality pre-training data. To achieve it, we create a dataset using GPT-4 to distill its data synthesis capability into the small LLM. Concretely, we craft a set of prompts based on human education stages to guide GPT-4, to synthesize problems covering diverse math knowledge and difficulty levels. Besides, we adopt the gradient-based influence estimation method to select the most valuable math-related texts. Both are fed into GPT-4 for creating the knowledge distillation dataset to train the small LLM. We leverage it to synthesize 6 million math problems for pre-training our JiuZhang3.0 model, which only needs to invoke the GPT-4 API 9.3k times and pre-train on 4.6B data. Experimental results have shown that JiuZhang3.0 achieves state-of-the-art performance on several mathematical reasoning datasets, under both natural language reasoning and tool manipulation settings. Our code and data will be publicly released in \url{https://github.com/RUCAIBox/JiuZhang3.0}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 456,388
2111.10780 | FCOSR: A Simple Anchor-free Rotated Detector for Aerial Object Detection | Existing anchor-based oriented object detection methods have achieved amazing results, but these methods require some manual preset boxes, which introduces additional hyperparameters and calculations. The existing anchor-free methods usually have complex architectures and are not easy to deploy. Our goal is to propose an algorithm which is simple and easy-to-deploy for aerial image detection. In this paper, we present a one-stage anchor-free rotated object detector (FCOSR) based on FCOS, which can be deployed on most platforms. The FCOSR has a simple architecture consisting of only convolution layers. Our work focuses on the label assignment strategy for the training phase. We use ellipse center sampling method to define a suitable sampling region for oriented bounding box (OBB). The fuzzy sample assignment strategy provides reasonable labels for overlapping objects. To solve the insufficient sampling problem, a multi-level sampling module is designed. These strategies allocate more appropriate labels to training samples. Our algorithm achieves 79.25, 75.41, and 90.15 mAP on DOTA1.0, DOTA1.5, and HRSC2016 datasets, respectively. FCOSR demonstrates superior performance to other methods in single-scale evaluation. We convert a lightweight FCOSR model to TensorRT format, which achieves 73.93 mAP on DOTA1.0 at a speed of 10.68 FPS on Jetson Xavier NX with single scale. The code is available at: https://github.com/lzh420202/FCOSR | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,446
2201.13302 | Eris: Measuring discord among multidimensional data sources | Data integration is a classical problem in databases, typically decomposed into schema matching, entity matching and data fusion. To solve the latter, it is mostly assumed that ground truth can be determined. However, in general, the data gathering processes in the different sources are imperfect and cannot provide an accurate merging of values. Thus, in the absence of ways to determine ground truth, it is important to at least quantify how far from being internally consistent a dataset is. Hence, we propose definitions of concordant data and define a discordance metric as a way of measuring disagreement to improve decision making based on trustworthiness. We define the discord measurement problem of numerical attributes in which given a set of uncertain raw observations or aggregate results (such as case/hospitalization/death data relevant to COVID-19) and information on the alignment of different conceptualizations of the same reality (e.g., granularities or units), we wish to assess whether the different sources are concordant, or if not, use the discordance metric to quantify how discordant they are. We also define a set of algebraic operators to describe the alignments of different data sources with correctness guarantees, together with two alternative relational database implementations that reduce the problem to linear or quadratic programming. These are evaluated against both COVID-19 and synthetic data, and our experimental results show that discordance measurement can be performed efficiently in realistic situations. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 277,941 |
2411.11006 | BackdoorMBTI: A Backdoor Learning Multimodal Benchmark Tool Kit for
Backdoor Defense Evaluation | We introduce BackdoorMBTI, the first backdoor learning toolkit and benchmark designed for multimodal evaluation across three representative modalities from eleven commonly used datasets. BackdoorMBTI provides a systematic backdoor learning pipeline, encompassing data processing, data poisoning, backdoor training, and evaluation. The generated poison datasets and backdoor models enable detailed evaluation of backdoor defense methods. Given the diversity of modalities, BackdoorMBTI facilitates systematic evaluation across different data types. Furthermore, BackdoorMBTI offers a standardized approach to handling practical factors in backdoor learning, such as issues related to data quality and erroneous labels. We anticipate that BackdoorMBTI will expedite future research in backdoor defense methods within a multimodal context. Code is available at https://anonymous.4open.science/r/BackdoorMBTI-D6A1/README.md. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 508,889 |
2109.07359 | Modular Neural Ordinary Differential Equations | The laws of physics have been written in the language of differential equations for centuries. Neural Ordinary Differential Equations (NODEs) are a new machine learning architecture which allows these differential equations to be learned from a dataset. These have been applied to classical dynamics simulations in the form of Lagrangian Neural Networks (LNNs) and Second Order Neural Differential Equations (SONODEs). However, they either cannot represent the most general equations of motion or lack interpretability. In this paper, we propose Modular Neural ODEs, where each force component is learned with separate modules. We show how physical priors can be easily incorporated into these models. Through a number of experiments, we demonstrate that these result in better performance, are more interpretable, and add flexibility due to their modularity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 255,494
1609.05158 | Real-Time Single Image and Video Super-Resolution Using an Efficient
Sub-Pixel Convolutional Neural Network | Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 61,086 |
1808.03147 | A New Optimization Layer for Real-Time Bidding Advertising Campaigns | While it is relatively easy to start an online advertising campaign, obtaining a high Key Performance Indicator (KPI) can be challenging. A large body of work on this subject has already been performed and platforms known as DSPs are available on the market that deal with such an optimization. From the advertiser's point of view, each DSP is a different black box, with its pros and cons, that needs to be configured. In order to take advantage of the pros of every DSP, advertisers are well-advised to use a combination of them when setting up their campaigns. In this paper, we propose an algorithm for advertisers to add an optimization layer on top of DSPs. The algorithm we introduce, called SKOTT, maximizes the chosen KPI by optimally configuring the DSPs and putting them in competition with each other. SKOTT is a highly specialized iterative algorithm loosely based on gradient descent that is made up of three independent sub-routines, each dealing with a different problem: partitioning the budget, setting the desired average bid, and preventing under-delivery. In particular, one of the novelties of our approach lies in our taking the perspective of the advertisers rather than the DSPs. Synthetic market data is used to evaluate the efficiency of SKOTT against other state-of-the-art approaches adapted from similar problems. The results illustrate the benefits of our proposals, which greatly outperforms the other methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 104,887 |
2307.02641 | Active Class Selection for Few-Shot Class-Incremental Learning | For real-world applications, robots will need to continually learn in their environments through limited interactions with their users. Toward this, previous works in few-shot class incremental learning (FSCIL) and active class selection (ACS) have achieved promising results but were tested in constrained setups. Therefore, in this paper, we combine ideas from FSCIL and ACS to develop a novel framework that can allow an autonomous agent to continually learn new objects by asking its users to label only a few of the most informative objects in the environment. To this end, we build on a state-of-the-art (SOTA) FSCIL model and extend it with techniques from ACS literature. We term this model Few-shot Incremental Active class SeleCtiOn (FIASco). We further integrate a potential field-based navigation technique with our model to develop a complete framework that can allow an agent to process and reason on its sensory data through the FIASco model, navigate towards the most informative object in the environment, gather data about the object through its sensors and incrementally update the FIASco model. Experimental results on a simulated agent and a real robot show the significance of our approach for long-term real-world robotics applications. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 377,757 |
2203.05836 | Efficient and Robust Semantic Mapping for Indoor Environments | A key proficiency an autonomous mobile robot must have to perform high-level tasks is a strong understanding of its environment. This involves information about what types of objects are present, where they are, what their spatial extent is, and how they can be reached, i.e., information about free space is also crucial. Semantic maps are a powerful instrument providing such information. However, applying semantic segmentation and building 3D maps with high spatial resolution is challenging given limited resources on mobile robots. In this paper, we incorporate semantic information into efficient occupancy normal distribution transform (NDT) maps to enable real-time semantic mapping on mobile robots. On the publicly available dataset Hypersim, we show that, due to their sub-voxel accuracy, semantic NDT maps are superior to other approaches. We compare them to the recent state-of-the-art approach based on voxels and semantic Bayesian spatial kernel inference (S-BKI) and to an optimized version of it derived in this paper. The proposed semantic NDT maps can represent semantics to the same level of detail, while mapping is 2.7 to 17.5 times faster. For the same grid resolution, they perform significantly better, while mapping is up to more than 5 times faster. Finally, we prove the real-world applicability of semantic NDT maps with qualitative results in a domestic application. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 284,931
2411.11069 | Skeleton-Guided Spatial-Temporal Feature Learning for Video-Based Visible-Infrared Person Re-Identification | Video-based visible-infrared person re-identification (VVI-ReID) is challenging due to significant modality feature discrepancies. Spatial-temporal information in videos is crucial, but the accuracy of spatial-temporal information is often influenced by issues like low quality and occlusions in videos. Existing methods mainly focus on reducing modality differences, but pay limited attention to improving spatial-temporal features, particularly for infrared videos. To address this, we propose a novel Skeleton-guided spatial-Temporal feAture leaRning (STAR) method for VVI-ReID. By using skeleton information, which is robust to issues such as poor image quality and occlusions, STAR improves the accuracy of spatial-temporal features in videos of both modalities. Specifically, STAR employs two levels of skeleton-guided strategies: frame level and sequence level. At the frame level, the robust structured skeleton information is used to refine the visual features of individual frames. At the sequence level, we design a feature aggregation mechanism based on skeleton key points graph, which learns the contribution of different body parts to spatial-temporal features, further enhancing the accuracy of global features. Experiments on benchmark datasets demonstrate that STAR outperforms state-of-the-art methods. Code will be open source soon. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,907
1611.02247 | Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic | Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym's MuJoCo continuous control environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,528 |
2410.13490 | Novelty-based Sample Reuse for Continuous Robotics Control | In reinforcement learning, agents collect state information and rewards through environmental interactions, essential for policy refinement. This process is notably time-consuming, especially in complex robotic simulations and real-world applications. Traditional algorithms usually re-engage with the environment after processing a single batch of samples, thereby failing to fully capitalize on historical data. However, frequently observed states, with reliable value estimates, require minimal updates; in contrast, rarely observed states necessitate more intensive updates for achieving accurate value estimations. To address uneven sample utilization, we propose Novelty-guided Sample Reuse (NSR). NSR provides extra updates for infrequent, novel states and skips additional updates for frequent states, maximizing sample use before interacting with the environment again. Our experiments show that NSR improves the convergence rate and success rate of algorithms without significantly increasing time consumption. Our code is publicly available at https://github.com/ppksigs/NSR-DDPG-HER. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 499,555
2304.04095 | A Simple Proof of the Mixing of Metropolis-Adjusted Langevin Algorithm under Smoothness and Isoperimetry | We study the mixing time of Metropolis-Adjusted Langevin algorithm (MALA) for sampling a target density on $\mathbb{R}^d$. We assume that the target density satisfies $\psi_\mu$-isoperimetry and that the operator norm and trace of its Hessian are bounded by $L$ and $\Upsilon$ respectively. Our main result establishes that, from a warm start, to achieve $\epsilon$-total variation distance to the target density, MALA mixes in $O\left(\frac{(L\Upsilon)^{\frac12}}{\psi_\mu^2} \log\left(\frac{1}{\epsilon}\right)\right)$ iterations. Notably, this result holds beyond the log-concave sampling setting and the mixing time depends on only $\Upsilon$ rather than its upper bound $L d$. In the $m$-strongly logconcave and $L$-log-smooth sampling setting, our bound recovers the previous minimax mixing bound of MALA~\cite{wu2021minimax}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 357,073
2310.12303 | Document-Level Language Models for Machine Translation | Despite the known limitations, most machine translation systems today still operate on the sentence-level. One reason for this is that most parallel training data is only sentence-level aligned, without document-level meta information available. In this work, we set out to build context-aware translation systems utilizing document-level monolingual data instead. This can be achieved by combining any existing sentence-level translation model with a document-level language model. We improve existing approaches by leveraging recent advancements in model combination. Additionally, we propose novel weighting techniques that make the system combination more flexible and significantly reduce computational overhead. In a comprehensive evaluation on four diverse translation tasks, we show that our extensions improve document-targeted scores substantially and are also computationally more efficient. However, we also find that in most scenarios, back-translation gives even better results, at the cost of having to re-train the translation system. Finally, we explore language model fusion in the light of recent advancements in large language models. Our findings suggest that there might be strong potential in utilizing large language models via model combination. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 400,966
2404.02429 | AD4RL: Autonomous Driving Benchmarks for Offline Reinforcement Learning with Value-based Dataset | Offline reinforcement learning has emerged as a promising technology by enhancing its practicality through the use of pre-collected large datasets. Despite its practical benefits, most algorithm development research in offline reinforcement learning still relies on game tasks with synthetic datasets. To address such limitations, this paper provides autonomous driving datasets and benchmarks for offline reinforcement learning research. We provide 19 datasets, including real-world human driver's datasets, and seven popular offline reinforcement learning algorithms in three realistic driving scenarios. We also provide a unified decision-making process model that can operate effectively across different scenarios, serving as a reference framework in algorithm design. Our research lays the groundwork for further collaborations in the community to explore practical aspects of existing reinforcement learning methods. Dataset and codes can be found in https://sites.google.com/view/ad4rl. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 443,853