id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1707.02469 | Tailoring Artificial Neural Networks for Optimal Learning | As one of the most important paradigms of recurrent neural networks, the echo state network (ESN) has been applied to a wide range of fields, from robotics to medicine, finance, and language processing. A key feature of the ESN paradigm is its reservoir --- a directed and weighted network of neurons that projects the input time series into a high dimensional space where linear regression or classification can be applied. Despite extensive studies, the impact of the reservoir network on the ESN performance remains unclear. Combining tools from physics, dynamical systems and network science, we attempt to open the black box of ESN and offer insights to understand the behavior of general artificial neural networks. Through spectral analysis of the reservoir network we reveal a key factor that largely determines the ESN memory capacity and hence affects its performance. Moreover, we find that adding short loops to the reservoir network can tailor ESN for specific tasks and optimize learning. We validate our findings by applying ESN to forecast both synthetic and real benchmark time series. Our results provide a new way to design task-specific ESN. More importantly, they demonstrate the power of combining tools from physics, dynamical systems and network science to offer new insights in understanding the mechanisms of general artificial neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 76,710 |
1906.01118 | Implication Avoiding Dynamics for Externally Observed Networks | Previous network models have imagined that connections change to promote structural balance, or to reflect hierarchies. We propose a model where agents adjust their connections to appear credible to an external observer. In particular, we envision a signed, directed network where positive edges represent endorsements or trust and negative edges represent accusations or doubt, and consider both the strategies an external observer might use to identify credible nodes and the strategies nodes might use to then appear credible by changing their outgoing edges. First, we establish that an external observer may be able to exactly identify a set of 'honest' nodes from an adversarial set of 'cheater' nodes regardless of the 'cheater' nodes' connections. However, while these results show that an external observer's task is not hopeless, some of these theorems involve network structures that are NP-hard to find. Instead, we suggest a simple heuristic that an external observer might use to identify which nodes are not credible based upon their involvement with particular implicating edge motifs. Building on these notions, and analogously to some models of structural balance, we develop a discrete time dynamical system where nodes engage in implication avoiding dynamics, where inconsistent arrangements of edges that cause a node to look 'suspicious' exert pressure for that node to change edges. We demonstrate that these dynamics provide a new way to understand group fracture when nodes are worried about appearing consistent to an external observer. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 133,596 |
2110.09148 | Body Part Regression for CT Images | One of the greatest challenges in the medical imaging domain is to successfully transfer deep learning models into clinical practice. Since models are often trained on a specific body region, a robust transfer into the clinic necessitates the selection of images with body regions that fit the algorithm to avoid false-positive predictions in unknown regions. Due to the insufficient and inaccurate nature of manually-defined imaging meta-data, automated body part recognition is a key ingredient towards the broad and reliable adoption of medical deep learning models. While some approaches to this task have been presented in the past, building and evaluating robust algorithms for fine-grained body part recognition remains challenging. So far, no easy-to-use method exists to determine the scanned body range of medical Computed Tomography (CT) volumes. In this thesis, a self-supervised body part regression model for CT volumes is developed and trained on a heterogeneous collection of CT studies. Furthermore, it is demonstrated how the algorithm can contribute to the robust and reliable transfer of medical models into the clinic. Finally, easy application of the developed method is ensured by integrating it into the medical platform toolkit Kaapana and providing it as a python package at https://github.com/MIC-DKFZ/BodyPartRegression . | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 261,698 |
2410.05239 | TuneVLSeg: Prompt Tuning Benchmark for Vision-Language Segmentation Models | Vision-Language Models (VLMs) have shown impressive performance in vision tasks, but adapting them to new domains often requires expensive fine-tuning. Prompt tuning techniques, including textual, visual, and multimodal prompting, offer efficient alternatives by leveraging learnable prompts. However, their application to Vision-Language Segmentation Models (VLSMs) and evaluation under significant domain shifts remain unexplored. This work presents an open-source benchmarking framework, TuneVLSeg, to integrate various unimodal and multimodal prompt tuning techniques into VLSMs, making prompt tuning usable for downstream segmentation datasets with any number of classes. TuneVLSeg includes $6$ prompt tuning strategies at various prompt depths used in $2$ VLSMs, totaling $8$ different combinations. We test these prompt tuning strategies on $8$ diverse medical datasets, including $3$ radiology datasets (breast tumor, echocardiograph, chest X-ray pathologies) and $5$ non-radiology datasets (polyp, ulcer, skin cancer), and two natural domain segmentation datasets. Our study found that textual prompt tuning struggles under significant domain shifts, from natural-domain images to medical data. Furthermore, visual prompt tuning, with fewer hyperparameters than multimodal prompt tuning, often achieves performance competitive with multimodal approaches, making it a valuable first attempt. Our work advances the understanding and applicability of different prompt-tuning techniques for robust domain-specific segmentation. The source code is available at https://github.com/naamiinepal/tunevlseg. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 495,627 |
2202.11865 | Using calibrator to improve robustness in Machine Reading Comprehension | Machine Reading Comprehension (MRC) has achieved remarkable results since powerful models such as BERT were proposed. However, these models are not robust enough and are vulnerable to adversarial input perturbations and generalization examples. Some works have tried to improve performance on specific types of data by adding related examples to the training data, but this leads to degradation on the original dataset, because the shift in data distribution makes answer ranking based on the softmax probability of the model unreliable. In this paper, we propose a method to improve robustness by using a calibrator as a post-hoc reranker, implemented with an XGBoost model. The calibrator combines both manual features and representation learning features to rerank candidate results. Experimental results on adversarial datasets show that our model can achieve a performance improvement of more than 10\% and also improves on the original and generalization datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 282,027 |
1810.04021 | Deep Geodesic Learning for Segmentation and Anatomical Landmarking | In this paper, we propose a novel deep learning framework for anatomy segmentation and automatic landmarking. Specifically, we focus on the challenging problem of mandible segmentation from cone-beam computed tomography (CBCT) scans and identification of 9 anatomical landmarks of the mandible on the geodesic space. The overall approach employs three inter-related steps. In step 1, we propose a deep neural network architecture with carefully designed regularization, and network hyper-parameters to perform image segmentation without the need for data augmentation and complex post-processing refinement. In step 2, we formulate the landmark localization problem directly on the geodesic space for sparsely-spaced anatomical landmarks. In step 3, we propose to use a long short-term memory (LSTM) network to identify closely-spaced landmarks, which is rather difficult to obtain using other standard detection networks. The proposed fully automated method showed superior efficacy compared to the state-of-the-art mandible segmentation and landmarking approaches in craniofacial anomalies and diseased states. We used a very challenging CBCT dataset of 50 patients with a high-degree of craniomaxillofacial (CMF) variability that is realistic in clinical practice. Complementary to the quantitative analysis, the qualitative visual inspection was conducted for distinct CBCT scans from 250 patients with high anatomical variability. We have also shown feasibility of the proposed work in an independent dataset from MICCAI Head-Neck Challenge (2015) achieving the state-of-the-art performance. Lastly, we present an in-depth analysis of the proposed deep networks with respect to the choice of hyper-parameters such as pooling and activation functions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 109,949 |
1804.00213 | Gated Fusion Network for Single Image Dehazing | In this paper, we propose an efficient algorithm to directly restore a clear image from a hazy input. The proposed algorithm hinges on an end-to-end trainable neural network that consists of an encoder and a decoder. The encoder is exploited to capture the context of the derived input images, while the decoder is employed to estimate the contribution of each input to the final dehazed result using the learned representations attributed to the encoder. The constructed network adopts a novel fusion-based strategy which derives three inputs from an original hazy image by applying White Balance (WB), Contrast Enhancing (CE), and Gamma Correction (GC). We compute pixel-wise confidence maps based on the appearance differences between these different inputs to blend the information of the derived inputs and preserve the regions with pleasant visibility. The final dehazed image is yielded by gating the important features of the derived inputs. To train the network, we introduce a multi-scale approach such that the halo artifacts can be avoided. Extensive experimental results on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against the state-of-the-art algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,963 |
2312.02407 | Robust Clustering using Hyperdimensional Computing | This paper addresses the clustering of data in the hyperdimensional computing (HDC) domain. In prior work, an HDC-based clustering framework, referred to as HDCluster, has been proposed. However, the performance of the existing HDCluster is not robust. The performance of HDCluster is degraded because the hypervectors for the clusters are chosen at random during the initialization step. To overcome this bottleneck, we assign the initial cluster hypervectors by exploring the similarity of the encoded data, referred to as \textit{query} hypervectors. Intra-cluster hypervectors have a higher similarity than inter-cluster hypervectors. Harnessing the similarity results among query hypervectors, this paper proposes four HDC-based clustering algorithms: similarity-based k-means, equal bin-width histogram, equal bin-height histogram, and similarity-based affinity propagation. Experimental results illustrate that: (i) Compared to the existing HDCluster, our proposed HDC-based clustering algorithms can achieve better accuracy, more robust performance, fewer iterations, and less execution time. Similarity-based affinity propagation outperforms the other three HDC-based clustering algorithms on eight datasets by 2~38% in clustering accuracy. (ii) Even for one-pass clustering, i.e., without any iterative update of the cluster hypervectors, our proposed algorithms can provide more robust clustering accuracy than HDCluster. (iii) For five of the eight datasets, clustering accuracy is higher or comparable when the data are projected onto the hyperdimensional space. Traditional clustering is more desirable than HDC when the number of clusters, $k$, is large. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 412,851 |
2501.15249 | An Automatic Sound and Complete Abstraction Method for Generalized Planning with Baggable Types | Generalized planning is concerned with how to find a single plan to solve multiple similar planning instances. Abstractions are widely used for solving generalized planning, and QNP (qualitative numeric planning) is a popular abstract model. Recently, Cui et al. showed that a plan solves a sound and complete abstraction of a generalized planning problem if and only if the refined plan solves the original problem. However, existing work on automatic abstraction for generalized planning can hardly guarantee soundness, let alone completeness. In this paper, we propose an automatic sound and complete abstraction method for generalized planning with baggable types. We use a variant of QNP, called bounded QNP (BQNP), where integer variables are increased or decreased by only one. Since BQNP is undecidable, we propose and implement a sound but incomplete solver for BQNP. We present an automatic method to abstract a BQNP problem from a classical planning instance with baggable types. The basic idea for abstraction is to introduce a counter for each bag of indistinguishable tuples of objects. We define a class of domains called proper baggable domains, and show that for such domains, the BQNP problem obtained by our automatic method is a sound and complete abstraction for a generalized planning problem whose instances share the same bags with the given instance but the sizes of the bags might be different. Thus, the refined plan of a solution to the BQNP problem is a solution to the generalized planning problem. Finally, we implement our abstraction method, and experiments on a number of domains demonstrate the promise of our approach. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 527,465 |
2406.02407 | WE-GS: An In-the-wild Efficient 3D Gaussian Representation for Unconstrained Photo Collections | Novel View Synthesis (NVS) from unconstrained photo collections is challenging in computer graphics. Recently, 3D Gaussian Splatting (3DGS) has shown promise for photorealistic and real-time NVS of static scenes. Building on 3DGS, we propose an efficient point-based differentiable rendering framework for scene reconstruction from photo collections. Our key innovation is a residual-based spherical harmonic coefficients transfer module that adapts 3DGS to varying lighting conditions and photometric post-processing. This lightweight module can be pre-computed and ensures efficient gradient propagation from rendered images to 3D Gaussian attributes. Additionally, we observe that the appearance encoder and the transient mask predictor, the two most critical parts of NVS from unconstrained photo collections, can be mutually beneficial. We introduce a plug-and-play lightweight spatial attention module to simultaneously predict transient occluders and latent appearance representation for each image. After training and preprocessing, our method aligns with the standard 3DGS format and rendering pipeline, facilitating seamless integration into various 3DGS applications. Extensive experiments on diverse datasets show our approach outperforms existing approaches on the rendering quality of novel view and appearance synthesis with high convergence and rendering speed. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 460,756 |
1712.00504 | A Neural Stochastic Volatility Model | In this paper, we show that the recent integration of statistical models with deep recurrent neural networks provides a new way of formulating volatility (the degree of variation of time series) models that have been widely used in time series analysis and prediction in finance. The model comprises a pair of complementary stochastic recurrent neural networks: the generative network models the joint distribution of the stochastic volatility process; the inference network approximates the conditional distribution of the latent variables given the observables. Our focus here is on the formulation of temporal dynamics of volatility over time under a stochastic recurrent neural network framework. Experiments on real-world stock price datasets demonstrate that the proposed model generates a better volatility estimation and prediction that outperforms mainstream methods, e.g., deterministic models such as GARCH and its variants, and stochastic models namely the MCMC-based model \emph{stochvol} as well as the Gaussian process volatility model \emph{GPVol}, on average negative log-likelihood. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 85,908 |
2105.06625 | Biometrics: Trust, but Verify | Over the past two decades, biometric recognition has exploded into a plethora of different applications around the globe. This proliferation can be attributed to the high levels of authentication accuracy and user convenience that biometric recognition systems afford end-users. However, in spite of the success of biometric recognition systems, there are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems that create an element of mistrust in their use - both by the scientific community and also the public at large. Some of these problems include: i) questions related to system recognition performance, ii) security (spoof attacks, adversarial attacks, template reconstruction attacks and demographic information leakage), iii) uncertainty over the bias and fairness of the systems to all users, iv) explainability of the seemingly black-box decisions made by most recognition systems, and v) concerns over data centralization and user privacy. In this paper, we provide an overview of each of the aforementioned open-ended challenges. We survey work that has been conducted to address each of these concerns and highlight the issues requiring further attention. Finally, we provide insights into how the biometric community can address core biometric recognition systems design issues to better instill trust, fairness, and security for all. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,182 |
1602.02063 | On the power of dominated players in team competitions | We investigate multi-round team competitions between two teams, where each team selects one of its players simultaneously in each round and each player can play at most once. The competition defines an extensive-form game with perfect recall and can be solved efficiently by standard methods. We are interested in the properties of the subgame perfect equilibria of this game. We first show that the uniformly random strategy is a subgame perfect equilibrium strategy for both teams when there are no redundant players (i.e., the number of players in each team equals the number of rounds of the competition). Secondly, a team can safely abandon its weak players if it has redundant players and the strength of players is transitive. We then focus on the more interesting case where there are redundant players and the strength of players is not transitive. In this case, we obtain several counterintuitive results. First of all, a player might help improve the payoff of its team, even if it is dominated by the entire other team. We give a necessary condition for a dominated player to be useful. We also study the extent to which the dominated players can increase the payoff. These results bring insights into playing and designing general team competitions. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 51,785 |
2005.04168 | Equilibrium Propagation with Continual Weight Updates | Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space. Given an input $x$ and associated target $y$, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards $y$ until they reach a second steady state. However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically. In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time. Such a learning rule local both in space and time opens the possibility of an extremely energy efficient hardware implementation of EP. We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1). We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections. We show through experiments that the more closely the network updates follow the gradients of BPTT, the better it performs in terms of training. These results bring EP a step closer to biology by better complying with hardware constraints while maintaining its intimate link with backpropagation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 176,379 |
2204.00470 | Continuous Integration of Data Histories into Consistent Namespaces | We describe a policy-based approach to the scaling of shared data services, using a hierarchy of calibrated data pipelines to automate the continuous integration of data flows. While there is no unique solution to the problem of time order, we show how to use a fair interleaving to reproduce reliable `latest version' semantics in a controlled way, by trading locality for temporal resolution. We thus establish an invariant global ordering from a spanning tree over all shards, with controlled scalability. This forms a versioned coordinate system (or versioned namespace) with consistent semantics and self-protecting rate-limited versioning, analogous to publish-subscribe addressing schemes for Content Delivery Network (CDN) or Name Data Networking (NDN) schemes. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 289,267 |
2501.09186 | Guiding Retrieval using LLM-based Listwise Rankers | Large Language Models (LLMs) have shown strong promise as rerankers, especially in ``listwise'' settings where an LLM is prompted to rerank several search results at once. However, this ``cascading'' retrieve-and-rerank approach is limited by the bounded recall problem: relevant documents not retrieved initially are permanently excluded from the final ranking. Adaptive retrieval techniques address this problem, but do not work with listwise rerankers because they assume a document's score is computed independently from other documents. In this paper, we propose an adaptation of an existing adaptive retrieval method that supports the listwise setting and helps guide the retrieval process itself (thereby overcoming the bounded recall problem for LLM rerankers). Specifically, our proposed algorithm merges results both from the initial ranking and feedback documents provided by the most relevant documents seen up to that point. Through extensive experiments across diverse LLM rerankers, first stage retrievers, and feedback sources, we demonstrate that our method can improve nDCG@10 by up to 13.23% and recall by 28.02%--all while keeping the total number of LLM inferences constant and overheads due to the adaptive process minimal. The work opens the door to leveraging LLM-based search in settings where the initial pool of results is limited, e.g., by legacy systems, or by the cost of deploying a semantic first-stage. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 525,046 |
1912.07629 | Learning Mixtures of Linear Regressions in Subexponential Time via Fourier Moments | We consider the problem of learning a mixture of linear regressions (MLRs). An MLR is specified by $k$ nonnegative mixing weights $p_1, \ldots, p_k$ summing to $1$, and $k$ unknown regressors $w_1,...,w_k\in\mathbb{R}^d$. A sample from the MLR is drawn by sampling $i$ with probability $p_i$, then outputting $(x, y)$ where $y = \langle x, w_i \rangle + \eta$, where $\eta\sim\mathcal{N}(0,\varsigma^2)$ for noise rate $\varsigma$. Mixtures of linear regressions are a popular generative model and have been studied extensively in machine learning and theoretical computer science. However, all previous algorithms for learning the parameters of an MLR require running time and sample complexity scaling exponentially with $k$. In this paper, we give the first algorithm for learning an MLR that runs in time which is sub-exponential in $k$. Specifically, we give an algorithm which runs in time $\widetilde{O}(d)\cdot\exp(\widetilde{O}(\sqrt{k}))$ and outputs the parameters of the MLR to high accuracy, even in the presence of nontrivial regression noise. We demonstrate a new method that we call "Fourier moment descent" which uses univariate density estimation and low-degree moments of the Fourier transform of suitable univariate projections of the MLR to iteratively refine our estimate of the parameters. To the best of our knowledge, these techniques have never been used in the context of high dimensional distribution learning, and may be of independent interest. We also show that our techniques can be used to give a sub-exponential time algorithm for learning mixtures of hyperplanes, a natural hard instance of the subspace clustering problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 157,643 |
2110.02915 | Unrolling Particles: Unsupervised Learning of Sampling Distributions | Particle filtering is used to compute good nonlinear estimates of complex systems. It samples trajectories from a chosen distribution and computes the estimate as a weighted average. Easy-to-sample distributions often lead to degenerate samples where only one trajectory carries all the weight, negatively affecting the resulting performance of the estimate. While much research has been done on the design of appropriate sampling distributions that would lead to controlled degeneracy, in this paper our objective is to \emph{learn} sampling distributions. Leveraging the framework of algorithm unrolling, we model the sampling distribution as a multivariate normal, and we use neural networks to learn both the mean and the covariance. We carry out unsupervised training of the model to minimize weight degeneracy, relying only on the observed measurements of the system. We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 259,302 |
2201.13157 | Equivariant neural networks for recovery of Hadamard matrices | We propose a message passing neural network architecture designed to be equivariant to column and row permutations of a matrix. We illustrate its advantages over traditional architectures like multi-layer perceptrons (MLPs), convolutional neural networks (CNNs) and even Transformers, on the combinatorial optimization task of recovering a set of deleted entries of a Hadamard matrix. We argue that this is a powerful application of the principles of Geometric Deep Learning to fundamental mathematics, and a potential stepping stone toward more insights on the Hadamard conjecture using Machine Learning techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 277,898 |
2005.05092 | Active Training of Physics-Informed Neural Networks to Aggregate and Interpolate Parametric Solutions to the Navier-Stokes Equations | The goal of this work is to train a neural network which approximates solutions to the Navier-Stokes equations across a region of parameter space, in which the parameters define physical properties such as domain shape and boundary conditions. The contributions of this work are threefold: 1) To demonstrate that neural networks can be efficient aggregators of whole families of parametric solutions to physical problems, trained using data created with traditional, trusted numerical methods such as finite elements. Advantages include extremely fast evaluation of pressure and velocity at any point in physical and parameter space (asymptotically, ~3 $\mu s$ / query), and data compression (the network requires 99\% less storage space compared to its own training data). 2) To demonstrate that the neural networks can accurately interpolate between finite element solutions in parameter space, allowing them to be instantly queried for pressure and velocity field solutions to problems for which traditional simulations have never been performed. 3) To introduce an active learning algorithm, so that during training, a finite element solver can automatically be queried to obtain additional training data in locations where the neural network's predictions are in most need of improvement, thus autonomously acquiring and efficiently distributing training data throughout parameter space. In addition to the obvious utility of Item 2, above, we demonstrate an application of the network in rapid parameter sweeping, very precisely predicting the degree of narrowing in a tube which would result in a 50\% increase in end-to-end pressure difference at a given flow rate. This capability could have applications in both medical diagnosis of arterial disease, and in computer-aided design. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,643 |
2405.06933 | Syndrome-based Fusion Rules in Heterogeneous Distributed Quickest Change Detection | In this paper, the heterogeneous distributed quickest change detection (HetDQCD) with 1-bit non-anonymous feedback is studied. The concept of syndromes is introduced and the family of syndrome-based fusion rules is proposed, which encompasses all deterministic fusion rules as special cases. Through the Hasse diagram of syndromes, upper and lower bounds on the second-order performance of expected detection delay as a function of average run length to false alarm are provided. An interesting instance, the weighted voting rule previously proposed in our prior work, is then revisited, for which an efficient pruning method for breadth-first search in the Hasse diagram is proposed to analyze the performance. This in turn assists in the design of the weight threshold in the weighted voting rule. Simulation results corroborate that our analysis is instrumental in identifying a proper design for the weighted voting rule, demonstrating consistent superiority over both the anonymous voting rule and the group selection rule in HetDQCD. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 453,513 |
2210.06686 | Real Spike: Learning Real-valued Spikes for Spiking Neural Networks | Brain-inspired spiking neural networks (SNNs) have recently drawn more and more attention due to their event-driven and energy-efficient characteristics. The integrated storage-and-computation paradigm of neuromorphic hardware makes SNNs much different from Deep Neural Networks (DNNs). In this paper, we argue that SNNs may not benefit from the weight-sharing mechanism, which can effectively reduce parameters and improve inference efficiency in DNNs, on some hardware, and assume that an SNN with unshared convolution kernels could perform better. Motivated by this assumption, a training-inference decoupling method for SNNs named Real Spike is proposed, which not only enjoys both unshared convolution kernels and binary spikes at inference time but also maintains both shared convolution kernels and Real-valued Spikes during training. This decoupling mechanism of SNN is realized by a re-parameterization technique. Furthermore, based on the training-inference-decoupled idea, a series of different forms for implementing Real Spike on different levels are presented, which also enjoy shared convolutions in the inference and are friendly to both neuromorphic and non-neuromorphic hardware platforms. A theoretical proof is given to clarify that the Real Spike-based SNN network is superior to its vanilla counterpart. Experimental results show that all different Real Spike versions can consistently improve the SNN performance. Moreover, the proposed method outperforms the state-of-the-art models on both non-spiking static and neuromorphic datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 323,394 |
2203.00734 | Knock, knock. Who's there? -- Identifying football player jersey numbers with synthetic data | Automatic player identification is an essential and complex task in sports video analysis. Different strategies have been devised over the years, but identification based on jersey numbers is one of the most common approaches given its versatility and relative simplicity. However, automatic detection of jersey numbers is still challenging due to changing camera angles, low video resolution, small object size in wide-range shots and transient changes in the player's posture and movement. In this paper we present a novel approach for jersey number identification in a small, highly imbalanced dataset from the Seattle Seahawks practice videos. Our results indicate that simple models can achieve an acceptable performance on the jersey number detection task and that synthetic data can improve the performance dramatically (accuracy increase of ~9% overall, ~18% on low frequency numbers) making our approach achieve state of the art results. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,096 |
2109.07322 | DeFungi: Direct Mycological Examination of Microscopic Fungi Images | Traditionally, diagnosis and treatment of fungal infections in humans depend heavily on face-to-face consultations or examinations made by specialized laboratory scientists known as mycologists. In many cases, such as the recent mucormycosis spread in the COVID-19 pandemic, an initial treatment can be safely suggested to the patient during the earliest stage of the mycological diagnostic process by performing a direct examination of biopsies or samples through a microscope. Computer-aided diagnosis systems using deep learning models have been trained and used for the late mycological diagnostic stages. However, there are no reference literature works made for the early stages. A mycological laboratory in Colombia donated the images used for the development of this research work. They were manually labelled into five classes and curated with a subject matter expert's assistance. The images were later cropped and patched with automated code routines to produce the final dataset. This paper presents experimental results classifying five fungi types using two different deep learning approaches and three different convolutional neural network models, VGG16, Inception V3, and ResNet50. The first approach benchmarks the classification performance for the models trained from scratch, while the second approach benchmarks the classification performance using pre-trained models based on the ImageNet dataset. Using k-fold cross-validation testing on the 5-class dataset, the best performing model trained from scratch was Inception V3, reporting 73.2% accuracy. Also, the best performing model using transfer learning was VGG16 reporting 85.04%. The statistics provided by the two approaches create an initial point of reference to encourage future research works to improve classification performance. Furthermore, the dataset built is published in Kaggle and GitHub to foster future research. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 255,480 |
2011.11090 | Cross-Domain Generalization Through Memorization: A Study of Nearest Neighbors in Neural Duplicate Question Detection | Duplicate question detection (DQD) is important to increase efficiency of community and automatic question answering systems. Unfortunately, gathering supervised data in a domain is time-consuming and expensive, and our ability to leverage annotations across domains is minimal. In this work, we leverage neural representations and study nearest neighbors for cross-domain generalization in DQD. We first encode question pairs of the source and target domain in a rich representation space and then using a k-nearest neighbour retrieval-based method, we aggregate the neighbors' labels and distances to rank pairs. We observe robust performance of this method in different cross-domain scenarios of StackExchange, Spring and Quora datasets, outperforming cross-entropy classification in multiple cases. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 207,717 |
1602.02009 | Computing with hardware neurons: spiking or classical? Perspectives of applied Spiking Neural Networks from the hardware side | While classical neural networks hold a leading position in the machine learning community, spiking neuromorphic systems attract attention and large projects in neuroscience. Spiking neural networks have been shown to be able to substitute for networks of classical neurons in applied tasks. This work explores recent hardware designs focusing on prospective applications (like convolutional neural networks) for both neuron types from the energy-efficiency side, to analyse whether spiking neuromorphic hardware has the potential to grow into wider use. Our comparison shows that, at the level of basic operations, spiking hardware is at least as energy efficient as non-spiking hardware, or even more so. However, at the system level, spiking systems are outmatched and consume much more energy due to inefficient data representation with long series of spikes. If spike-driven applications minimizing the number of spikes are developed, spiking neural systems may reach the energy-efficiency level of classical neural systems. In the near future, both types of neuromorphic system may benefit from emerging memory technologies, minimizing the energy consumption of computation and memory for both neuron types; that would make infrastructure and data transfer dominate energy use at the system level. We expect that spiking neurons have some benefits that would allow achieving better energy results, but the number of spikes will remain the major bottleneck for spiking hardware systems. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 51,775 |
2112.04624 | Deep Molecular Representation Learning via Fusing Physical and Chemical Information | Molecular representation learning is the first yet vital step in combining deep learning and molecular science. To push the boundaries of molecular representation learning, we present PhysChem, a novel neural architecture that learns molecular representations via fusing physical and chemical information of molecules. PhysChem is composed of a physicist network (PhysNet) and a chemist network (ChemNet). PhysNet is a neural physical engine that learns molecular conformations through simulating molecular dynamics with parameterized forces; ChemNet implements geometry-aware deep message-passing to learn chemical / biomedical properties of molecules. Two networks specialize in their own tasks and cooperate by providing expertise to each other. By fusing physical and chemical information, PhysChem achieved state-of-the-art performances on MoleculeNet, a standard molecular machine learning benchmark. The effectiveness of PhysChem was further corroborated on cutting-edge datasets of SARS-CoV-2. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,585 |
1508.07590 | New Classes of Permutation Binomials and Permutation Trinomials over Finite Fields | Permutation polynomials over finite fields play important roles in finite field theory. They also have wide applications in many areas of science and engineering such as coding theory, cryptography, combinatorial design, communication theory and so on. Permutation binomials and trinomials attract interest due to their simple algebraic form and additional extraordinary properties. In this paper, several new classes of permutation binomials and permutation trinomials are constructed. Some of these permutation polynomials are generalizations of known ones. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 46,420 |
2112.02779 | Revisiting LiDAR Registration and Reconstruction: A Range Image Perspective | Spinning LiDAR data are prevalent for 3D vision tasks. Since LiDAR data is presented in the form of point clouds, expensive 3D operations are usually required. This paper revisits spinning LiDAR scan formation and presents a cylindrical range image representation with a ray-wise projection/unprojection model. It is built upon raw scans and supports lossless conversion from 2D to 3D, allowing fast 2D operations, including 2D index-based neighbor search and downsampling. We then propose, to the best of our knowledge, the first multi-scale registration and dense signed distance function (SDF) reconstruction system for LiDAR range images. We further collect a dataset of indoor and outdoor LiDAR scenes in the posed range image format. A comprehensive evaluation of registration and reconstruction is conducted on the proposed dataset and the KITTI dataset. Experiments demonstrate that our approach outperforms surface reconstruction baselines and achieves similar performance to state-of-the-art LiDAR registration methods, including a modern learning-based registration approach. Thanks to this simplicity, our registration runs at 100 Hz and SDF reconstruction runs in real time. The dataset and a modularized C++/Python toolbox will be released. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 269,957 |
2006.08231 | Differentiable Neural Architecture Transformation for Reproducible Architecture Improvement | Recently, Neural Architecture Search (NAS) methods have been introduced and show impressive performance on many benchmarks. Among those NAS studies, Neural Architecture Transformer (NAT) aims to improve the given neural architecture to have better performance while maintaining computational costs. However, NAT has limitations regarding a lack of reproducibility. In this paper, we propose a differentiable neural architecture transformation that is reproducible and efficient. The proposed method shows stable performance on various architectures. Extensive reproducibility experiments on two datasets, i.e., CIFAR-10 and Tiny ImageNet, show that the proposed method clearly outperforms NAT and is applicable to other models and datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 182,115 |
1911.06226 | Beyond Pairwise Comparisons in Social Choice: A Setwise Kemeny Aggregation Problem | In this paper, we advocate the use of setwise contests for aggregating a set of input rankings into an output ranking. We propose a generalization of the Kemeny rule where one minimizes the number of k-wise disagreements instead of pairwise disagreements (one counts 1 disagreement each time the top choice in a subset of alternatives of cardinality at most k differs between an input ranking and the output ranking). After an algorithmic study of this k-wise Kemeny aggregation problem, we introduce a k-wise counterpart of the majority graph. This graph proves useful for dividing the aggregation problem into several sub-problems, which enables speeding up the exact computation of a consensus ranking. By introducing a k-wise counterpart of the Spearman distance, we also provide a 2-approximation algorithm for the k-wise Kemeny aggregation problem. We conclude with numerical tests. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 153,486 |
2212.02500 | PhysDiff: Physics-Guided Human Motion Diffusion Model | Denoising diffusion models hold great promise for generating diverse and realistic human motions. However, existing motion diffusion models largely disregard the laws of physics in the diffusion process and often generate physically-implausible motions with pronounced artifacts such as floating, foot sliding, and ground penetration. This seriously impacts the quality of generated motions and limits their real-world application. To address this issue, we present a novel physics-guided motion diffusion model (PhysDiff), which incorporates physical constraints into the diffusion process. Specifically, we propose a physics-based motion projection module that uses motion imitation in a physics simulator to project the denoised motion of a diffusion step to a physically-plausible motion. The projected motion is further used in the next diffusion step to guide the denoising diffusion process. Intuitively, the use of physics in our model iteratively pulls the motion toward a physically-plausible space, which cannot be achieved by simple post-processing. Experiments on large-scale human motion datasets show that our approach achieves state-of-the-art motion quality and improves physical plausibility drastically (>78% for all datasets). | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 334,804 |
2306.11897 | Reinforcement Learning-based Virtual Fixtures for Teleoperation of Hydraulic Construction Machine | The utilization of teleoperation is a crucial aspect of the construction industry, as it enables operators to control machines safely from a distance. However, remote operation of these machines at a joint level using individual joysticks necessitates extensive training for operators to achieve proficiency due to their multiple degrees of freedom. Additionally, verifying the machine's resulting motion is only possible after execution, making optimal control challenging. To address this issue, this study proposes a reinforcement learning-based approach to optimize task performance. The control policy acquired through learning is used to provide instructions on efficiently controlling and coordinating multiple joints. To evaluate the effectiveness of the proposed framework, a user study is conducted with a Brokk 170 construction machine by assessing its performance in a typical construction task involving inserting a chisel into a borehole. The effectiveness of the proposed framework is evaluated by comparing the performance of participants in the presence and absence of virtual fixtures. This study's results demonstrate the proposed framework's potential in enhancing the teleoperation process in the construction industry. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 374,751 |
1506.00747 | Sensor placement by maximal projection on minimum eigenspace for linear inverse problems | This paper presents two new greedy sensor placement algorithms, named minimum nonzero eigenvalue pursuit (MNEP) and maximal projection on minimum eigenspace (MPME), for linear inverse problems, with greater emphasis on the MPME algorithm for performance comparison with existing approaches. We select the sensing locations one-by-one. In this way, the least number of required sensors can be determined by checking whether the estimation accuracy is satisfied after each sensing location is determined. The minimum eigenspace is defined as the eigenspace associated with the minimum eigenvalue of the dual observation matrix. For each sensing location, the projection of its observation vector onto the minimum eigenspace is shown to be monotonically decreasing w.r.t. the worst case error variance (WCEV) of the estimated parameters. We select the sensing location whose observation vector has the maximum projection onto the minimum eigenspace of the current dual observation matrix. The proposed MPME is shown to be one of the most computationally efficient algorithms. Our Monte-Carlo simulations showed that MPME outperforms the convex relaxation method [1], the SparSenSe method [2], and the FrameSense method [3] in terms of WCEV and the mean square error (MSE) of the estimated parameters, especially when the number of available sensor nodes is very limited. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 43,708 |
2108.09164 | A Neural Conversation Generation Model via Equivalent Shared Memory Investigation | Conversation generation as a challenging task in Natural Language Generation (NLG) has been increasingly attracting attention over the last years. A number of recent works adopted sequence-to-sequence structures along with external knowledge, which successfully enhanced the quality of generated conversations. Nevertheless, few works utilized the knowledge extracted from similar conversations for utterance generation. Taking conversations in customer service and court debate domains as examples, it is evident that essential entities/phrases, as well as their associated logic and inter-relationships can be extracted and borrowed from similar conversation instances. Such information could provide useful signals for improving conversation generation. In this paper, we propose a novel reading and memory framework called Deep Reading Memory Network (DRMN) which is capable of remembering useful information of similar conversations for improving utterance generation. We apply our model to two large-scale conversation datasets of justice and e-commerce fields. Experiments prove that the proposed model outperforms the state-of-the-art approaches. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 251,520 |
2303.09452 | Learning-Based Modeling of Human-Autonomous Vehicle Interaction for Improved Safety in Mixed-Vehicle Platooning Control | The rising presence of autonomous vehicles (AVs) on public roads necessitates the development of advanced control strategies that account for the unpredictable nature of human-driven vehicles (HVs). This study introduces a learning-based method for modeling HV behavior, combining a traditional first-principles approach with a Gaussian process (GP) learning component. This hybrid model enhances the accuracy of velocity predictions and provides measurable uncertainty estimates. We leverage this model to develop a GP-based model predictive control (GP-MPC) strategy to improve safety in mixed vehicle platoons by integrating uncertainty assessments into distance constraints. Comparative simulations between our GP-MPC approach and a conventional model predictive control (MPC) strategy reveal that the GP-MPC ensures safer distancing and more efficient travel within the mixed platoon. By incorporating sparse GP modeling for HVs and a dynamic GP prediction in MPC, we significantly reduce the computation time of GP-MPC, making it only marginally longer than standard MPC and approximately 100 times faster than previous models not employing these techniques. Our findings underscore the effectiveness of learning-based HV modeling in enhancing safety and efficiency in mixed-traffic environments involving AV and HV interactions. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 352,048 |
1811.01526 | Unsupervised RGBD Video Object Segmentation Using GANs | Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texture-based techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion-based moving object segmentation algorithm which exploits color as well as depth information using a GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, the GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and depth. The comparison of our proposed algorithm with five state-of-the-art methods on a publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 112,391 |
2208.08037 | LayoutFormer++: Conditional Graphic Layout Generation via Constraint Serialization and Decoding Space Restriction | Conditional graphic layout generation, which generates realistic layouts according to user constraints, is a challenging task that has not been well-studied yet. First, there is limited discussion about how to handle diverse user constraints flexibly and uniformly. Second, to make the layouts conform to user constraints, existing work often sacrifices generation quality significantly. In this work, we propose LayoutFormer++ to tackle the above problems. First, to flexibly handle diverse constraints, we propose a constraint serialization scheme, which represents different user constraints as sequences of tokens with a predefined format. Then, we formulate conditional layout generation as a sequence-to-sequence transformation, and leverage an encoder-decoder framework with Transformer as the basic architecture. Furthermore, to make the layout better meet user requirements without harming quality, we propose a decoding space restriction strategy. Specifically, we prune the predicted distribution by ignoring the options that definitely violate user constraints and likely result in low-quality layouts, and make the model sample from the restricted distribution. Experiments demonstrate that LayoutFormer++ outperforms existing approaches on all the tasks in terms of both better generation quality and less constraint violation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 313,226 |
2412.14058 | Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models | Foundation Vision Language Models (VLMs) exhibit strong capabilities in multi-modal representation learning, comprehension, and reasoning. By injecting action components into the VLMs, Vision-Language-Action Models (VLAs) can be naturally formed and also show promising performance. Existing work has demonstrated the effectiveness and generalization of VLAs in multiple scenarios and tasks. Nevertheless, the transfer from VLMs to VLAs is not trivial since existing VLAs differ in their backbones, action-prediction formulations, data distributions, and training recipes. This leads to a missing piece for a systematic understanding of the design choices of VLAs. In this work, we disclose the key factors that significantly influence the performance of VLA and focus on answering three essential design choices: which backbone to select, how to formulate the VLA architectures, and when to add cross-embodiment data. The obtained results convince us firmly to explain why we need VLA and develop a new family of VLAs, RoboVLMs, which require very few manual designs and achieve a new state-of-the-art performance in three simulation tasks and real-world experiments. Through our extensive experiments, which include over 8 VLM backbones, 4 policy architectures, and over 600 distinct designed experiments, we provide a detailed guidebook for the future design of VLAs. In addition to the study, the highly flexible RoboVLMs framework, which supports easy integrations of new VLMs and free combinations of various design choices, is made public to facilitate future research. We open-source all details, including codes, models, datasets, and toolkits, along with detailed training and evaluation recipes at: robovlms.github.io. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 518,558 |
1401.0660 | Plurals: individuals and sets in a richly typed semantics | We developed a type-theoretical framework for natural language semantics that, in addition to the usual Montagovian treatment of compositional semantics, includes a treatment of some phenomena of lexical semantics: coercions, meaning transfers, (in)felicitous co-predication. In this setting we see how the various readings of plurals (collective, distributive, coverings, ...) can be modelled. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 29,576 |
2404.09683 | Post-Training Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition | We address the computational barrier of deploying advanced deep learning segmentation models in clinical settings by studying the efficacy of network compression through tensor decomposition. We propose a post-training Tucker factorization that enables the decomposition of pre-existing models to reduce computational requirements without impeding segmentation accuracy. We applied Tucker decomposition to the convolutional kernels of the TotalSegmentator (TS) model, an nnU-Net model trained on a comprehensive dataset for automatic segmentation of 117 anatomical structures. Our approach reduced the floating-point operations (FLOPs) and memory required during inference, offering an adjustable trade-off between computational efficiency and segmentation quality. This study utilized the publicly available TS dataset, employing various downsampling factors to explore the relationship between model size, inference speed, and segmentation performance. The application of Tucker decomposition to the TS model substantially reduced the model parameters and FLOPs across various compression rates, with limited loss in segmentation accuracy. We removed up to 88% of the model's parameters with no significant performance changes in the majority of classes after fine-tuning. Practical benefits varied across different graphics processing unit (GPU) architectures, with more distinct speed-ups on less powerful hardware. Post-hoc network compression via Tucker decomposition presents a viable strategy for reducing the computational demand of medical image segmentation models without substantially sacrificing accuracy. This approach enables the broader adoption of advanced deep learning technologies in clinical practice, offering a way to navigate the constraints of hardware capabilities. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,789 |
2303.10597 | Partial Network Cloning | In this paper, we study a novel task that enables partial knowledge transfer from pre-trained models, which we term as Partial Network Cloning (PNC). Unlike prior methods that update all or at least part of the parameters in the target network throughout the knowledge transfer process, PNC conducts partial parametric "cloning" from a source network and then injects the cloned module to the target, without modifying its parameters. Thanks to the transferred module, the target network is expected to gain additional functionality, such as inference on new classes; whenever needed, the cloned module can be readily removed from the target, with its original parameters and competence kept intact. Specifically, we introduce an innovative learning scheme that allows us to identify simultaneously the component to be cloned from the source and the position to be inserted within the target network, so as to ensure the optimal performance. Experimental results on several datasets demonstrate that, our method yields a significant improvement of 5% in accuracy and 50% in locality when compared with parameter-tuning based methods. Our code is available at https://github.com/JngwenYe/PNCloning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,513 |
2204.02392 | Deep Interactive Motion Prediction and Planning: Playing Games with
Motion Prediction Models | In most classical Autonomous Vehicle (AV) stacks, the prediction and planning layers are separated, limiting the planner to react to predictions that are not informed by the planned trajectory of the AV. This work presents a module that tightly couples these layers via a game-theoretic Model Predictive Controller (MPC) that uses a novel interactive multi-agent neural network policy as part of its predictive model. In our setting, the MPC planner considers all the surrounding agents by informing the multi-agent policy with the planned state sequence. Fundamental to the success of our method is the design of a novel multi-agent policy network that can steer a vehicle given the state of the surrounding agents and the map information. The policy network is trained implicitly with ground-truth observation data using backpropagation through time and a differentiable dynamics model to roll out the trajectory forward in time. Finally, we show that our multi-agent policy network learns to drive while interacting with the environment, and, when combined with the game-theoretic MPC planner, can successfully generate interactive behaviors. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | 289,927 |
2209.15320 | Bounded Robustness in Reinforcement Learning via Lexicographic
Objectives | Policy robustness in Reinforcement Learning may not be desirable at any cost: the alterations caused by robustness requirements from otherwise optimal policies should be explainable, quantifiable and formally verifiable. In this work we study how policies can be maximally robust to arbitrary observational noise by analysing how they are altered by this noise through a stochastic linear operator interpretation of the disturbances, and establish connections between robustness and properties of the noise kernel and of the underlying MDPs. Then, we construct sufficient conditions for policy robustness, and propose a robustness-inducing scheme, applicable to any policy gradient algorithm, that formally trades off expected policy utility for robustness through lexicographic optimisation, while preserving convergence and sub-optimality in the policy synthesis. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 320,568 |
2407.17078 | Active Loop Closure for OSM-guided Robotic Mapping in Large-Scale Urban
Environments | The autonomous mapping of large-scale urban scenes presents significant challenges for autonomous robots. To mitigate the challenges, global planning, such as utilizing prior GPS trajectories from OpenStreetMap (OSM), is often used to guide the autonomous navigation of robots for mapping. However, due to factors like complex terrain, unexpected body movement, and sensor noise, the uncertainty of the robot's pose estimates inevitably increases over time, ultimately leading to the failure of robotic mapping. To address this issue, we propose a novel active loop closure procedure, enabling the robot to actively re-plan the previously planned GPS trajectory. The method can guide the robot to re-visit the previous places where the loop-closure detection can be performed to trigger the back-end optimization, effectively reducing errors and uncertainties in pose estimation. The proposed active loop closure mechanism is implemented and embedded into a real-time OSM-guided robot mapping framework. Empirical results on several large-scale outdoor scenarios demonstrate its effectiveness and promising performance. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 475,842 |
1801.06104 | Invariants of multidimensional time series based on their
iterated-integral signature | We introduce a novel class of features for multidimensional time series, that are invariant with respect to transformations of the ambient space. The general linear group, the group of rotations and the group of permutations of the axes are considered. The starting point for their construction is Chen's iterated-integral signature. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 88,557 |
1909.00228 | Connecting the Dots: Document-level Neural Relation Extraction with
Edge-oriented Graphs | Document-level relation extraction is a complex human process that requires logical inference to extract relationships between named entities in text. Existing approaches use graph-based neural models with words as nodes and edges as relations between them, to encode relations across sentences. These models are node-based, i.e., they form pair representations based solely on the two target node representations. However, entity relations can be better expressed through unique edge representations formed as paths between nodes. We thus propose an edge-oriented graph neural model for document-level relation extraction. The model utilises different types of nodes and edges to create a document-level graph. An inference mechanism on the graph edges enables learning intra- and inter-sentence relations using multi-instance learning internally. Experiments on two document-level biomedical datasets for chemical-disease and gene-disease associations show the usefulness of the proposed edge-oriented approach. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 143,571 |
2112.04629 | Transferability Properties of Graph Neural Networks | Graph neural networks (GNNs) are composed of layers consisting of graph convolutions and pointwise nonlinearities. Due to their invariance and stability properties, GNNs are provably successful at learning representations from data supported on moderate-scale graphs. However, they are difficult to learn on large-scale graphs. In this paper, we study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs. We use graph limits called graphons to define limit objects for graph filters and GNNs -- graphon filters and graphon neural networks (WNNs) -- which we interpret as generative models for graph filters and GNNs. We then show that graphon filters and WNNs can be approximated by graph filters and GNNs sampled from them on weighted and stochastic graphs. Because the error of these approximations can be upper bounded, by a triangle inequality argument we can further bound the error of transferring a graph filter or a GNN across graphs. Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity. These findings are demonstrated empirically in a movie recommendation problem and in a decentralized control task. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,587 |
2402.17296 | Learning Exposure Correction in Dynamic Scenes | Exposure correction aims to enhance visual data suffering from improper exposures, yielding more satisfactory visual effects. However, previous methods mainly focus on the image modality, and the video counterpart is less explored in the literature. Directly applying prior image-based methods to videos results in temporal incoherence with low visual quality. Through thorough investigation, we find that the development of relevant communities is limited by the absence of a benchmark dataset. Therefore, in this paper, we construct the first real-world paired video dataset, including both underexposure and overexposure dynamic scenes. To achieve spatial alignment, we utilize two DSLR cameras and a beam splitter to simultaneously capture improper and normal exposure videos. Additionally, we propose an end-to-end video exposure correction network, in which a dual-stream module is designed to deal with both underexposure and overexposure factors, enhancing the illumination based on Retinex theory. The extensive experiments based on various metrics and user studies demonstrate the significance of our dataset and the effectiveness of our method. The code and dataset are available at https://github.com/kravrolens/VECNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 432,931 |
2305.15912 | Neural Characteristic Activation Analysis and Geometric Parameterization
for ReLU Networks | We introduce a novel approach for analyzing the training dynamics of ReLU networks by examining the characteristic activation boundaries of individual ReLU neurons. Our proposed analysis reveals a critical instability in common neural network parameterizations and normalizations during stochastic optimization, which impedes fast convergence and hurts generalization performance. Addressing this, we propose Geometric Parameterization (GmP), a novel neural network parameterization technique that effectively separates the radial and angular components of weights in the hyperspherical coordinate system. We show theoretically that GmP resolves the aforementioned instability issue. We report empirical results on various models and benchmarks to verify GmP's advantages of optimization stability, convergence speed and generalization performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 367,842 |
2412.07979 | AmCLR: Unified Augmented Learning for Cross-Modal Representations | Contrastive learning has emerged as a pivotal framework for representation learning, underpinning advances in both unimodal and bimodal applications like SimCLR and CLIP. To address fundamental limitations like large batch size dependency and bimodality, methods such as SogCLR leverage stochastic optimization for the global contrastive objective. Inspired by SogCLR's efficiency and adaptability, we introduce AmCLR and xAmCLR objective functions tailored for bimodal vision-language models to further enhance the robustness of contrastive learning. AmCLR integrates diverse augmentations, including text paraphrasing and image transformations, to reinforce the alignment of contrastive representations, keeping the batch size limited to a few hundred samples, unlike CLIP, which needs a batch size of 32,768 to produce reasonable results. xAmCLR further extends this paradigm by incorporating intra-modal alignments between original and augmented modalities for richer feature learning. These advancements yield a more resilient and generalizable contrastive learning process, aimed at overcoming bottlenecks in scaling and augmentative diversity. Since we have built our framework on the existing SogCLR, we are able to demonstrate improved representation quality with fewer computational resources, establishing a foundation for scalable and robust multi-modal learning. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 515,884 |
1812.11027 | Exploring Weight Symmetry in Deep Neural Networks | We propose to impose symmetry in neural network parameters to improve parameter usage and make use of dedicated convolution and matrix multiplication routines. Due to the significant reduction in the number of parameters as a result of the symmetry constraints, one would expect a dramatic drop in accuracy. Surprisingly, we show that this is not the case, and, depending on network size, symmetry can have little or no negative effect on network accuracy, especially in deep overparameterized networks. We propose several ways to impose local symmetry in recurrent and convolutional neural networks, and show that our symmetry parameterizations satisfy the universal approximation property for single hidden layer networks. We extensively evaluate these parameterizations on CIFAR, ImageNet and language modeling datasets, showing significant benefits from the use of symmetry. For instance, our ResNet-101 with channel-wise symmetry has almost 25% fewer parameters and only 0.2% accuracy loss on ImageNet. Code for our experiments is available at https://github.com/hushell/deep-symmetry | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 117,485 |
1503.02086 | Gender-Based Violence in 140 Characters or Fewer: A #BigData Case Study
of Twitter | Public institutions are increasingly reliant on data from social media sites to measure public attitude and provide timely public engagement. Such reliance includes the exploration of public views on important social issues such as gender-based violence (GBV). In this study, we examine big (social) data consisting of nearly fourteen million tweets collected from Twitter over a period of ten months to analyze public opinion regarding GBV, highlighting the nature of tweeting practices by geographical location and gender. We demonstrate the utility of Computational Social Science to mine insight from the corpus while accounting for the influence of both transient events and sociocultural factors. We reveal public awareness regarding GBV tolerance and suggest opportunities for intervention and the measurement of intervention effectiveness assisting both governmental and non-governmental organizations in policy development. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 40,892 |
2206.09576 | FedSSO: A Federated Server-Side Second-Order Optimization Algorithm | In this work, we propose FedSSO, a server-side second-order optimization method for federated learning (FL). In contrast to previous works in this direction, we employ a server-side approximation for the Quasi-Newton method without requiring any training data from the clients. In this way, we not only shift the computation burden from clients to the server, but also eliminate the additional communication for second-order updates between clients and server entirely. We provide a theoretical guarantee for the convergence of our novel method, and empirically demonstrate our fast convergence and communication savings in both convex and non-convex settings. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,624 |
1607.06429 | A Fine-grained Indoor Location-based Social Network | Existing Location-based social networks (LBSNs), e.g., Foursquare, depend mainly on GPS or cellular-based localization to infer users' locations. However, GPS is unavailable indoors and cellular-based localization provides coarse-grained accuracy. This limits the accuracy of current LBSNs in indoor environments, where people spend 89% of their time. This in turn affects the user experience, in terms of the accuracy of the ranked list of venues, especially for the small screens of mobile devices; misses business opportunities; and leads to reduced venue coverage. In this paper, we present CheckInside: a system that can provide a fine-grained indoor location-based social network. CheckInside leverages the crowd-sensed data collected from users' mobile devices during the check-in operation and knowledge extracted from current LBSNs to associate a place with a logical name and a semantic fingerprint. This semantic fingerprint is used to obtain a more accurate list of nearby places as well as to automatically detect new places with a similar signature. A novel algorithm for detecting fake check-ins and inferring a semantically-enriched floorplan is proposed as well as an algorithm for enhancing the system performance based on the user's implicit feedback. Furthermore, CheckInside encompasses a coverage extender module to automatically predict names of new venues, increasing the coverage of current LBSNs. Experimental evaluation of CheckInside in four malls over the course of six weeks with 20 participants shows that it can infer the actual user place within the top five venues 99% of the time. This is compared to 17% only in the case of current LBSNs. In addition, it increases the coverage of existing LBSNs by more than 37%. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 58,895 |
2305.19338 | The union-closed sets conjecture for non-uniform distributions | The union-closed sets conjecture, attributed to Péter Frankl from 1979, states that for any non-empty finite union-closed family of finite sets not consisting of only the empty set, there is an element that is in at least half of the sets in the family. We prove a version of Frankl's conjecture for families distributed according to any one of infinitely many distributions. As a corollary, in the intersection-closed reformulation of Frankl's conjecture, we obtain that it is true for families distributed according to any one of infinitely many Maxwell-Boltzmann distributions with inverse temperatures bounded below by a positive universal constant. Frankl's original conjecture corresponds to zero inverse temperature. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 369,477 |
1808.03465 | TwoWingOS: A Two-Wing Optimization Strategy for Evidential Claim
Verification | Determining whether a given claim is supported by evidence is a fundamental NLP problem that is best modeled as Textual Entailment. However, given a large collection of text, finding evidence that could support or refute a given claim is a challenge in itself, amplified by the fact that different evidence might be needed to support or refute a claim. Nevertheless, most prior work decouples evidence identification from determining the truth value of the claim given the evidence. We propose to consider these two aspects jointly. We develop TwoWingOS (two-wing optimization strategy), a system that, while identifying appropriate evidence for a claim, also determines whether or not the claim is supported by the evidence. Given the claim, TwoWingOS attempts to identify a subset of the evidence candidates; given the predicted evidence, it then attempts to determine the truth value of the corresponding claim. We treat this challenge as coupled optimization problems, training a joint model for it. TwoWingOS offers two advantages: (i) Unlike pipeline systems, it facilitates flexible-size evidence set, and (ii) Joint training improves both the claim entailment and the evidence identification. Experiments on a benchmark dataset show state-of-the-art performance. Code: https://github.com/yinwenpeng/FEVER | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 104,942 |
1911.10666 | Who did They Respond to? Conversation Structure Modeling using Masked
Hierarchical Transformer | Conversation structure is useful for both understanding the nature of conversation dynamics and for providing features for many downstream applications such as summarization of conversations. In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds. Previous work usually took a pair of utterances to decide whether one utterance is the parent of the other. We believe the entire ancestral history is a very important information source to make accurate predictions. Therefore, we design a novel masking mechanism to guide the ancestor flow, and leverage the transformer model to aggregate all ancestors to predict parent utterances. Our experiments are performed on the Reddit dataset (Zhang, Culbertson, and Paritosh 2017) and the Ubuntu IRC dataset (Kummerfeld et al. 2019). In addition, we also report experiments on a new larger corpus from the Reddit platform and release this dataset. We show that the proposed model, which takes into account the ancestral history of the conversation, significantly outperforms several strong baselines including the BERT model on all datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 154,900 |
2010.15980 | AutoPrompt: Eliciting Knowledge from Language Models with Automatically
Generated Prompts | The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AutoPrompt, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AutoPrompt, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 203,922 |
2208.09590 | Data-Driven Causal Effect Estimation Based on Graphical Causal
Modelling: A Survey | In many fields of scientific research and real-world applications, unbiased estimation of causal effects from non-experimental data is crucial for understanding the mechanism underlying the data and for decision-making on effective responses or interventions. A great deal of research has been conducted to address this challenging problem from different angles. For estimating causal effects in observational data, assumptions such as Markov condition, faithfulness and causal sufficiency are always made. Under the assumptions, full knowledge, such as a set of covariates or an underlying causal graph, is typically required. A practical challenge is that in many applications, no such full knowledge or only some partial knowledge is available. In recent years, research has emerged to use search strategies based on graphical causal modelling to discover useful knowledge from data for causal effect estimation, with some mild assumptions, and has shown promise in tackling the practical challenge. In this survey, we review these data-driven methods for causal effect estimation for a single treatment with a single outcome of interest and focus on the challenges faced by data-driven causal effect estimation. We concisely summarise the basic concepts and theories that are essential for data-driven causal effect estimation using graphical causal modelling but are scattered around the literature. We identify and discuss the challenges faced by data-driven causal effect estimation and characterise the existing methods by their assumptions and the approaches to tackling the challenges. We analyse the strengths and limitations of the different types of methods and present an empirical evaluation to support the discussions. We hope this review will motivate more researchers to design better data-driven methods based on graphical causal modelling for the challenging problem of causal effect estimation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 313,749 |
2312.10911 | The Pros and Cons of Adversarial Robustness | Robustness is widely regarded as a fundamental problem in the analysis of machine learning (ML) models. Most often robustness equates with deciding the non-existence of adversarial examples, where adversarial examples denote situations where small changes on some inputs cause a change in the prediction. The perceived importance of ML model robustness explains the continued progress observed for most of the last decade. Whereas robustness is often assessed locally, i.e. given some target point in feature space, robustness can also be defined globally, i.e. where any point in feature space can be considered. The importance of ML model robustness is illustrated for example by the existence of competitions evaluating the progress of robustness tools, namely in the case of neural networks (NNs) but also by efforts towards robustness certification. More recently, robustness tools have also been used for computing rigorous explanations of ML models. In contrast with the observed successes of robustness, this paper uncovers some limitations with existing definitions of robustness, both global and local, but also with efforts towards robustness certification. The paper also investigates uses of adversarial examples besides those related with robustness. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 416,361 |
2208.03097 | An ASP Framework for Efficient Urban Traffic Optimization | Avoiding congestion and controlling traffic in urban scenarios is nowadays of paramount importance due to the rapid growth of our cities' population and vehicles. The effective control of urban traffic as a means to mitigate congestion can bring economic, environmental, and health benefits. In this paper, a framework that allows efficient simulation and optimization of traffic flow in a large road network with hundreds of vehicles is presented. The framework leverages an Answer Set Programming (ASP) encoding to formally describe the movements of vehicles inside a network. Taking advantage of the ability to specify optimization constraints in ASP and the off-the-shelf solver Clingo, it is then possible to optimize the routes of vehicles inside the network to reduce a range of relevant metrics (e.g., travel times or emissions). Finally, an analysis on real-world traffic data is performed, utilizing the state-of-the-art Urban Mobility Simulator (SUMO) to keep track of the state of the network, test the correctness of the solution, and prove the efficiency and capabilities of the presented solution. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 311,681 |
1608.00474 | Comparison of Geometric and Probabilistic Shaping with Application to
ATSC 3.0 | In this work, geometric shaping (GS) and probabilistic shaping (PS) for the AWGN channel is reviewed. Both approaches are investigated in terms of symbol-metric decoding (SMD) and bit-metric decoding (BMD). For GS, an optimization algorithm based on differential evolution is formulated. Achievable rate analysis reveals that GS suffers from a 0.4 dB performance degradation compared to PS when BMD is used. Forward-error correction simulations of the ATSC 3.0 modulation and coding formats (modcods) confirm the theoretical findings. In particular, PS enables seamless rate adaptation with one single modcod and it outperforms ATSC 3.0 GS modcods by more than 0.5 dB for spectral efficiencies larger than 3.2 bits per channel use. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,286 |
2407.05575 | Towards Reflected Object Detection: A Benchmark | Object detection has greatly improved over the past decade thanks to advances in deep learning and large-scale datasets. However, detecting objects reflected in surfaces remains an underexplored area. Reflective surfaces are ubiquitous in daily life, appearing in homes, offices, public spaces, and natural environments. Accurate detection and interpretation of reflected objects are essential for various applications. This paper addresses this gap by introducing an extensive benchmark specifically designed for Reflected Object Detection. Our Reflected Object Detection Dataset (RODD) features a diverse collection of images showcasing reflected objects in various contexts, providing standard annotations for both real and reflected objects. This distinguishes it from traditional object detection benchmarks. RODD encompasses 10 categories and includes 21,059 images of real and reflected objects across different backgrounds, complete with standard bounding box annotations and the classification of objects as real or reflected. Additionally, we present baseline results by adapting five state-of-the-art object detection models to address this challenging task. Experimental results underscore the limitations of existing methods when applied to reflected object detection, highlighting the need for specialized approaches. By releasing RODD, we aim to support and advance future research on detecting reflected objects. Dataset and code are available at: https://github.com/Tqybu-hans/RODD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,034 |
2305.14222 | Abstraction of Nondeterministic Situation Calculus Action Theories --
Extended Version | We develop a general framework for abstracting the behavior of an agent that operates in a nondeterministic domain, i.e., where the agent does not control the outcome of the nondeterministic actions, based on the nondeterministic situation calculus and the ConGolog programming language. We assume that we have both an abstract and a concrete nondeterministic basic action theory, and a refinement mapping which specifies how abstract actions, decomposed into agent actions and environment reactions, are implemented by concrete ConGolog programs. This new setting supports strategic reasoning and strategy synthesis, by allowing us to quantify separately on agent actions and environment reactions. We show that if the agent has a (strong FOND) plan/strategy to achieve a goal/complete a task at the abstract level, and it can always execute the nondeterministic abstract actions to completion at the concrete level, then there exists a refinement of it that is a (strong FOND) plan/strategy to achieve the refinement of the goal/task at the concrete level. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 366,913 |
1911.08896 | Shift Convolution Network for Stereo Matching | In this paper, we present Shift Convolution Network (ShiftConvNet) to provide matching capability between two feature maps for stereo estimation. The proposed method can speedily produce a highly accurate disparity map from stereo images. A module called the shift convolution layer is proposed to replace the traditional correlation layer to perform patch comparisons between two feature maps. By using a novel convolutional network architecture to learn the matching process, ShiftConvNet can produce better results than DispNet-C[1], while also running faster, at 5 fps. Moreover, with the proposed auto-shift convolution refinement module, further improvement is obtained. The proposed approach was evaluated on FlyingThings 3D. It achieves state-of-the-art results on the benchmark dataset. Code will be made available on GitHub. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 154,344 |
2408.09527 | ALS-HAR: Harnessing Wearable Ambient Light Sensors to Enhance IMU-based
Human Activity Recognition | Despite the widespread integration of ambient light sensors (ALS) in smart devices commonly used for screen brightness adaptation, their application in human activity recognition (HAR), primarily through body-worn ALS, is largely unexplored. In this work, we developed ALS-HAR, a robust wearable light-based motion activity classifier. Although ALS-HAR achieves comparable accuracy to other modalities, its natural sensitivity to external disturbances, such as changes in ambient light, weather conditions, or indoor lighting, makes it challenging for daily use. To address such drawbacks, we introduce strategies to enhance environment-invariant IMU-based activity classifications through augmented multi-modal and contrastive classifications by transferring the knowledge extracted from the ALS. Our experiments on a real-world activity dataset for three different scenarios demonstrate that while ALS-HAR's accuracy strongly relies on external lighting conditions, cross-modal information can still improve other HAR systems, such as IMU-based classifiers. Even in scenarios where ALS performs insufficiently, the additional knowledge enables improved accuracy and macro F1 score by up to 4.2% and 6.4%, respectively, for IMU-based classifiers and even surpasses multi-modal sensor fusion models in two of our three experiment scenarios. Our research highlights the untapped potential of ALS integration in advancing sensor-based HAR technology, paving the way for practical and efficient wearable ALS-based activity recognition systems with potential applications in healthcare, sports monitoring, and smart indoor environments. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 481,477
2402.09990 | TIAViz: A Browser-based Visualization Tool for Computational Pathology
Models | Digital pathology has gained significant traction in modern healthcare systems. This shift from optical microscopes to digital imagery brings with it the potential for improved diagnosis, efficiency, and the integration of AI tools into the pathologist's workflow. A critical aspect of this is visualization. Throughout the development of a machine learning (ML) model in digital pathology, it is crucial to have flexible, openly available tools to visualize models, from their outputs and predictions to the underlying annotations and images used to train or test a model. We introduce TIAViz, a Python-based visualization tool built into TIAToolbox which allows flexible, interactive, fully zoomable overlay of a wide variety of information onto whole slide images, including graphs, heatmaps, segmentations, annotations and other WSIs. The UI is browser-based, allowing use either locally, on a remote machine, or on a server to provide publicly available demos. This tool is open source and is made available at: https://github.com/TissueImageAnalytics/tiatoolbox and via pip installation (pip install tiatoolbox) and conda as part of TIAToolbox. | true | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 429,760
2401.16164 | Constrained Bi-Level Optimization: Proximal Lagrangian Value function
Approach and Hessian-free Algorithm | This paper presents a new approach and algorithm for solving a class of constrained Bi-Level Optimization (BLO) problems in which the lower-level problem involves constraints coupling both upper-level and lower-level variables. Such problems have recently gained significant attention due to their broad applicability in machine learning. However, conventional gradient-based methods unavoidably rely on computationally intensive calculations related to the Hessian matrix. To address this challenge, we begin by devising a smooth proximal Lagrangian value function to handle the constrained lower-level problem. Utilizing this construct, we introduce a single-level reformulation for constrained BLOs that transforms the original BLO problem into an equivalent optimization problem with smooth constraints. Enabled by this reformulation, we develop a Hessian-free gradient-based algorithm-termed proximal Lagrangian Value function-based Hessian-free Bi-level Algorithm (LV-HBA)-that is straightforward to implement in a single loop manner. Consequently, LV-HBA is especially well-suited for machine learning applications. Furthermore, we offer non-asymptotic convergence analysis for LV-HBA, eliminating the need for traditional strong convexity assumptions for the lower-level problem while also being capable of accommodating non-singleton scenarios. Empirical results substantiate the algorithm's superior practical performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,720 |
2501.01263 | Stealthy Backdoor Attack to Real-world Models in Android Apps | Powered by their superior performance, deep neural networks (DNNs) have found widespread applications across various domains. Many deep learning (DL) models are now embedded in mobile apps, making them more accessible to end users through on-device DL. However, deploying on-device DL to users' smartphones simultaneously introduces several security threats. One primary threat is backdoor attacks. Extensive research has explored backdoor attacks for several years and has proposed numerous attack approaches. However, few studies have investigated backdoor attacks on DL models deployed in the real world, or they have shown obvious deficiencies in effectiveness and stealthiness. In this work, we explore more effective and stealthy backdoor attacks on real-world DL models extracted from mobile apps. Our main justification is that imperceptible and sample-specific backdoor triggers generated by DNN-based steganography can enhance the efficacy of backdoor attacks on real-world models. We first confirm the effectiveness of steganography-based backdoor attacks on four state-of-the-art DNN models. Subsequently, we systematically evaluate and analyze the stealthiness of the attacks to ensure they are difficult to perceive. Finally, we implement the backdoor attacks on real-world models and compare our approach with three baseline methods. We collect 38,387 mobile apps, extract 89 DL models from them, and analyze these models to obtain the prerequisite model information for the attacks. After identifying the target models, our approach achieves an average of 12.50% higher attack success rate than DeepPayload while better maintaining the normal performance of the models. Extensive experimental results demonstrate that our method enables more effective, robust, and stealthy backdoor attacks on real-world models. 
| false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 521,997 |
2102.05314 | Forecasting Nonnegative Time Series via Sliding Mask Method (SMM) and
Latent Clustered Forecast (LCF) | We consider a nonnegative time series forecasting framework. Based on recent advances in Nonnegative Matrix Factorization (NMF) and Archetypal Analysis, we introduce two procedures referred to as Sliding Mask Method (SMM) and Latent Clustered Forecast (LCF). SMM is a simple and powerful method based on time window prediction using Completion of Nonnegative Matrices. This new procedure combines low nonnegative rank decomposition and matrix completion where the hidden values are to be forecasted. LCF is two-stage: it leverages archetypal analysis for dimension reduction and clustering of time series, then it uses any black-box supervised forecast solver on the clustered latent representation. Theoretical guarantees on uniqueness and robustness of the solution of NMF Completion-type problems are also provided for the first time. Finally, numerical experiments on real-world and synthetic data sets confirm the forecasting accuracy of both methodologies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,402
2307.14825 | Simplified Concrete Dropout -- Improving the Generation of Attribution
Masks for Fine-grained Classification | Fine-grained classification is a particular case of a classification problem, aiming to classify objects that share a similar visual appearance and can only be distinguished by subtle differences. Fine-grained classification models are often deployed to determine animal species or individuals in automated animal monitoring systems. Precise visual explanations of the model's decision are crucial to analyze systematic errors. Attention- or gradient-based methods are commonly used to identify regions in the image that contribute the most to the classification decision. These methods deliver either too coarse or too noisy explanations, unsuitable for identifying subtle visual differences reliably. However, perturbation-based methods can precisely identify pixels causally responsible for the classification result. Fill-in of the dropout (FIDO) algorithm is one of those methods. It utilizes the concrete dropout (CD) to sample a set of attribution masks and updates the sampling parameters based on the output of the classification model. A known problem of the algorithm is a high variance in the gradient estimates, which the authors have mitigated until now by mini-batch updates of the sampling parameters. This paper presents a solution to circumvent these computational instabilities by simplifying the CD sampling and reducing reliance on large mini-batch sizes. First, it allows estimating the parameters with smaller mini-batch sizes without losing the quality of the estimates but with a reduced computational effort. Furthermore, our solution produces finer and more coherent attribution masks. Finally, we use the resulting attribution masks to improve the classification performance of a trained model without additional fine-tuning of the model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,062
2207.00068 | Sparse Periodic Systolic Dataflow for Lowering Latency and Power
Dissipation of Convolutional Neural Network Accelerators | This paper introduces the sparse periodic systolic (SPS) dataflow, which advances the state-of-the-art hardware accelerator for supporting lightweight neural networks. Specifically, the SPS dataflow enables a novel hardware design approach unlocked by an emergent pruning scheme, periodic pattern-based sparsity (PPS). By exploiting the regularity of PPS, our sparsity-aware compiler optimally reorders the weights and uses a simple indexing unit in hardware to create matches between the weights and activations. Through the compiler-hardware codesign, SPS dataflow enjoys higher degrees of parallelism while being free of the high indexing overhead and without model accuracy loss. Evaluated on popular benchmarks such as VGG and ResNet, the SPS dataflow and accompanying neural network compiler outperform prior work in convolutional neural network (CNN) accelerator designs targeting FPGA devices. Against other sparsity-supporting weight storage formats, SPS results in 4.49x energy efficiency gain while lowering storage requirements by 3.67x for total weight storage (non-pruned weights plus indexing) and 22,044x for indexing memory. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 305,625 |
2302.13046 | In Search of Deep Learning Architectures for Load Forecasting: A
Comparative Analysis and the Impact of the Covid-19 Pandemic on Model
Performance | In power grids, short-term load forecasting (STLF) is crucial as it contributes to the optimization of their reliability, emissions, and costs, while it enables the participation of energy companies in the energy market. STLF is a challenging task, due to the complex demand of active and reactive power from multiple types of electrical loads and their dependence on numerous exogenous variables. Amongst them, special circumstances, such as the COVID-19 pandemic, can often be the reason behind distribution shifts of load series. This work conducts a comparative study of Deep Learning (DL) architectures, namely Neural Basis Expansion Analysis Time Series Forecasting (N-BEATS), Long Short-Term Memory (LSTM), and Temporal Convolutional Networks (TCN), with respect to forecasting accuracy and training sustainability, meanwhile examining their out-of-distribution generalization capabilities during the COVID-19 pandemic era. A Pattern Sequence Forecasting (PSF) model is used as baseline. The case study focuses on day-ahead forecasts for the Portuguese national 15-minute resolution net load time series. The results can be leveraged by energy companies and network operators (i) to reinforce their forecasting toolkit with state-of-the-art DL models; (ii) to become aware of the serious consequences of crisis events on model performance; (iii) as a high-level model evaluation, deployment, and sustainability guide within a smart grid context. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,790 |
2404.19330 | G2LTraj: A Global-to-Local Generation Approach for Trajectory Prediction | Predicting future trajectories of traffic agents accurately holds substantial importance in various applications such as autonomous driving. Previous methods commonly infer all future steps of an agent either recursively or simultaneously. However, the recursive strategy suffers from the accumulated error, while the simultaneous strategy overlooks the constraints among future steps, resulting in kinematically infeasible predictions. To address these issues, in this paper, we propose G2LTraj, a plug-and-play global-to-local generation approach for trajectory prediction. Specifically, we generate a series of global key steps that uniformly cover the entire future time range. Subsequently, the local intermediate steps between the adjacent key steps are recursively filled in. In this way, we prevent the accumulated error from propagating beyond the adjacent key steps. Moreover, to boost the kinematical feasibility, we not only introduce the spatial constraints among key steps but also strengthen the temporal constraints among the intermediate steps. Finally, to ensure the optimal granularity of key steps, we design a selectable granularity strategy that caters to each predicted trajectory. Our G2LTraj significantly improves the performance of seven existing trajectory predictors across the ETH, UCY and nuScenes datasets. Experimental results demonstrate its effectiveness. Code will be available at https://github.com/Zhanwei-Z/G2LTraj. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 450,602 |
2305.09506 | Fuzzy Temporal Protoforms for the Quantitative Description of Processes
in Natural Language | In this paper, we propose a series of fuzzy temporal protoforms in the framework of the automatic generation of quantitative and qualitative natural language descriptions of processes. The model includes temporal and causal information from processes and attributes, quantifies attributes in time during the process life-span and recalls causal relations and temporal distances between events, among other features. Through integrating process mining techniques and fuzzy sets within the usual Data-to-Text architecture, our framework is able to extract relevant quantitative temporal as well as structural information from a process and describe it in natural language involving uncertain terms. A real use-case in the cardiology domain is presented, showing the potential of our model for providing natural language explanations addressed to domain experts. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 364,658 |
2303.12368 | MAIR: Multi-view Attention Inverse Rendering with 3D Spatially-Varying
Lighting Estimation | We propose a scene-level inverse rendering framework that uses multi-view images to decompose the scene into geometry, an SVBRDF, and 3D spatially-varying lighting. Because multi-view images provide a variety of information about the scene, the use of multi-view images in object-level inverse rendering has been taken for granted. However, owing to the absence of a multi-view HDR synthetic dataset, scene-level inverse rendering has mainly been studied using single-view images. We were able to successfully perform scene-level inverse rendering using multi-view images by expanding the OpenRooms dataset and designing efficient pipelines to handle multi-view images, and splitting spatially-varying lighting. Our experiments show that the proposed method not only achieves better performance than single-view-based methods, but also achieves robust performance on unseen real-world scenes. Also, our sophisticated 3D spatially-varying lighting volume allows for photorealistic object insertion in any 3D location. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 353,242
2008.08157 | Heteroscedastic Uncertainty for Robust Generative Latent Dynamics | Learning or identifying dynamics from a sequence of high-dimensional observations is a difficult challenge in many domains, including reinforcement learning and control. The problem has recently been studied from a generative perspective through latent dynamics: high-dimensional observations are embedded into a lower-dimensional space in which the dynamics can be learned. Despite some successes, latent dynamics models have not yet been applied to real-world robotic systems where learned representations must be robust to a variety of perceptual confounds and noise sources not seen during training. In this paper, we present a method to jointly learn a latent state representation and the associated dynamics that is amenable for long-term planning and closed-loop control under perceptually difficult conditions. As our main contribution, we describe how our representation is able to capture a notion of heteroscedastic or input-specific uncertainty at test time by detecting novel or out-of-distribution (OOD) inputs. We present results from prediction and control experiments on two image-based tasks: a simulated pendulum balancing task and a real-world robotic manipulator reaching task. We demonstrate that our model produces significantly more accurate predictions and exhibits improved control performance, compared to a model that assumes homoscedastic uncertainty only, in the presence of varying degrees of input degradation. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 192,328 |
2212.13401 | A Novel Dataset and a Deep Learning Method for Mitosis Nuclei
Segmentation and Classification | Mitosis nuclei count is one of the important indicators for the pathological diagnosis of breast cancer. The manual annotation needs experienced pathologists, which is very time-consuming and inefficient. With the development of deep learning methods, some models with good performance have emerged, but the generalization ability should be further strengthened. In this paper, we propose a two-stage mitosis segmentation and classification method, named SCMitosis. Firstly, the segmentation performance with a high recall rate is achieved by the proposed depthwise separable convolution residual block and channel-spatial attention gate. Then, a classification network is cascaded to further improve the detection performance of mitosis nuclei. The proposed model is verified on the ICPR 2012 dataset, and the highest F-score value of 0.8687 is obtained compared with the current state-of-the-art algorithms. In addition, the model also achieves good performance on the GZMH dataset, which was prepared by our group and will be released for the first time with the publication of this paper. The code will be available at: https://github.com/antifen/mitosis-nuclei-segmentation. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 338,293
2404.02514 | Freditor: High-Fidelity and Transferable NeRF Editing by Frequency
Decomposition | This paper enables high-fidelity, transferable NeRF editing by frequency decomposition. Recent NeRF editing pipelines lift 2D stylization results to 3D scenes while suffering from blurry results, and fail to capture detailed structures caused by the inconsistency between 2D editings. Our critical insight is that low-frequency components of images are more multiview-consistent after editing compared with their high-frequency parts. Moreover, the appearance style is mainly exhibited on the low-frequency components, and the content details especially reside in high-frequency parts. This motivates us to perform editing on low-frequency components, which results in high-fidelity edited scenes. In addition, the editing is performed in the low-frequency feature space, enabling stable intensity control and novel scene transfer. Comprehensive experiments conducted on photorealistic datasets demonstrate the superior performance of high-fidelity and transferable NeRF editing. The project page is at \url{https://aigc3d.github.io/freditor}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,889 |
1308.5010 | Sentiment in New York City: A High Resolution Spatial and Temporal View | Measuring public sentiment is a key task for researchers and policymakers alike. The explosion of available social media data allows for a more time-sensitive and geographically specific analysis than ever before. In this paper we analyze data from the micro-blogging site Twitter and generate a sentiment map of New York City. We develop a classifier specifically tuned for 140-character Twitter messages, or tweets, using key words, phrases and emoticons to determine the mood of each tweet. This method, combined with geotagging provided by users, enables us to gauge public sentiment on extremely fine-grained spatial and temporal scales. We find that public mood is generally highest in public parks and lowest at transportation hubs, and locate other areas of strong sentiment such as cemeteries, medical centers, a jail, and a sewage facility. Sentiment progressively improves with proximity to Times Square. Periodic patterns of sentiment fluctuate on both a daily and a weekly scale: more positive tweets are posted on weekends than on weekdays, with a daily peak in sentiment around midnight and a nadir between 9:00 a.m. and noon. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 26,588 |
1902.04983 | Model based string stability of adaptive cruise control systems using
field data | This article is motivated by the lack of empirical data on the performance of commercially available Society of Automotive Engineers level one automated driving systems. To address this, a set of car following experiments are conducted to collect data from a 2015 luxury electric vehicle equipped with a commercial adaptive cruise control (ACC) system. Velocity, relative velocity, and spacing data collected during the experiments are used to calibrate an optimal velocity relative velocity car following model for both the minimum and maximum following settings. The string stability of both calibrated models is assessed, and it is determined that the best-fit models are string unstable, indicating they are not able to prevent all traffic disturbances from amplifying into phantom jams. Based on the calibrated models, we identify the consequences of the string unstable ACC system on synthetic and empirical lead vehicle disturbances, highlighting that some disturbances can be dampened even with string unstable commercial ACC platoons of moderate size. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 121,452 |
2304.05977 | ImageReward: Learning and Evaluating Human Preferences for Text-to-Image
Generation | We present a comprehensive solution to learn and improve text-to-image models from human preference feedback. To begin with, we build ImageReward -- the first general-purpose text-to-image human preference reward model -- to effectively encode human preferences. Its training is based on our systematic annotation pipeline including rating and ranking, which collects 137k expert comparisons to date. In human evaluation, ImageReward outperforms existing scoring models and metrics, making it a promising automatic metric for evaluating text-to-image synthesis. On top of it, we propose Reward Feedback Learning (ReFL), a direct tuning algorithm to optimize diffusion models against a scorer. Both automatic and human evaluation support ReFL's advantages over compared methods. All code and datasets are provided at \url{https://github.com/THUDM/ImageReward}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 357,802 |
1611.04254 | Towards Information Privacy for the Internet of Things | In an Internet of Things network, multiple sensors send information to a fusion center for it to infer a public hypothesis of interest. However, the same sensor information may be used by the fusion center to make inferences of a private nature that the sensors wish to protect. To model this, we adopt a decentralized hypothesis testing framework with binary public and private hypotheses. Each sensor makes a private observation and utilizes a local sensor decision rule or privacy mapping to summarize that observation independently of the other sensors. The local decision made by a sensor is then sent to the fusion center. Without assuming knowledge of the joint distribution of the sensor observations and hypotheses, we adopt a nonparametric learning approach to design local privacy mappings. We introduce the concept of an empirical normalized risk, which provides a theoretical guarantee for the network to achieve information privacy for the private hypothesis with high probability when the number of training samples is large. We develop iterative optimization algorithms to determine an appropriate privacy threshold and the best sensor privacy mappings, and show that they converge. Finally, we extend our approach to the case of a private multiple hypothesis. Numerical results on both synthetic and real data sets suggest that our proposed approach yields low error rates for inferring the public hypothesis, but high error rates for detecting the private hypothesis. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 63,821 |
2006.02636 | The Importance of Prior Knowledge in Precise Multimodal Prediction | Roads have well defined geometries, topologies, and traffic rules. While this has been widely exploited in motion planning methods to produce maneuvers that obey the law, little work has been devoted to utilize these priors in perception and motion forecasting methods. In this paper we propose to incorporate these structured priors as a loss function. In contrast to imposing hard constraints, this approach allows the model to handle non-compliant maneuvers when those happen in the real world. Safe motion planning is the end goal, and thus a probabilistic characterization of the possible future developments of the scene is key to choose the plan with the lowest expected cost. Towards this goal, we design a framework that leverages REINFORCE to incorporate non-differentiable priors over sample trajectories from a probabilistic model, thus optimizing the whole distribution. We demonstrate the effectiveness of our approach on real-world self-driving datasets containing complex road topologies and multi-agent interactions. Our motion forecasts not only exhibit better precision and map understanding, but most importantly result in safer motion plans taken by our self-driving vehicle. We emphasize that despite the importance of this evaluation, it has been often overlooked by previous perception and motion forecasting works. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 180,100 |
1606.04335 | LLFR: A Lanczos-Based Latent Factor Recommender for Big Data Scenarios | The purpose of this master's thesis is to study and develop a new algorithmic framework for Collaborative Filtering to produce recommendations in the top-N recommendation problem. Thus, we propose Lanczos Latent Factor Recommender (LLFR); a novel "big data friendly" collaborative filtering algorithm for top-N recommendation. Using a computationally efficient Lanczos-based procedure, LLFR builds a low dimensional item similarity model, that can be readily exploited to produce personalized ranking vectors over the item space. A number of experiments on real datasets indicate that LLFR outperforms other state-of-the-art top-N recommendation methods from a computational as well as a qualitative perspective. Our experimental results also show that its relative performance gains, compared to competing methods, increase as the data get sparser, as in the Cold Start Problem. More specifically, this is true both when the sparsity is generalized - as in the New Community Problem, a very common problem faced by real recommender systems in their beginning stages, when there is not a sufficient number of ratings for the collaborative filtering algorithms to uncover similarities between items or users - and in the very interesting case where the sparsity is localized in a small fraction of the dataset - as in the New Users Problem, where new users are introduced to the system, they have not rated many items and thus, the CF algorithm cannot make reliable personalized recommendations yet. | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 57,237
2409.11351 | Closed-loop Analysis of ADMM-based Suboptimal Linear Model Predictive
Control | Many practical applications of optimal control are subject to real-time computational constraints. When applying model predictive control (MPC) in these settings, respecting timing constraints is achieved by limiting the number of iterations of the optimization algorithm used to compute control actions at each time step, resulting in so-called suboptimal MPC. This paper proposes a suboptimal MPC scheme based on the alternating direction method of multipliers (ADMM). With a focus on the linear quadratic regulator problem with state and input constraints, we show how ADMM can be used to split the MPC problem into iterative updates of an unconstrained optimal control problem (with an analytical solution), and a dynamics-free feasibility step. We show that a warm-start approach combined with enough iterations per time step yields an ADMM-based suboptimal MPC scheme which asymptotically stabilizes the system and maintains recursive feasibility. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 489,116
1806.08251 | Learning Multimodal Representations for Unseen Activities | We present a method to learn a joint multimodal representation space that enables recognition of unseen activities in videos. We first compare the effect of placing various constraints on the embedding space using paired text and video data. We also propose a method to improve the joint embedding space using an adversarial formulation, allowing it to benefit from unpaired text and video data. By using unpaired text data, we show the ability to learn a representation that better captures unseen activities. In addition to testing on publicly available datasets, we introduce a new, large-scale text/video dataset. We experimentally confirm that using paired and unpaired data to learn a shared embedding space benefits three difficult tasks (i) zero-shot activity classification, (ii) unsupervised activity discovery, and (iii) unseen activity captioning, outperforming the state of the art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 101,127 |
2207.10398 | D2-TPred: Discontinuous Dependency for Trajectory Prediction under Traffic Lights | A profound understanding of inter-agent relationships and motion behaviors is important to achieve high-quality planning when navigating in complex scenarios, especially at urban traffic intersections. We present a trajectory prediction approach with respect to traffic lights, D2-TPred, which uses a spatial dynamic interaction graph (SDG) and a behavior dependency graph (BDG) to handle the problem of discontinuous dependency in the spatial-temporal space. Specifically, the SDG is used to capture spatial interactions by reconstructing sub-graphs for different agents with dynamic and changeable characteristics during each frame. The BDG is used to infer motion tendency by modeling the implicit dependency of the current state on prior behaviors, especially the discontinuous motions corresponding to acceleration, deceleration, or turning direction. Moreover, we present a new dataset for vehicle trajectory prediction under traffic lights called VTP-TL. Our experimental results show that our model achieves more than 20.45% and 20.78% improvement in terms of ADE and FDE, respectively, on VTP-TL compared to other trajectory prediction algorithms. The dataset and code are available at: https://github.com/VTP-TL/D2-TPred. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,252 |
2404.10935 | Molecular relaxation by reverse diffusion with time step prediction | Molecular relaxation, finding the equilibrium state of a non-equilibrium structure, is an essential component of computational chemistry to understand reactivity. Classical force field (FF) methods often rely on insufficient local energy minimization, while neural network FF models require large labeled datasets encompassing both equilibrium and non-equilibrium structures. As a remedy, we propose MoreRed, molecular relaxation by reverse diffusion, a conceptually novel and purely statistical approach where non-equilibrium structures are treated as noisy instances of their corresponding equilibrium states. To enable the denoising of arbitrarily noisy inputs via a generative diffusion model, we further introduce a novel diffusion time step predictor. Notably, MoreRed learns a simpler pseudo potential energy surface (PES) instead of the complex physical PES. It is trained on a significantly smaller, and thus computationally cheaper, dataset consisting of solely unlabeled equilibrium structures, avoiding the computation of non-equilibrium structures altogether. We compare MoreRed to classical FFs, equivariant neural network FFs trained on a large dataset of equilibrium and non-equilibrium data, as well as a semi-empirical tight-binding model. To assess this quantitatively, we evaluate the root-mean-square deviation between the found equilibrium structures and the reference equilibrium structures as well as their energies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 447,310 |
1406.3295 | Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections | In the framework of multidimensional Compressed Sensing (CS), we introduce an analytical reconstruction formula that allows one to recover an $N$th-order $(I_1\times I_2\times \cdots \times I_N)$ data tensor $\underline{\mathbf{X}}$ from a reduced set of multi-way compressive measurements by exploiting its low multilinear-rank structure. Moreover, we show that an interesting property of multi-way measurements allows us to build the reconstruction based on compressive linear measurements taken only in two selected modes, independently of the tensor order $N$. In addition, it is proved that, in the matrix case and in a particular case with $3$rd-order tensors where the same 2D sensor operator is applied to all mode-3 slices, the proposed reconstruction $\underline{\mathbf{X}}_\tau$ is stable in the sense that the approximation error is comparable to the one provided by the best low-multilinear-rank approximation, where $\tau$ is a threshold parameter that controls the approximation error. Through the analysis of the upper bound of the approximation error we show that, in the 2D case, an optimal value for the threshold parameter $\tau=\tau_0 > 0$ exists, which is confirmed by our simulation results. On the other hand, our experiments on 3D datasets show that very good reconstructions are obtained using $\tau=0$, which means that this parameter does not need to be tuned. Our extensive simulation results demonstrate the stability and robustness of the method when it is applied to real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based CS methods specialized for multidimensional signals is also included. A very attractive characteristic of the proposed method is that it provides a direct computation, i.e. it is non-iterative, in contrast to all existing sparsity-based CS algorithms, thus providing super fast computations, even for large datasets. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 33,834 |
2408.14441 | Attend-Fusion: Efficient Audio-Visual Fusion for Video Classification | Exploiting both audio and visual modalities for video classification is a challenging task, as the existing methods require large model architectures, leading to high computational complexity and resource requirements. Smaller architectures, on the other hand, struggle to achieve optimal performance. In this paper, we propose Attend-Fusion, an audio-visual (AV) fusion approach that introduces a compact model architecture specifically designed to capture intricate audio-visual relationships in video data. Through extensive experiments on the challenging YouTube-8M dataset, we demonstrate that Attend-Fusion achieves an F1 score of 75.64\% with only 72M parameters, which is comparable to the performance of larger baseline models such as Fully-Connected Late Fusion (75.96\% F1 score, 341M parameters). Attend-Fusion achieves similar performance to the larger baseline model while reducing the model size by nearly 80\%, highlighting its efficiency in terms of model complexity. Our work demonstrates that the Attend-Fusion model effectively combines audio and visual information for video classification, achieving competitive performance with significantly reduced model size. This approach opens new possibilities for deploying high-performance video understanding systems in resource-constrained environments across various applications. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 483,535 |
2311.16419 | A review on the charging station planning and fleet operation for electric freight vehicles | Freight electrification introduces new opportunities and challenges for planning and operation. Although research on charging infrastructure planning and operation is widely available for general electric vehicles, unique physical and operational characteristics of electric freight vehicles (EFVs) coupled with specific patterns of logistics require dedicated research. This paper presents a comprehensive literature review to gain a better understanding of the state-of-the-art research efforts related to planning (charging station siting and sizing) and operation (routing, charge scheduling, platoon scheduling, and fleet sizing) for EFVs. We classified the existing literature based on the research topics, innovations, methodologies, and solution approaches, and future research directions are identified. Different types of methodologies, such as heuristic, simulation, and mathematical programming approaches, were applied in the reviewed literature, where mathematical models account for the majority. We further narrated the specific modeling considerations for different logistic patterns and research goals with proper reasoning. To solve the proposed models, different solution approaches, including exact algorithms, metaheuristic algorithms, and software simulation, were evaluated in terms of applicability, advantages, and disadvantages. This paper helps to draw more attention to the planning and operation issues and solutions for freight electrification and facilitates future studies on EFVs to ensure a smooth transition to a clean freight system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 410,876 |
2008.10267 | A Computational Analysis of Real-World DJ Mixes using Mix-To-Track Subsequence Alignment | A DJ mix is a sequence of music tracks concatenated seamlessly, typically rendered for audiences in a live setting by a DJ on stage. As a DJ mix is produced in a studio or the live version is recorded for music streaming services, computational methods to analyze DJ mixes, for example, extracting track information or understanding DJ techniques, have drawn research interest. Many previous works are, however, limited to identifying individual tracks in a mix or segmenting it, and the sizes of the datasets are usually small. In this paper, we provide an in-depth analysis of DJ music by aligning a mix to its original music tracks. We set up the subsequence alignment such that the audio features are less sensitive to the tempo or key change of the original track in a mix. This approach provides temporally tight mix-to-track matching from which we can obtain cue-points, transition length, mix segmentation, and musical changes in DJ performance. Using 1,557 mixes from 1001Tracklists including 13,728 tracks and 20,765 transitions, we conduct the proposed analysis and show a wide range of statistics, which may elucidate the creative process of DJ music making. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 192,952 |
2003.11991 | Estimating Treatment Effects with Observed Confounders and Mediators | Given a causal graph, the do-calculus can express treatment effects as functionals of the observational joint distribution that can be estimated empirically. Sometimes the do-calculus identifies multiple valid formulae, prompting us to compare the statistical properties of the corresponding estimators. For example, the backdoor formula applies when all confounders are observed and the frontdoor formula applies when an observed mediator transmits the causal effect. In this paper, we investigate the over-identified scenario where both confounders and mediators are observed, rendering both estimators valid. Addressing the linear Gaussian causal model, we demonstrate that either estimator can dominate the other by an unbounded constant factor. Next, we derive an optimal estimator, which leverages all observed variables, and bound its finite-sample variance. We show that it strictly outperforms the backdoor and frontdoor estimators and that this improvement can be unbounded. We also present a procedure for combining two datasets, one with observed confounders and another with observed mediators. Finally, we evaluate our methods on both simulated data and the IHDP and JTPA datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,778 |
1503.01185 | Gradient Compared Lp-LMS Algorithms for Sparse System Identification | In this paper, we propose two novel p-norm penalty least mean square (Lp-LMS) algorithms as supplements of the conventional Lp-LMS algorithm established for sparse adaptive filtering recently. A gradient comparator is employed to selectively apply the zero attractor of p-norm constraint for only those taps that have the same polarity as that of the gradient of the squared instantaneous error, which leads to the new proposed gradient compared p-norm constraint LMS algorithm (LpGC-LMS). We explain that the LpGC-LMS can achieve lower mean square error than the standard Lp-LMS algorithm theoretically and experimentally. To further improve the performance of the filter, the LpNGC-LMS algorithm is derived using a new gradient comparator which takes the sign-smoothed version of the previous one. The performance of the LpNGC-LMS is superior to that of the LpGC-LMS in theory and in simulations. Moreover, these two comparators can be easily applied to other norm constraint LMS algorithms to derive some new approaches for sparse adaptive filtering. The numerical simulation results show that the two proposed algorithms achieve better performance than the standard LMS algorithm and Lp-LMS algorithm in terms of convergence rate and steady-state behavior in sparse system identification settings. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 40,796 |
2107.08429 | Support vector machines for learning reactive islands | We develop a machine learning framework that can be applied to data sets derived from the trajectories of Hamilton's equations. The goal is to learn the phase space structures that play the governing role for phase space transport relevant to particular applications. Our focus is on learning reactive islands in two degrees-of-freedom Hamiltonian systems. Reactive islands are constructed from the stable and unstable manifolds of unstable periodic orbits and play the role of quantifying transition dynamics. We show that support vector machines (SVMs) are an appropriate machine learning framework for this purpose, as they provide an approach for finding the boundaries between qualitatively distinct dynamical behaviors, which is in the spirit of the phase space transport framework. We show how our method allows us to find reactive islands directly, in the sense that we do not have to first compute unstable periodic orbits and their stable and unstable manifolds. We apply our approach to the H\'enon-Heiles Hamiltonian system, which is a benchmark system in the dynamical systems community. We discuss different sampling and learning approaches and their advantages and disadvantages. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 246,729 |
2411.06543 | Magnetic Field Aided Vehicle Localization with Acceleration Correction | This paper presents a novel approach for vehicle localization by leveraging the ambient magnetic field within a given environment. Our approach involves introducing a global mathematical function for magnetic field mapping, combined with Euclidean distance-based matching technique for accurately estimating vehicle position in suburban settings. The mathematical function based map structure ensures efficiency and scalability of the magnetic field map, while the batch processing based localization provides continuity in pose estimation. Additionally, we establish a bias estimation pipeline for an onboard accelerometer by utilizing the updated poses obtained through magnetic field matching. Our work aims to showcase the potential utility of magnetic fields as supplementary aids to existing localization methods, particularly beneficial in scenarios where Global Positioning System (GPS) signal is restricted or where cost-effective navigation systems are required. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 507,156 |
2407.17228 | A Hybrid Federated Kernel Regularized Least Squares Algorithm | Federated learning is becoming an increasingly viable and accepted strategy for building machine learning models in critical privacy-preserving scenarios such as clinical settings. Often, the data involved is not limited to clinical data but also includes additional omics features (e.g. proteomics). Consequently, data is distributed not only across hospitals but also across omics centers, which are labs capable of generating such additional features from biosamples. This scenario leads to a hybrid setting where data is scattered both in terms of samples and features. In this hybrid setting, we present an efficient reformulation of the Kernel Regularized Least Squares algorithm, introduce two variants and validate them using well-established datasets. Lastly, we discuss security measures to defend against possible attacks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 475,894 |
2109.04825 | Artificial Text Detection via Examining the Topology of Attention Maps | The impressive capabilities of recent generative models to create texts that are challenging to distinguish from human-written ones can be misused for generating fake news, product reviews, and even abusive content. Despite the prominent performance of existing methods for artificial text detection, they still lack interpretability and robustness towards unseen models. To this end, we propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA), which is currently understudied in the field of NLP. We empirically show that the features derived from the BERT model outperform count- and neural-based baselines by up to 10\% on three common datasets, and tend to be the most robust towards unseen GPT-style generation models as opposed to existing methods. The probing analysis of the features reveals their sensitivity to the surface and syntactic properties. The results demonstrate that TDA is a promising direction for NLP tasks, specifically the ones that incorporate surface and structural information. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 254,552 |