id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2409.10025 | DiffATR: Diffusion-based Generative Modeling for Audio-Text Retrieval | Existing audio-text retrieval (ATR) methods are essentially discriminative models that aim to maximize the conditional likelihood, represented as p(candidates|query). Nevertheless, this methodology fails to consider the intrinsic data distribution p(query), leading to difficulties in discerning out-of-distribution data. In this work, we attempt to tackle this constraint through a generative perspective and model the relationship between audio and text as their joint probability p(candidates,query). To this end, we present a diffusion-based ATR framework (DiffATR), which models ATR as an iterative procedure that progressively generates the joint distribution from noise. Throughout its training phase, DiffATR is optimized from both generative and discriminative viewpoints: the generator is refined through a generation loss, while the feature extractor benefits from a contrastive loss, thus combining the merits of both methodologies. Experiments on the AudioCaps and Clotho datasets, with superior performance, verify the effectiveness of our approach. Notably, without any alterations, our DiffATR consistently exhibits strong performance in out-of-domain retrieval settings. | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 488,582 |
1707.00797 | Learning Deep Energy Models: Contrastive Divergence vs. Amortized MLE | We propose a number of new algorithms for learning deep energy models and demonstrate their properties. We show that our SteinCD performs well in terms of test likelihood, while SteinGAN performs well in terms of generating realistic looking images. Our results suggest promising directions for learning better models by combining GAN-style methods with traditional energy-based learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 76,415 |
1803.07306 | Capacity Analysis of Index Modulations over Spatial, Polarization and Frequency Dimensions | Determining the capacity of a modulation scheme is a fundamental topic of interest. Index Modulations (IM), such as Spatial Modulation (SMod), Polarized Modulation (PMod) or Frequency Index Modulation (FMod), are widely studied in the literature. However, finding a closed-form analytical expression for their capacity still remains an open topic. In this paper, we formulate closed-form expressions for the instantaneous capacity of IM, together with its $2$nd and $4$th order approximations. We show that, on average, the $2$nd approximation error tends to zero for low Signal to Noise Ratio (SNR) and is $o\left(\textrm{SNR}\right)$. Also, a detailed analysis of the ergodic capacity over Rayleigh, Rice and Nakagami-$m$ channel distributions is provided. As an application of the capacity analysis, we leverage the proposed expressions to compute the ergodic capacities of SMod for different antenna configurations and correlations, PMod for different channel components and conditions, and FMod for different frequency separations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 93,016 |
2208.02894 | Redesigning Multi-Scale Neural Network for Crowd Counting | Perspective distortions and crowd variations make crowd counting a challenging task in computer vision. To tackle it, many previous works have used multi-scale architecture in deep neural networks (DNNs). Multi-scale branches can be either directly merged (e.g. by concatenation) or merged through the guidance of proxies (e.g. attentions) in the DNNs. Despite their prevalence, these combination methods are not sophisticated enough to deal with the per-pixel performance discrepancy over multi-scale density maps. In this work, we redesign the multi-scale neural network by introducing a hierarchical mixture of density experts, which hierarchically merges multi-scale density maps for crowd counting. Within the hierarchical structure, an expert competition and collaboration scheme is presented to encourage contributions from all scales; pixel-wise soft gating nets are introduced to provide pixel-wise soft weights for scale combinations in different hierarchies. The network is optimized using both the crowd density map and the local counting map, where the latter is obtained by local integration on the former. Optimizing both can be problematic because of their potential conflicts. We introduce a new relative local counting loss based on relative count differences among hard-predicted local regions in an image, which proves to be complementary to the conventional absolute error loss on the density map. Experiments show that our method achieves the state-of-the-art performance on five public datasets, i.e. ShanghaiTech, UCF_CC_50, JHU-CROWD++, NWPU-Crowd and Trancos. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,607 |
2406.14934 | Learning Autonomous Race Driving with Action Mapping Reinforcement Learning | Autonomous race driving poses a complex control challenge as vehicles must be operated at the edge of their handling limits to reduce lap times while respecting physical and safety constraints. This paper presents a novel reinforcement learning (RL)-based approach, incorporating the action mapping (AM) mechanism to manage state-dependent input constraints arising from limited tire-road friction. A numerical approximation method is proposed to implement AM, addressing the complex dynamics associated with the friction constraints. The AM mechanism also allows the learned driving policy to be generalized to different friction conditions. Experimental results in our developed race simulator demonstrate that the proposed AM-RL approach achieves superior lap times and better success rates compared to the conventional RL-based approaches. The generalization capability of driving policy with AM is also validated in the experiments. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 466,542 |
1009.3613 | On the Doubt about Margin Explanation of Boosting | Margin theory provides one of the most popular explanations to the success of \texttt{AdaBoost}, where the central point lies in the recognition that \textit{margin} is the key for characterizing the performance of \texttt{AdaBoost}. This theory has been very influential, e.g., it has been used to argue that \texttt{AdaBoost} usually does not overfit since it tends to enlarge the margin even after the training error reaches zero. Previously the \textit{minimum margin bound} was established for \texttt{AdaBoost}, however, \cite{Breiman1999} pointed out that maximizing the minimum margin does not necessarily lead to a better generalization. Later, \cite{Reyzin:Schapire2006} emphasized that the margin distribution rather than minimum margin is crucial to the performance of \texttt{AdaBoost}. In this paper, we first present the \textit{$k$th margin bound} and further study its relationship to previous work such as the minimum margin bound and Emargin bound. Then, we improve the previous empirical Bernstein bounds \citep{Maurer:Pontil2009,Audibert:Munos:Szepesvari2009}, and based on such findings, we defend the margin-based explanation against Breiman's doubts by proving a new generalization error bound that considers exactly the same factors as \cite{Schapire:Freund:Bartlett:Lee1998} but is sharper than \cite{Breiman1999}'s minimum margin bound. By incorporating factors such as average margin and variance, we present a generalization error bound that is heavily related to the whole margin distribution. We also provide margin distribution bounds for generalization error of voting classifiers in finite VC-dimension space. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 7,586 |
2007.16135 | Improved Time Warp Edit Distance -- A Parallel Dynamic Program in Linear Memory | Edit Distance is a classic family of dynamic programming problems, among which Time Warp Edit Distance refines the problem with the notion of a metric and temporal elasticity. A novel Improved Time Warp Edit Distance algorithm that is both massively parallelizable and requires only linear storage is presented. This method uses the procession of a three diagonal band to cover the original dynamic program space. Every element of the diagonal update can be computed in parallel. The core method is a feature of the TWED Longest Common Subsequence data dependence and is applicable to dynamic programs that share similar band subproblem structure. The algorithm has been implemented as a CUDA C library with Python bindings. Speedups for challenging problems are phenomenal. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 189,858 |
2405.15941 | A Unified Theory of Stochastic Proximal Point Methods without Smoothness | This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM). Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning, a trait not shared by the dominant stochastic gradient descent (SGD) algorithm. A framework of assumptions that we introduce encompasses methods employing techniques such as variance reduction and arbitrary sampling. A cornerstone of our general theoretical approach is a parametric assumption on the iterates, correction and control vectors. We establish a single theorem that ensures linear convergence under this assumption and the $\mu$-strong convexity of the loss function, and without the need to invoke smoothness. This integral theorem reinstates best known complexity and convergence guarantees for several existing methods which demonstrates the robustness of our approach. We expand our study by developing three new variants of SPPM, and through numerical experiments we elucidate various properties inherent to them. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,174 |
2301.06943 | Self-supervised Domain Adaptation for Breaking the Limits of Low-quality Fundus Image Quality Enhancement | Retinal fundus images have been applied for the diagnosis and screening of eye diseases, such as Diabetic Retinopathy (DR) or Diabetic Macular Edema (DME). However, both low-quality fundus images and style inconsistency potentially increase uncertainty in the diagnosis of fundus disease and even lead to misdiagnosis by ophthalmologists. Most of the existing image enhancement methods mainly focus on improving the image quality by leveraging the guidance of high-quality images, which are difficult to collect in medical applications. In this paper, we tackle image quality enhancement in a fully unsupervised setting, i.e., neither paired images nor high-quality images. To this end, we explore the potential of the self-supervised task for improving the quality of fundus images without the requirement of high-quality reference images. Specifically, we construct multiple patch-wise domains via an auxiliary pre-trained quality assessment network and a style clustering. To achieve robust low-quality image enhancement and address style inconsistency, we formulate two self-supervised domain adaptation tasks to disentangle the features of image content, low-quality factor and style information by exploring intrinsic supervision signals within the low-quality images. Extensive experiments are conducted on EyeQ and Messidor datasets, and results show that our DASQE method achieves new state-of-the-art performance when only low-quality images are available. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 340,788 |
2404.17608 | Synthesizing Audio from Silent Video using Sequence to Sequence Modeling | Generating audio from a video's visual context has multiple practical applications in improving how we interact with audio-visual media - for example, enhancing CCTV footage analysis, restoring historical videos (e.g., silent movies), and improving video generation models. We propose a novel method to generate audio from video using a sequence-to-sequence model, improving on prior work that used CNNs and WaveNet and faced sound diversity and generalization challenges. Our approach employs a 3D Vector Quantized Variational Autoencoder (VQ-VAE) to capture the video's spatial and temporal structures, decoding with a custom audio decoder for a broader range of sounds. Trained on the Youtube8M dataset segment, focusing on specific domains, our model aims to enhance applications like CCTV footage analysis, silent movie restoration, and video generation models. | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 449,922 |
2306.13440 | Trading-off price for data quality to achieve fair online allocation | We consider the problem of online allocation subject to a long-term fairness penalty. Contrary to existing works, however, we do not assume that the decision-maker observes the protected attributes -- which is often unrealistic in practice. Instead they can purchase data that help estimate them from sources of different quality; and hence reduce the fairness penalty at some cost. We model this problem as a multi-armed bandit problem where each arm corresponds to the choice of a data source, coupled with the online allocation problem. We propose an algorithm that jointly solves both problems and show that it has a regret bounded by $\mathcal{O}(\sqrt{T})$. A key difficulty is that the rewards received by selecting a source are correlated by the fairness penalty, which leads to a need for randomization (despite a stochastic setting). Our algorithm takes into account contextual information available before the source selection, and can adapt to many different fairness notions. We also show that in some instances, the estimates used can be learned on the fly. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 375,278 |
2502.00744 | CoNNect: A Swiss-Army-Knife Regularizer for Pruning of Neural Networks | Pruning encompasses a range of techniques aimed at increasing the sparsity of neural networks (NNs). These techniques can generally be framed as minimizing a loss function subject to an $L_0$-norm constraint. This paper introduces CoNNect, a novel differentiable regularizer for sparse NN training that ensures connectivity between input and output layers. CoNNect integrates with established pruning strategies and supports both structured and unstructured pruning. We prove that CoNNect approximates $L_0$-regularization, guaranteeing maximally connected network structures while avoiding issues like layer collapse. Numerical experiments demonstrate that CoNNect improves classical pruning strategies and enhances state-of-the-art one-shot pruners, such as DepGraph and LLM-pruner. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 529,529 |
2006.09462 | Selective Question Answering under Domain Shift | To avoid giving wrong answers, question answering (QA) models need to know when to abstain from answering. Moreover, users often ask questions that diverge from the model's training data, making errors more likely and thus abstention more critical. In this work, we propose the setting of selective question answering under domain shift, in which a QA model is tested on a mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain on) as many questions as possible while maintaining high accuracy. Abstention policies based solely on the model's softmax probabilities fare poorly, since models are overconfident on out-of-domain inputs. Instead, we train a calibrator to identify inputs on which the QA model errs, and abstain when it predicts an error is likely. Crucially, the calibrator benefits from observing the model's behavior on out-of-domain data, even if from a different domain than the test data. We combine this method with a SQuAD-trained QA model and evaluate on mixtures of SQuAD and five other QA datasets. Our method answers 56% of questions while maintaining 80% accuracy; in contrast, directly using the model's probabilities only answers 48% at 80% accuracy. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 182,547 |
2203.05794 | BERTopic: Neural topic modeling with a class-based TF-IDF procedure | Topic models can be useful tools to discover latent topics in collections of documents. Recent studies have shown the feasibility of approaching topic modeling as a clustering task. We present BERTopic, a topic model that extends this process by extracting coherent topic representations through the development of a class-based variation of TF-IDF. More specifically, BERTopic generates document embeddings with pre-trained transformer-based language models, clusters these embeddings, and finally, generates topic representations with the class-based TF-IDF procedure. BERTopic generates coherent topics and remains competitive across a variety of benchmarks involving classical models and those that follow the more recent clustering approach of topic modeling. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 284,915 |
2407.12790 | GPT Czech Poet: Generation of Czech Poetic Strophes with Language Models | High-quality automated poetry generation systems are currently only available for a small subset of languages. We introduce a new model for generating poetry in Czech language, based on fine-tuning a pre-trained Large Language Model. We demonstrate that guiding the generation process by explicitly specifying strophe parameters within the poem text strongly improves the effectiveness of the model. We also find that appropriate tokenization is crucial, showing that tokenization methods based on syllables or individual characters instead of subwords prove superior in generating poetic strophes. We further enhance the results by introducing \textit{Forced~generation}, adding explicit specifications of meter and verse parameters at inference time based on the already generated text. We evaluate a range of setups, showing that our proposed approach achieves high accuracies in rhyming and metric aspects of formal quality of the generated poems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 474,075 |
2307.15439 | Optimal Alignment of Temporal Knowledge Bases | Answering temporal CQs over temporalized Description Logic knowledge bases (TKB) is a main technique to realize ontology-based situation recognition. In case the collected data in such a knowledge base is inaccurate, important query answers can be missed. In this paper we introduce the TKB Alignment problem, which computes a variant of the TKB that minimally changes the TKB, but entails the given temporal CQ and is in that sense (cost-)optimal. We investigate this problem for ALC TKBs and conjunctive queries with LTL operators and devise a solution technique to compute (cost-optimal) alignments of TKBs that extends techniques for the alignment problem for propositional LTL over finite traces. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 382,271 |
2301.01649 | Attention-Based Recurrence for Multi-Agent Reinforcement Learning under Stochastic Partial Observability | Stochastic partial observability poses a major challenge for decentralized coordination in multi-agent reinforcement learning but is largely neglected in state-of-the-art research due to a strong focus on state-based centralized training for decentralized execution (CTDE) and benchmarks that lack sufficient stochasticity like StarCraft Multi-Agent Challenge (SMAC). In this paper, we propose Attention-based Embeddings of Recurrence In multi-Agent Learning (AERIAL) to approximate value functions under stochastic partial observability. AERIAL replaces the true state with a learned representation of multi-agent recurrence, considering more accurate information about decentralized agent decisions than state-based CTDE. We then introduce MessySMAC, a modified version of SMAC with stochastic observations and higher variance in initial states, to provide a more general and configurable benchmark regarding stochastic partial observability. We evaluate AERIAL in Dec-Tiger as well as in a variety of SMAC and MessySMAC maps, and compare the results with state-based CTDE. Furthermore, we evaluate the robustness of AERIAL and state-based CTDE against various stochasticity configurations in MessySMAC. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 339,287 |
2408.00759 | Text-Guided Video Masked Autoencoder | Recent video masked autoencoder (MAE) works have designed improved masking algorithms focused on saliency. These works leverage visual cues such as motion to mask the most salient regions. However, the robustness of such visual cues depends on how often input videos match underlying assumptions. On the other hand, natural language description is an information dense representation of video that implicitly captures saliency without requiring modality-specific assumptions, and has not been explored yet for video MAE. To this end, we introduce a novel text-guided masking algorithm (TGM) that masks the video regions with highest correspondence to paired captions. Without leveraging any explicit visual cues for saliency, our TGM is competitive with state-of-the-art masking algorithms such as motion-guided masking. To further benefit from the semantics of natural language for masked reconstruction, we next introduce a unified framework for joint MAE and masked video-text contrastive learning. We show that across existing masking algorithms, unifying MAE and masked video-text contrastive learning improves downstream performance compared to pure MAE on a variety of video recognition tasks, especially for linear probe. Within this unified framework, our TGM achieves the best relative performance on five action recognition and one egocentric datasets, highlighting the complementary nature of natural language for masked video modeling. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 477,961 |
2208.05032 | An Integrated Actuation-Perception Framework for Robotic Leaf Retrieval: Detection, Localization, and Cutting | Contemporary robots in precision agriculture focus primarily on automated harvesting or remote sensing to monitor crop health. Comparatively less work has been performed with respect to collecting physical leaf samples in the field and retaining them for further analysis. Typically, orchard growers manually collect sample leaves and utilize them for stem water potential measurements to analyze tree health and determine irrigation routines. While this technique benefits orchard management, the process of collecting, assessing, and interpreting measurements requires significant human labor and often leads to infrequent sampling. Automated sampling can provide highly accurate and timely information to growers. The first step in such automated in-situ leaf analysis is identifying and cutting a leaf from a tree. This retrieval process requires new methods for actuation and perception. We present a technique for detecting and localizing candidate leaves using point cloud data from a depth camera. This technique is tested on both indoor and outdoor point clouds from avocado trees. We then use a custom-built leaf-cutting end-effector on a 6-DOF robotic arm to test the proposed detection and localization technique by cutting leaves from an avocado tree. Experimental testing with a real avocado tree demonstrates our proposed approach can enable our mobile manipulator and custom end-effector system to successfully detect, localize, and cut leaves. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 312,293 |
2310.02753 | MUNCH: Modelling Unique 'N Controllable Heads | The automated generation of 3D human heads has been an intriguing and challenging task for computer vision researchers. Prevailing methods synthesize realistic avatars but with limited control over the diversity and quality of rendered outputs and suffer from limited correlation between shape and texture of the character. We propose a method that offers quality, diversity, control, and realism along with explainable network design, all desirable features to game-design artists in the domain. First, our proposed Geometry Generator identifies disentangled latent directions and generates novel and diverse samples. A Render Map Generator then learns to synthesize multiple high-fidelity physically-based render maps including Albedo, Glossiness, Specular, and Normals. For artists preferring fine-grained control over the output, we introduce a novel Color Transformer Model that allows semantic color control over generated maps. We also introduce quantifiable metrics called Uniqueness and Novelty and a combined metric to test the overall performance of our model. A demo for both shapes and textures can be found at: https://munch-seven.vercel.app/. We will release our model along with the synthetic dataset. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 396,983 |
2407.09535 | Assessing Annotation Accuracy in Ice Sheets Using Quantitative Metrics | The increasing threat of sea level rise due to climate change necessitates a deeper understanding of ice sheet structures. This study addresses the need for accurate ice sheet data interpretation by introducing a suite of quantitative metrics designed to validate ice sheet annotation techniques. Focusing on both manual and automated methods, including ARESELP and its modified version, MARESELP, we assess their accuracy against expert annotations. Our methodology incorporates several computer vision metrics, traditionally underutilized in glaciological research, to evaluate the continuity and connectivity of ice layer annotations. The results demonstrate that while manual annotations provide invaluable expert insights, automated methods, particularly MARESELP, improve layer continuity and alignment with expert labels. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,625 |
1912.05845 | Local Context Normalization: Revisiting Local Normalization | Normalization layers have been shown to improve convergence in deep neural networks, and even add useful inductive biases. In many vision applications the local spatial context of the features is important, but most common normalization schemes including Group Normalization (GN), Instance Normalization (IN), and Layer Normalization (LN) normalize over the entire spatial dimension of a feature. This can wash out important signals and degrade performance. For example, in applications that use satellite imagery, input images can be arbitrarily large; consequently, it is nonsensical to normalize over the entire area. Positional Normalization (PN), on the other hand, only normalizes over a single spatial position at a time. A natural compromise is to normalize features by local context, while also taking into account group level information. In this paper, we propose Local Context Normalization (LCN): a normalization layer where every feature is normalized based on a window around it and the filters in its group. We propose an algorithmic solution to make LCN efficient for arbitrary window sizes, even if every point in the image has a unique window. LCN outperforms its Batch Normalization (BN), GN, IN, and LN counterparts for object detection, semantic segmentation, and instance segmentation applications in several benchmark datasets, while keeping performance independent of the batch size and facilitating transfer learning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 157,205 |
1706.05441 | Improved Convergence Rates for Distributed Resource Allocation | In this paper, we develop a class of decentralized algorithms for solving a convex resource allocation problem in a network of $n$ agents, where the agent objectives are decoupled while the resource constraints are coupled. The agents communicate over a connected undirected graph, and they want to collaboratively determine a solution to the overall network problem, while each agent only communicates with its neighbors. We first study the connection between the decentralized resource allocation problem and the decentralized consensus optimization problem. Then, using a class of algorithms for solving consensus optimization problems, we propose a novel class of decentralized schemes for solving resource allocation problems in a distributed manner. Specifically, we first propose an algorithm for solving the resource allocation problem with an $o(1/k)$ convergence rate guarantee when the agents' objective functions are generally convex (could be nondifferentiable) and per agent local convex constraints are allowed; We then propose a gradient-based algorithm for solving the resource allocation problem when per agent local constraints are absent and show that such scheme can achieve geometric rate when the objective functions are strongly convex and have Lipschitz continuous gradients. We have also provided scalability/network dependency analysis. Based on these two algorithms, we have further proposed a gradient projection-based algorithm which can handle smooth objective and simple constraints more efficiently. Numerical experiments demonstrate the viability and performance of all the proposed algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 75,514 |
1811.10106 | Sparse PCA from Sparse Linear Regression | Sparse Principal Component Analysis (SPCA) and Sparse Linear Regression (SLR) have a wide range of applications and have attracted a tremendous amount of attention in the last two decades as canonical examples of statistical problems in high dimension. A variety of algorithms have been proposed for both SPCA and SLR, but an explicit connection between the two had not been made. We show how to efficiently transform a black-box solver for SLR into an algorithm for SPCA: assuming the SLR solver satisfies prediction error guarantees achieved by existing efficient algorithms such as those based on the Lasso, the SPCA algorithm derived from it achieves near state of the art guarantees for testing and for support recovery for the single spiked covariance model as obtained by the current best polynomial-time algorithms. Our reduction not only highlights the inherent similarity between the two problems, but also, from a practical standpoint, allows one to obtain a collection of algorithms for SPCA directly from known algorithms for SLR. We provide experimental results on simulated data comparing our proposed framework to other algorithms for SPCA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 114,409 |
2006.14831 | Covariance-engaged Classification of Sets via Linear Programming | Set classification aims to classify a set of observations as a whole, as opposed to classifying individual observations separately. To formally understand the unfamiliar concept of binary set classification, we first investigate the optimal decision rule under the normal distribution, which utilizes the empirical covariance of the set to be classified. We show that the number of observations in the set plays a critical role in bounding the Bayes risk. Under this framework, we further propose new methods of set classification. For the case where only a few parameters of the model drive the difference between two classes, we propose a computationally-efficient approach to parameter estimation using linear programming, leading to the Covariance-engaged LInear Programming Set (CLIPS) classifier. Its theoretical properties are investigated for both independent case and various (short-range and long-range dependent) time series structures among observations within each set. The convergence rates of estimation errors and risk of the CLIPS classifier are established to show that having multiple observations in a set leads to faster convergence rates, compared to the standard classification situation in which there is only one observation in the set. The applicable domains in which the CLIPS performs better than competitors are highlighted in a comprehensive simulation study. Finally, we illustrate the usefulness of the proposed methods in classification of real image data in histopathology. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,347 |
2501.06081 | Averaged Adam accelerates stochastic optimization in the training of
deep neural network approximations for partial differential equation and
optimal control problems | Deep learning methods - usually consisting of a class of deep neural networks (DNNs) trained by a stochastic gradient descent (SGD) optimization method - are nowadays omnipresent in data-driven learning problems as well as in scientific computing tasks such as optimal control (OC) and partial differential equation (PDE) problems. In practically relevant learning tasks, often not the plain-vanilla standard SGD optimization method is employed to train the considered class of DNNs but instead more sophisticated adaptive and accelerated variants of the standard SGD method such as the popular Adam optimizer are used. Inspired by the classical Polyak-Ruppert averaging approach, in this work we apply averaged variants of the Adam optimizer to train DNNs to approximately solve exemplary scientific computing problems in the form of PDEs and OC problems. We test the averaged variants of Adam in a series of learning problems including physics-informed neural network (PINN), deep backward stochastic differential equation (deep BSDE), and deep Kolmogorov approximations for PDEs (such as heat, Black-Scholes, Burgers, and Allen-Cahn PDEs), including DNN approximations for OC problems, and including DNN approximations for image classification problems (ResNet for CIFAR-10). In each of the numerical examples the employed averaged variants of Adam outperform the standard Adam and the standard SGD optimizers, particularly, in the situation of the scientific machine learning problems. The Python source codes for the numerical experiments associated to this work can be found on GitHub at https://github.com/deeplearningmethods/averaged-adam. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 523,829 |
1305.1787 | Evolution of the user's content: An Overview of the state of the art | The evolution of the user's content still remains a problem for an accurate recommendation. This is why the current research aims to design Recommender Systems (RS) able to continually adapt information that matches the user's interests. This paper aims to explain this problematic point by outlining the proposals that have been made in research with their advantages and disadvantages. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 24,469
2311.10935 | Short-term Volatility Estimation for High Frequency Trades using
Gaussian processes (GPs) | The fundamental theorem behind financial markets is that stock prices are intrinsically complex and stochastic. One of the complexities is the volatility associated with stock prices. Volatility is a tendency for prices to change unexpectedly [1]. Price volatility is often detrimental to the return economics, and thus, investors should factor it in whenever making investment decisions, choices, and temporal or permanent moves. It is, therefore, crucial to make necessary and regular short and long-term stock price volatility forecasts for the safety and economics of investors' returns. These forecasts should be accurate and not misleading. Different models and methods, such as ARCH/GARCH models, have been intuitively implemented to make such forecasts. However, such traditional means fail to capture the short-term volatility forecasts effectively. This paper, therefore, investigates and implements a combination of numeric and probabilistic models for short-term volatility and return forecasting for high-frequency trades. The essence is that one-day-ahead volatility forecasts were made with Gaussian Processes (GPs) applied to the outputs of a Numerical market prediction (NMP) model. Firstly, the stock price data from NMP was corrected by a GP. Since it is not easy to set price limits in a market due to its free nature and randomness, a Censored GP was used to model the relationship between the corrected stock prices and returns. Forecasting errors were evaluated using the implied and estimated data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 408,725
2008.04145 | A Full Second-Order Analysis of the Widely Linear MVDR Beamformer for
Noncircular Signals | A full performance analysis of the widely linear (WL) minimum variance distortionless response (MVDR) beamformer is introduced. While the WL MVDR is known to outperform its strictly linear counterpart, the Capon beamformer, for noncircular complex signals, the existing approaches provide limited physical insights, since they explicitly or implicitly omit the complementary second-order (SO) statistics of the output interferences and noise (IN). To this end, we exploit the full SO statistics of the output IN to introduce a full SO performance analysis framework for the WL MVDR beamformer. This makes it possible to separate the overall signal-to-interference plus noise ratio (SINR) gain of the WL MVDR beamformer w.r.t. the Capon one into the individual contributions along the in-phase (I) and quadrature (Q) channels. Next, by considering the reception of the unknown signal of interest (SOI) corrupted by an arbitrary number of orthogonal noncircular interferences, we further unveil the distribution of SINR gains in both the I and Q channels, and show that in almost all the spatial cases, these performance advantages are more pronounced when the SO noncircularity rate of the interferences increases. Illustrative numerical simulations are provided to support the theoretical results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 191,148 |
2307.01325 | Robust Uncertainty Estimation for Classification of Maritime Objects | We explore the use of uncertainty estimation in the maritime domain, showing the efficacy on toy datasets (CIFAR10) and proving it on an in-house dataset, SHIPS. We present a method joining the intra-class uncertainty achieved using Monte Carlo Dropout, with recent discoveries in the field of outlier detection, to gain more holistic uncertainty measures. We explore the relationship between the introduced uncertainty measures and examine how well they work on CIFAR10 and in a real-life setting. Our work improves the FPR95 by 8% compared to the current highest-performing work when the models are trained without out-of-distribution data. We increase the performance by 77% compared to a vanilla implementation of the Wide ResNet. We release the SHIPS dataset and show the effectiveness of our method by improving the FPR95 by 44.2% with respect to the baseline. Our approach is model agnostic, easy to implement, and often does not require model retraining. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 377,312 |
2406.09112 | Large-Scale Evaluation of Open-Set Image Classification Techniques | The goal for classification is to correctly assign labels to unseen samples. However, most methods misclassify samples with unseen labels and assign them to one of the known classes. Open-Set Classification (OSC) algorithms aim to maximize both closed and open-set recognition capabilities. Recent studies showed the utility of such algorithms on small-scale data sets, but limited experimentation makes it difficult to assess their performances in real-world problems. Here, we provide a comprehensive comparison of various OSC algorithms, including training-based (SoftMax, Garbage, EOS) and post-processing methods (Maximum SoftMax Scores, Maximum Logit Scores, OpenMax, EVM, PROSER), the latter are applied on features from the former. We perform our evaluation on three large-scale protocols that mimic real-world challenges, where we train on known and negative open-set samples, and test on known and unknown instances. Our results show that EOS helps to improve performance of almost all post-processing algorithms. Particularly, OpenMax and PROSER are able to exploit better-trained networks, demonstrating the utility of hybrid models. However, while most algorithms work well on negative test samples -- samples of open-set classes seen during training -- they tend to perform poorly when tested on samples of previously unseen unknown classes, especially in challenging conditions. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 463,768 |
2002.07826 | Computer classification of linear codes | We present algorithms for classification of linear codes over finite fields, based on canonical augmentation and on lattice point enumeration. We apply these algorithms to obtain classification results over fields with 2, 3 and 4 elements. We validate a correct implementation of the algorithms with known classification results from the literature, which we partially extend to larger ranges of parameters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 164,576 |
0805.2949 | Performability Aspects of the Atlas Vo; Using Lmbench Suite | The ATLAS Virtual Organization is grid's largest Virtual Organization which is currently in full production stage. Hereby a case is being made that a user working within that VO is going to face a wide spectrum of different systems, whose heterogeneity is enough to count as "orders of magnitude" according to a number of metrics; including integer/float operations, memory throughput (STREAM) and communication latencies. Furthermore, the spread of performance does not appear to follow any known distribution pattern, which is demonstrated in graphs produced during May 2007 measurements. It is implied that the current practice where either "all-WNs-are-equal" or, the alternative of SPEC-based rating used by LCG/EGEE is an oversimplification which is inappropriate and expensive from an operational point of view, therefore new techniques are needed for optimal grid resources allocation. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 1,789 |
2403.17101 | AI Consciousness is Inevitable: A Theoretical Computer Science
Perspective | We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model (1) aligns at a high level with many of the major scientific theories of human and animal consciousness, (2) provides explanations at a high level for many phenomena associated with consciousness, and (3) gives insight into how a machine can have subjective consciousness. This combination supports our claim that machine consciousness is not only plausible but inevitable. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 441,311 |
2312.00746 | Deciphering Digital Detectives: Understanding LLM Behaviors and
Capabilities in Multi-Agent Mystery Games | In this study, we explore the application of Large Language Models (LLMs) in \textit{Jubensha}, a Chinese detective role-playing game and a novel area in Artificial Intelligence (AI) driven gaming. We introduce the first dataset specifically for Jubensha, including character scripts and game rules, to foster AI agent development in this complex narrative environment. Our work also presents a unique multi-agent interaction framework using LLMs, allowing AI agents to autonomously engage in this game. To evaluate the gaming performance of these AI agents, we developed novel methods measuring their mastery of case information and reasoning skills. Furthermore, we incorporated the latest advancements in in-context learning to improve the agents' performance in information gathering, murderer identification, and logical reasoning. The experimental results validate the effectiveness of our proposed methods. This work aims to offer a novel perspective on understanding LLM capabilities and establish a new benchmark for evaluating large language model-based agents. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 412,157 |
2107.04004 | 3D Neural Scene Representations for Visuomotor Control | Humans have a strong intuitive understanding of the 3D environment around us. The mental model of the physics in our brain applies to objects of different materials and enables us to perform a wide range of manipulation tasks that are far beyond the reach of current robots. In this work, we desire to learn models for dynamic 3D scenes purely from 2D visual observations. Our model combines Neural Radiance Fields (NeRF) and time contrastive learning with an autoencoding framework, which learns viewpoint-invariant 3D-aware scene representations. We show that a dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks involving both rigid bodies and fluids, where the target is specified in a viewpoint different from what the robot operates on. When coupled with an auto-decoding framework, it can even support goal specification from camera viewpoints that are outside the training distribution. We further demonstrate the richness of the learned 3D dynamics model by performing future prediction and novel view synthesis. Finally, we provide detailed ablation studies regarding different system designs and qualitative analysis of the learned representations. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 245,318 |
1911.09208 | Permissioned Blockchain Through the Looking Glass: Architectural and
Implementation Lessons Learned | Since the inception of Bitcoin, the distributed systems community has shown interest in the design of efficient blockchain systems. However, initial blockchain applications (like Bitcoin) attain very low throughput, which has promoted the design of permissioned blockchain systems. These permissioned blockchain systems employ classical Byzantine-Fault Tolerant (BFT) protocols to reach consensus. However, existing permissioned blockchain systems still attain low throughputs (of the order 10K txns/s). As a result, existing works blame this low throughput on the associated BFT protocol and expend resources in developing optimized protocols. We believe such blames only depict a one-sided story. Specifically, we raise a simple question: can a well-crafted system based on a classical BFT protocol outperform a modern protocol? We show that designing such a well-crafted system is possible and illustrate that even if such a system employs a three-phase protocol, it can outperform another system utilizing a single-phase protocol. This endeavor requires us to dissect a permissioned blockchain system and highlight different factors that affect its performance. Based on our insights, we present the design of our enterprise-grade, high-throughput yielding permissioned blockchain system, ResilientDB, that employs multi-threaded deep pipelines, to balance tasks at a replica, and provides guidelines for future designs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 154,431
2406.14962 | Contextual Interaction via Primitive-based Adversarial Training For
Compositional Zero-shot Learning | Compositional Zero-shot Learning (CZSL) aims to identify novel compositions via known attribute-object pairs. The primary challenge in CZSL tasks lies in the significant discrepancies introduced by the complex interaction between the visual primitives of attribute and object, consequently decreasing the classification performance towards novel compositions. Previous remarkable works primarily addressed this issue by focusing on disentangling strategy or utilizing object-based conditional probabilities to constrain the selection space of attributes. Unfortunately, few studies have explored the problem from the perspective of modeling the mechanism of visual primitive interactions. Inspired by the success of vanilla adversarial learning in Cross-Domain Few-Shot Learning, we take a step further and devise a model-agnostic and Primitive-Based Adversarial training (PBadv) method to deal with this problem. Besides, the latest studies highlight the weakness of the perception of hard compositions even under data-balanced conditions. To this end, we propose a novel over-sampling strategy with object-similarity guidance to augment target compositional training data. We performed detailed quantitative analysis and retrieval experiments on well-established datasets, such as UT-Zappos50K, MIT-States, and C-GQA, to validate the effectiveness of our proposed method, and the state-of-the-art (SOTA) performance demonstrates the superiority of our approach. The code is available at https://github.com/lisuyi/PBadv_czsl. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 466,557 |
1805.01955 | Improve Uncertainty Estimation for Unknown Classes in Bayesian Neural
Networks with Semi-Supervised /One Set Classification | Although deep neural network (DNN) has achieved many state-of-the-art results, estimating the uncertainty presented in the DNN model and the data is a challenging task. Problems related to uncertainty such as classifying unknown classes (class which does not appear in the training data) data as known class with high confidence, is critically concerned in the safety domain area (e.g, autonomous driving, medical diagnosis). In this paper, we show that applying current Bayesian Neural Network (BNN) techniques alone does not effectively capture the uncertainty. To tackle this problem, we introduce a simple way to improve the BNN by using one class classification (in this paper, we use the term "set classification" instead). We empirically show the result of our method on an experiment which involves three datasets: MNIST, notMNIST and FMNIST. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 96,738 |
1602.03995 | An automatic method for segmentation of fission tracks in epidote
crystal photomicrographs | Manual identification of fission tracks has practical problems, such as variation due to observer-observation efficiency. An automatic processing method that could identify fission tracks in a photomicrograph could solve this problem and improve the speed of track counting. However, separation of non-trivial images is one of the most difficult tasks in image processing. Several commercial and free softwares are available, but these softwares are meant to be used in specific images. In this paper, an automatic method based on starlet wavelets is presented in order to separate fission tracks in mineral photomicrographs. Automatization is obtained by Matthews correlation coefficient, and results are evaluated by precision, recall and accuracy. This technique is an improvement of a method aimed at segmentation of scanning electron microscopy images. This method is applied in photomicrographs of epidote phenocrystals, in which accuracy higher than 89% was obtained in fission track segmentation, even for difficult images. Algorithms corresponding to the proposed method are available for download. Using the method presented here, an user could easily determine fission tracks in photomicrographs of mineral samples. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,077 |
2405.03462 | A Lightweight Neural Architecture Search Model for Medical Image
Classification | Accurate classification of medical images is essential for modern diagnostics. Deep learning advancements led clinicians to increasingly use sophisticated models to make faster and more accurate decisions, sometimes replacing human judgment. However, model development is costly and repetitive. Neural Architecture Search (NAS) provides solutions by automating the design of deep learning architectures. This paper presents ZO-DARTS+, a differentiable NAS algorithm that improves search efficiency through a novel method of generating sparse probabilities by bi-level optimization. Experiments on five public medical datasets show that ZO-DARTS+ matches the accuracy of state-of-the-art solutions while reducing search times by up to three times. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 452,192 |
1203.4764 | On the Design of a Novel Joint Network-Channel Coding Scheme for the
Multiple Access Relay Channel | This paper proposes a novel joint non-binary network-channel code for the Time-Division Decode-and-Forward Multiple Access Relay Channel (TD-DF-MARC), where the relay linearly combines -- over a non-binary finite field -- the coded sequences from the source nodes. A method based on an EXIT chart analysis is derived for selecting the best coefficients of the linear combination. Moreover, it is shown that for different setups of the system, different coefficients should be chosen in order to improve the performance. This conclusion contrasts with previous works where a random selection was considered. Monte Carlo simulations show that the proposed scheme outperforms, in terms of its gap to the outage probabilities, the previously published joint network-channel coding approaches. Besides, this gain is achieved by using very short-length codewords, which makes the scheme particularly attractive for low-latency applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,057 |
2408.00378 | A deep spatio-temporal attention model of dynamic functional network
connectivity shows sensitivity to Alzheimer's in asymptomatic individuals | Alzheimer's disease (AD) progresses from asymptomatic changes to clinical symptoms, emphasizing the importance of early detection for proper treatment. Functional magnetic resonance imaging (fMRI), particularly dynamic functional network connectivity (dFNC), has emerged as an important biomarker for AD. Nevertheless, studies probing at-risk subjects in the pre-symptomatic stage using dFNC are limited. To identify at-risk subjects and understand alterations of dFNC in different stages, we leverage deep learning advancements and introduce a transformer-convolution framework for predicting at-risk subjects based on dFNC, incorporating spatial-temporal self-attention to capture brain network dependencies and temporal dynamics. Our model significantly outperforms other popular machine learning methods. By analyzing individuals with diagnosed AD and mild cognitive impairment (MCI), we studied the AD progression and observed a higher similarity between MCI and asymptomatic AD. The interpretable analysis highlights the cognitive-control network's diagnostic importance, with the model focusing on intra-visual domain dFNC when predicting asymptomatic AD subjects. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 477,818 |
0812.2926 | New parallel programming language design: a bridge between brain models
and multi-core/many-core computers? | The recurrent theme of this paper is that sequences of long temporal patterns as opposed to sequences of simple statements are to be fed into computation devices, being them (new proposed) models for brain activity or multi-core/many-core computers. In such models, parts of these long temporal patterns are already committed while other are predicted. This combination of matching patterns and making predictions appears as a key element in producing intelligent processing in brain models and getting efficient speculative execution on multi-core/many-core computers. A bridge between these far-apart models of computation could be provided by appropriate design of massively parallel, interactive programming languages. Agapia is a recently proposed language of this kind, where user controlled long high-level temporal structures occur at the interaction interfaces of processes. In this paper Agapia is used to link HTMs brain models with TRIPS multi-core/many-core architectures. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 2,805 |
2407.00578 | UniQuad: A Unified and Versatile Quadrotor Platform Series for UAV
Research and Application | As quadrotors take on an increasingly diverse range of roles, researchers often need to develop new hardware platforms tailored for specific tasks, introducing significant engineering overhead. In this article, we introduce the UniQuad series, a unified and versatile quadrotor platform series that offers high flexibility to adapt to a wide range of common tasks, excellent customizability for advanced demands, and easy maintenance in case of crashes. This project is fully open-source at https://hkust-aerial-robotics.github.io/UniQuad. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 468,921 |
2201.10285 | Efficient Approximations of the Fisher Matrix in Neural Networks using
Kronecker Product Singular Value Decomposition | Several studies have shown the ability of natural gradient descent to minimize the objective function more efficiently than ordinary gradient descent based methods. However, the bottleneck of this approach for training deep neural networks lies in the prohibitive cost of solving a large dense linear system corresponding to the Fisher Information Matrix (FIM) at each iteration. This has motivated various approximations of either the exact FIM or the empirical one. The most sophisticated of these is KFAC, which involves a Kronecker-factored block diagonal approximation of the FIM. With only a slight additional cost, a few improvements of KFAC from the standpoint of accuracy are proposed. The common feature of the four novel methods is that they rely on a direct minimization problem, the solution of which can be computed via the Kronecker product singular value decomposition technique. Experimental results on the three standard deep auto-encoder benchmarks showed that they provide more accurate approximations to the FIM. Furthermore, they outperform KFAC and state-of-the-art first-order methods in terms of optimization speed. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 276,940 |
2405.15468 | Semantic Aware Diffusion Inverse Tone Mapping | The range of real-world scene luminance is larger than the capture capability of many digital camera sensors which leads to details being lost in captured images, most typically in bright regions. Inverse tone mapping attempts to boost these captured Standard Dynamic Range (SDR) images back to High Dynamic Range (HDR) by creating a mapping that linearizes the well exposed values from the SDR image, and provides a luminance boost to the clipped content. However, in most cases, the details in the clipped regions cannot be recovered or estimated. In this paper, we present a novel inverse tone mapping approach for mapping SDR images to HDR that generates lost details in clipped regions through a semantic-aware diffusion based inpainting approach. Our method proposes two major contributions - first, we propose to use a semantic graph to guide SDR diffusion based inpainting in masked regions in a saturated image. Second, drawing inspiration from traditional HDR imaging and bracketing methods, we propose a principled formulation to lift the SDR inpainted regions to HDR that is compatible with generative inpainting methods. Results show that our method demonstrates superior performance across different datasets on objective metrics, and subjective experiments show that the proposed method matches (and in most cases outperforms) state-of-art inverse tone mapping operators in terms of objective metrics and outperforms them for visual fidelity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 456,950 |
2312.10894 | Effectiveness of Constant Stepsize in Markovian LSA and Statistical
Inference | In this paper, we study the effectiveness of using a constant stepsize in statistical inference via linear stochastic approximation (LSA) algorithms with Markovian data. After establishing a Central Limit Theorem (CLT), we outline an inference procedure that uses averaged LSA iterates to construct confidence intervals (CIs). Our procedure leverages the fast mixing property of constant-stepsize LSA for better covariance estimation and employs Richardson-Romberg (RR) extrapolation to reduce the bias induced by constant stepsize and Markovian data. We develop theoretical results for guiding stepsize selection in RR extrapolation, and identify several important settings where the bias provably vanishes even without extrapolation. We conduct extensive numerical experiments and compare against classical inference approaches. Our results show that using a constant stepsize enjoys easy hyperparameter tuning, fast convergence, and consistently better CI coverage, especially when data is limited. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 416,353 |
1309.0270 | High-Accuracy Total Variation for Compressed Video Sensing | Numerous total variation (TV) regularizers, engaged in image restoration problem, encode the gradients by means of simple $[-1,1]$ FIR filter. Despite its low computational processing, this filter severely deviates signal's high frequency components pertinent to edge/discontinuous information and cause several deficiency issues known as texture and geometric loss. This paper addresses this problem by proposing an alternative model to the TV regularization problem via high order accuracy differential FIR filters to preserve rapid transitions in signal recovery. A numerical encoding scheme is designed to extend the TV model into multidimensional representation (tensorial decomposition). We adopt this design to regulate the spatial and temporal redundancy in compressed video sensing problem to jointly recover frames from under-sampled measurements. We then seek the solution via alternating direction methods of multipliers and find a unique solution to quadratic minimization step with capability of handling different boundary conditions. The resulting algorithm uses much lower sampling rate and highly outperforms alternative state-of-the-art methods. This is evaluated both in terms of restoration accuracy and visual quality of the recovered frames. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 26,774 |
2108.12966 | Digging into Uncertainty in Self-supervised Multi-view Stereo | Self-supervised Multi-view Stereo (MVS) with a pretext task of image reconstruction has achieved significant progress recently. However, previous methods are built upon intuitions, lacking comprehensive explanations about the effectiveness of the pretext task in self-supervised MVS. To this end, we propose to estimate epistemic uncertainty in self-supervised MVS, accounting for what the model ignores. Specifically, the limitations can be categorized into two types: ambiguous supervision in the foreground and invalid supervision in the background. To address these issues, we propose a novel Uncertainty reduction Multi-View Stereo (U-MVS) framework for self-supervised learning. To alleviate ambiguous supervision in the foreground, we involve an extra correspondence prior with a flow-depth consistency loss. The dense 2D correspondence of optical flows is used to regularize the 3D stereo correspondence in MVS. To handle the invalid supervision in the background, we use Monte-Carlo Dropout to acquire the uncertainty map and further filter the unreliable supervision signals in invalid regions. Extensive experiments on the DTU and Tanks&Temples benchmarks show that our U-MVS framework achieves the best performance among unsupervised MVS methods, with competitive performance against its supervised counterparts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 252,667
2209.06701 | Natural Language Inference Prompts for Zero-shot Emotion Classification in Text across Corpora | Within textual emotion classification, the set of relevant labels depends on the domain and application scenario and might not be known at the time of model development. This conflicts with the classical paradigm of supervised learning in which the labels need to be predefined. A solution to obtain a model with a flexible set of labels is to use the paradigm of zero-shot learning as a natural language inference task, which in addition adds the advantage of not needing any labeled training data. This raises the question of how to prompt a natural language inference model for zero-shot emotion classification. Options for prompt formulations include the emotion name anger alone or the statement "This text expresses anger". With this paper, we analyze how sensitive a natural language inference-based zero-shot-learning classifier is to such changes to the prompt under consideration of the corpus: How carefully does the prompt need to be selected? We perform experiments on an established set of emotion datasets presenting different language registers according to different sources (tweets, events, blogs) with three natural language inference models and show that indeed the choice of a particular prompt formulation needs to fit the corpus. We show that this challenge can be tackled with combinations of multiple prompts. Such an ensemble is more robust across corpora than individual prompts and shows nearly the same performance as the individual best prompt for a particular corpus. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 317,487
2206.14753 | A phase field electro-chemo-mechanical formulation for predicting void evolution at the Li-electrolyte interface in all-solid-state batteries | We present a mechanistic theory for predicting void evolution in the Li metal electrode during the charge and discharge of all-solid-state battery cells. A phase field formulation is developed to model vacancy annihilation and nucleation, and to enable the tracking of the void-Li metal interface. This is coupled with a viscoplastic description of Li deformation, to capture creep effects, and a mass transfer formulation accounting for substitutional (bulk and surface) Li diffusion and current-driven flux. Moreover, we incorporate the interaction between the electrode and the solid electrolyte, resolving the coupled electro-chemical-mechanical problem in both domains. This enables predicting the electrolyte current distribution and thus the emergence of local current 'hot spots', which act as precursors for dendrite formation and cell death. The theoretical framework is numerically implemented, and single and multiple void case studies are carried out to predict the evolution of voids and current hot spots as a function of the applied pressure, material properties and charge (magnitude and cycle history). For both plating and stripping, insight is gained into the interplay between bulk diffusion, Li dissolution and deposition, creep, and the nucleation and annihilation of vacancies. The model is shown to capture the main experimental observations, including not only key features of electrolyte current and void morphology but also the sensitivity to the applied current, the role of pressure in increasing the electrode-electrolyte contact area, and the dominance of creep over vacancy diffusion. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 305,388
1806.06798 | Implicit Policy for Reinforcement Learning | We introduce Implicit Policy, a general class of expressive policies that can flexibly represent complex action distributions in reinforcement learning, with efficient algorithms to compute entropy regularized policy gradients. We empirically show that, despite its simplicity in implementation, entropy regularization combined with a rich policy class can attain desirable properties displayed under maximum entropy reinforcement learning framework, such as robustness and multi-modality. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 100,765 |
2205.06491 | Tighter Regret Analysis and Optimization of Online Federated Learning | In federated learning (FL), it is commonly assumed that all data are placed at clients in the beginning of machine learning (ML) optimization (i.e., offline learning). However, in many real-world applications, it is expected to proceed in an online fashion. To this end, online FL (OFL) has been introduced, which aims at learning a sequence of global models from decentralized streaming data such that the so-called cumulative regret is minimized. Combining online gradient descent and model averaging, in this framework, FedOGD is constructed as the counterpart of FedSGD in FL. While it can enjoy an optimal sublinear regret, FedOGD suffers from heavy communication costs. In this paper, we present a communication-efficient method (named OFedIQ) by means of intermittent transmission (enabled by client subsampling and periodic transmission) and quantization. For the first time, we derive the regret bound that captures the impact of data-heterogeneity and the communication-efficient techniques. Through this, we efficiently optimize the parameters of OFedIQ such as sampling rate, transmission period, and quantization levels. Also, it is proved that the optimized OFedIQ can asymptotically achieve the performance of FedOGD while reducing the communication costs by 99%. Via experiments with real datasets, we demonstrate the effectiveness of the optimized OFedIQ. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,257 |
2410.07071 | Retrieval-Augmented Decision Transformer: External Memory for In-context RL | In-context learning (ICL) is the ability of a model to learn a new task by observing a few exemplars in its context. While prevalent in NLP, this capability has recently also been observed in Reinforcement Learning (RL) settings. Prior in-context RL methods, however, require entire episodes in the agent's context. Given that complex environments typically lead to long episodes with sparse rewards, these methods are constrained to simple environments with short episodes. To address these challenges, we introduce Retrieval-Augmented Decision Transformer (RA-DT). RA-DT employs an external memory mechanism to store past experiences from which it retrieves only sub-trajectories relevant for the current situation. The retrieval component in RA-DT does not require training and can be entirely domain-agnostic. We evaluate the capabilities of RA-DT on grid-world environments, robotics simulations, and procedurally-generated video games. On grid-worlds, RA-DT outperforms baselines, while using only a fraction of their context length. Furthermore, we illuminate the limitations of current in-context RL methods on complex environments and discuss future directions. To facilitate future research, we release datasets for four of the considered environments. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 496,453
2310.03897 | Break-Resilient Codes for Forensic 3D Fingerprinting | 3D printing brings about a revolution in consumption and distribution of goods, but poses a significant risk to public safety. Any individual with internet access and a commodity printer can now produce untraceable firearms, keys, and dangerous counterfeit products. To aid government authorities in combating these new security threats, objects are often tagged with identifying information. This information, also known as fingerprints, is written into the object using various bit embedding techniques, such as varying the width of the molten thermoplastic layers. Yet, due to the adversarial nature of the problem, it is important to devise tamper resilient fingerprinting techniques, so that the fingerprint could be extracted even if the object was damaged. While fingerprinting various forms of digital media (such as videos, images, etc.) has been studied extensively in the past, 3D printing is a relatively new medium which is exposed to different types of adversarial physical tampering that do not exist in the digital world. This paper focuses on one such type of adversarial tampering, where the adversary breaks the object to at most a certain number of parts. This gives rise to a new adversarial coding problem, which is formulated and investigated herein. We survey the existing technology, present an abstract problem definition, provide lower bounds for the required redundancy, and construct a code which attains it up to asymptotically small factors. Notably, the problem bears some resemblance to the torn paper channel, which was recently studied for applications in DNA storage. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 397,460 |
2401.04248 | Uniform Distribution on $(n-1)$-Sphere: Rate-Distortion under Squared Error Distortion | This paper investigates the rate-distortion function, under a squared error distortion $D$, for an $n$-dimensional random vector uniformly distributed on an $(n-1)$-sphere of radius $R$. First, an expression for the rate-distortion function is derived for any values of $n$, $D$, and $R$. Second, two types of asymptotics with respect to the rate-distortion function of a Gaussian source are characterized. More specifically, these asymptotics concern the low-distortion regime (that is, $D \to 0$) and the high-dimensional regime (that is, $n \to \infty$). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 420,371
2107.11085 | Data-driven deep density estimation | Density estimation plays a crucial role in many data analysis tasks, as it infers a continuous probability density function (PDF) from discrete samples. Thus, it is used in tasks as diverse as analyzing population data, spatial locations in 2D sensor readings, or reconstructing scenes from 3D scans. In this paper, we introduce a learned, data-driven deep density estimation (DDE) to infer PDFs in an accurate and efficient manner, while being independent of domain dimensionality or sample size. Furthermore, we do not require access to the original PDF during estimation, neither in parametric form, nor as priors, or in the form of many samples. This is enabled by training an unstructured convolutional neural network on an infinite stream of synthetic PDFs, as unbound amounts of synthetic training data generalize better across a deck of natural PDFs than any natural finite training data will do. Thus, we hope that our publicly available DDE method will be beneficial in many areas of data analysis, where continuous models are to be estimated from discrete observations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 247,499 |
2202.05689 | Conservative Extensions for Existential Rules | We study the problem to decide, given sets T1,T2 of tuple-generating dependencies (TGDs), also called existential rules, whether T2 is a conservative extension of T1. We consider two natural notions of conservative extension, one pertaining to answers to conjunctive queries over databases and one to homomorphisms between chased databases. Our main results are that these problems are undecidable for linear TGDs, undecidable for guarded TGDs even when T1 is empty, and decidable for frontier-one TGDs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | true | 279,950 |
2102.12660 | Distributionally Robust Federated Averaging | In this paper, we study communication efficient distributed algorithms for distributionally robust federated learning via periodic averaging with adaptive sampling. In contrast to standard empirical risk minimization, due to the minimax structure of the underlying optimization problem, a key difficulty arises from the fact that the global parameter that controls the mixture of local losses can only be updated infrequently on the global stage. To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulation of history gradients of the mixing parameter. We analyze the convergence rate of DRFA in both convex-linear and nonconvex-linear settings. We also generalize the proposed idea to objectives with regularization on the mixture parameter and propose a proximal variant, dubbed as DRFA-Prox, with provable convergence rates. We also analyze an alternative optimization method for regularized cases in strongly-convex-strongly-concave and non-convex (under PL condition)-strongly-concave settings. To the best of our knowledge, this paper is the first to solve distributionally robust federated learning with reduced communication, and to analyze the efficiency of local descent methods on distributed minimax problems. We give corroborating experimental evidence for our theoretical results in federated learning settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 221,803 |
2407.12669 | Enhancing the Utility of Privacy-Preserving Cancer Classification using Synthetic Data | Deep learning holds immense promise for aiding radiologists in breast cancer detection. However, achieving optimal model performance is hampered by limitations in the availability and sharing of data, commonly associated with patient privacy concerns. Such concerns are further exacerbated, as traditional deep learning models can inadvertently leak sensitive training information. This work addresses these challenges by exploring and quantifying the utility of privacy-preserving deep learning techniques, concretely, (i) differentially private stochastic gradient descent (DP-SGD) and (ii) fully synthetic training data generated by our proposed malignancy-conditioned generative adversarial network. We assess these methods via downstream malignancy classification of mammography masses using a transformer model. Our experimental results depict that synthetic data augmentation can improve privacy-utility tradeoffs in differentially private model training. Further, model pretraining on synthetic data achieves remarkable performance, which can be further increased with DP-SGD fine-tuning across all privacy guarantees. With this first in-depth exploration of privacy-preserving deep learning in breast imaging, we address current and emerging clinical privacy requirements and pave the way towards the adoption of private high-utility deep diagnostic models. Our reproducible codebase is publicly available at https://github.com/RichardObi/mammo_dp. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 474,025
2407.07295 | Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis | In medical imaging, diffusion models have shown great potential in synthetic image generation tasks. However, these models often struggle with the interpretable connections between the generated and existing images and could create illusions. To address these challenges, our research proposes a novel diffusion-based generative model based on deformation diffusion and recovery. This model, named Deformation-Recovery Diffusion Model (DRDM), diverges from traditional score/intensity and latent feature-based approaches, emphasizing morphological changes through deformation fields rather than direct image synthesis. This is achieved by introducing a topology-preserving deformation field generation method, which randomly samples and integrates a set of multi-scale Deformation Vector Fields (DVF). DRDM is trained to learn to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution. These innovations facilitate the generation of diverse and anatomically plausible deformations, enhancing data augmentation and synthesis for further analysis in downstream tasks, such as few-shot learning and image registration. Experimental results in cardiac MRI and pulmonary CT show DRDM is capable of creating diverse, large (over 10\% image size deformation scale), and high-quality (negative rate of the Jacobian matrix's determinant is lower than 1\%) deformation fields. The further experimental results in downstream tasks, 2D image segmentation and 3D image registration, indicate significant improvements resulting from DRDM, showcasing the potential of our model to advance image manipulation and synthesis in medical imaging and beyond. Project page: https://jianqingzheng.github.io/def_diff_rec/ | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,699
2102.00925 | Neural representation and generation for RNA secondary structures | Our work is concerned with the generation and targeted design of RNA, a type of genetic macromolecule that can adopt complex structures which influence their cellular activities and functions. The design of large scale and complex biological structures spurs dedicated graph-based deep generative modeling techniques, which represents a key but underappreciated aspect of computational drug discovery. In this work, we investigate the principles behind representing and generating different RNA structural modalities, and propose a flexible framework to jointly embed and generate these molecular structures along with their sequence in a meaningful latent space. Equipped with a deep understanding of RNA molecular structures, our most sophisticated encoding and decoding methods operate on the molecular graph as well as the junction tree hierarchy, integrating strong inductive bias about RNA structural regularity and folding mechanism such that high structural validity, stability and diversity of generated RNAs are achieved. Also, we seek to adequately organize the latent space of RNA molecular embeddings with regard to the interaction with proteins, and targeted optimization is used to navigate in this latent space to search for desired novel RNA molecules. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 217,958 |
1904.04307 | Word Similarity Datasets for Thai: Construction and Evaluation | Distributional semantics in the form of word embeddings are an essential ingredient to many modern natural language processing systems. The quantification of semantic similarity between words can be used to evaluate the ability of a system to perform semantic interpretation. To this end, a number of word similarity datasets have been created for the English language over the last decades. For Thai language few such resources are available. In this work, we create three Thai word similarity datasets by translating and re-rating the popular WordSim-353, SimLex-999 and SemEval-2017-Task-2 datasets. The three datasets contain 1852 word pairs in total and have different characteristics in terms of difficulty, domain coverage, and notion of similarity (relatedness vs.~similarity). These features help to gain a broader picture of the properties of an evaluated word embedding model. We include baseline evaluations with existing Thai embedding models, and identify the high ratio of out-of-vocabulary words as one of the biggest challenges. All datasets, evaluation results, and a tool for easy evaluation of new Thai embedding models are available to the NLP community online. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 126,993 |
2206.02480 | Subspace Phase Retrieval | In recent years, phase retrieval has received much attention in statistics, applied mathematics and optical engineering. In this paper, we propose an efficient algorithm, termed Subspace Phase Retrieval (SPR), which can accurately recover an $n$-dimensional $k$-sparse complex-valued signal $\mathbf{x}$ given its $\Omega(k^2\log n)$ magnitude-only Gaussian samples if the minimum nonzero entry of $\mathbf{x}$ satisfies $|x_{\min}| = \Omega(\|\mathbf{x}\|/\sqrt{k})$. Furthermore, if the energy sum of the most significant $\sqrt{k}$ elements in $\mathbf{x}$ is comparable to $\|\mathbf{x}\|^2$, the SPR algorithm can exactly recover $\mathbf{x}$ with $\Omega(k \log n)$ magnitude-only samples, which attains the information-theoretic sampling complexity for sparse phase retrieval. Numerical experiments demonstrate that the proposed algorithm achieves state-of-the-art reconstruction performance compared to existing ones. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 300,914
2212.00338 | 3D-Aware Object Goal Navigation via Simultaneous Exploration and Identification | Object goal navigation (ObjectNav) in unseen environments is a fundamental task for Embodied AI. Agents in existing works learn ObjectNav policies based on 2D maps, scene graphs, or image sequences. Considering this task happens in 3D space, a 3D-aware agent can advance its ObjectNav capability via learning from fine-grained spatial information. However, leveraging 3D scene representation can be prohibitively impractical for policy learning in this floor-level task, due to low sample efficiency and expensive computational cost. In this work, we propose a framework for the challenging 3D-aware ObjectNav based on two straightforward sub-policies. The two sub-policies, namely the corner-guided exploration policy and the category-aware identification policy, perform simultaneously, utilizing online fused 3D points as observation. Through extensive experiments, we show that this framework can dramatically improve the performance in ObjectNav through learning from 3D scene representation. Our framework achieves the best performance among all modular-based methods on the Matterport3D and Gibson datasets, while requiring (up to 30x) less computational cost for training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 334,026
2410.18458 | On maximal almost balanced non-overlapping codes and non-overlapping codes with restricted run-lengths | This paper concerns non-overlapping codes, block codes motivated by synchronisation and DNA-based storage applications. Most existing constructions of these codes do not account for the restrictions posed by the physical properties of communication channels. If undesired sequences are not avoided, the system using the encoding may start behaving incorrectly. Hence, we aim to characterise all non-overlapping codes satisfying two additional constraints. For the first constraint, where approximately half of the letters in each word are positive, we derive necessary and sufficient conditions for the code's non-expandability and improve known bounds on its maximum size. We also determine exact values for the maximum sizes of polarity-balanced non-overlapping codes having small block and alphabet sizes. For the other constraint, where long sequences of consecutive equal symbols lead to undesired behaviour, we derive bounds and constructions of constrained non-overlapping codes. Moreover, we provide constructions of non-overlapping codes that satisfy both constraints and analyse the sizes of the obtained codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 501,894
2211.15578 | Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality | Recent datasets expose the lack of the systematic generalization ability in standard sequence-to-sequence models. In this work, we analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias (i.e., a source sequence already mapped to a target sequence is less likely to be mapped to other target sequences), and the tendency to memorize whole examples rather than separating structures from contents. We propose two techniques to address these two issues respectively: Mutual Exclusivity Training that prevents the model from producing seen generations when facing novel, unseen examples via an unlikelihood-based loss; and prim2primX data augmentation that automatically diversifies the arguments of every syntactic function to prevent memorizing and provide a compositional inductive bias without exposing test-set data. Combining these two techniques, we show substantial empirical improvements using standard sequence-to-sequence models (LSTMs and Transformers) on two widely-used compositionality datasets: SCAN and COGS. Finally, we provide analysis characterizing the improvements as well as the remaining challenges, and provide detailed ablations of our method. Our code is available at https://github.com/owenzx/met-primaug | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 333,325
2312.12634 | MotionScript: Natural Language Descriptions for Expressive 3D Human Motions | This paper proposes MotionScript, a motion-to-text conversion algorithm and natural language representation for human body motions. MotionScript provides more detailed and accurate descriptions of human body movements compared to previous natural language methods. Most motion datasets focus on basic, well-defined actions, with limited variation in expression (e.g., sitting, walking, dribbling a ball). But for expressive actions that contain a diversity of movements in the class (e.g. being sad, dancing), or for actions outside the domain of standard motion capture datasets (e.g. stylistic walking, sign-language, interactions with animals), more specific and granular natural language descriptions are needed. Our proposed MotionScript descriptions differ from existing natural language representations in that it provides detailed descriptions in natural language rather than simple action labels or generalized captions. To the best of our knowledge, this is the first attempt at translating 3D motions to natural language descriptions without requiring training data. Our experiments demonstrate that MotionScript descriptions, when applied to text-to-motion tasks, enable large language models to generate complex, previously unseen motions. Additional examples, dataset, and code can be accessed at https://pjyazdian.github.io/MotionScript | false | false | false | false | true | false | false | true | true | false | false | true | false | false | false | false | false | false | 417,030
1811.00148 | Recovery Guarantees for Quadratic Tensors with Sparse Observations | We consider the tensor completion problem of predicting the missing entries of a tensor. The commonly used CP model has a triple product form, but an alternate family of quadratic models, which are the sum of pairwise products instead of a triple product, have emerged from applications such as recommendation systems. Non-convex methods are the method of choice for learning quadratic models, and this work examines their sample complexity and error guarantee. Our main result is that with the number of samples being only linear in the dimension, all local minima of the mean squared error objective are global minima and recover the original tensor. We substantiate our theoretical results with experiments on synthetic and real-world data, showing that quadratic models have better performance than CP models where there are a limited amount of observations available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 112,013 |
2410.10027 | Efficient IC-Based Solutions for Medical Devices and Automotive Radars | This thesis focuses on developing integrated circuit (IC) solutions for medical devices and automotive radars, and is divided into two main parts. Part One presents the design and evaluation of a miniaturized multi-chip module (MCM) solution intended to deliver well-defined, charge-balanced current stimuli directly to the inner ear. This section emphasizes the design of the supply chip, which includes a DC-DC converter. It involves a comprehensive study aimed at optimizing and enhancing the efficiency of the design. Part Two investigates the fundamental principles of designing millimeter-wave (mmWave) voltage-controlled oscillators (VCOs). This section introduces a VCO with state-of-the-art performance, showcasing advancements in mmWave technology. Overall, this thesis contributes to both the medical device field and automotive radar technology through innovative IC solutions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 497,879
1509.06103 | Noise Robust IOA/CAS Speech Separation and Recognition System for the Third 'CHiME' Challenge | This paper presents the contribution to the third 'CHiME' speech separation and recognition challenge, including both front-end signal processing and back-end speech recognition. In the front-end, a multi-channel Wiener filter (MWF) is designed to achieve background noise reduction. Different from the traditional MWF, an optimized parameter for the tradeoff between noise reduction and target signal distortion is set according to the desired noise reduction level. In the back-end, several techniques are exploited to improve the noisy Automatic Speech Recognition (ASR) performance, including Deep Neural Network (DNN), Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models using a medium vocabulary, lattice rescoring with a big-vocabulary language model finite state transducer, and a ROVER scheme. Experimental results show the proposed system combining front-end and back-end is effective in improving ASR performance. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 47,128 |
2009.02762 | Real-time and Large-scale Fleet Allocation of Autonomous Taxis: A Case Study in New York Manhattan Island | Nowadays, autonomous taxis are becoming a highly promising transportation mode, which helps relieve traffic congestion and avoid road accidents. However, the wide implementation of this service is hindered by traditional models' failure to efficiently allocate the available fleet to deal with the imbalance of supply (autonomous taxis) and demand (trips), the poor cooperation of taxis, hard-to-satisfy resource constraints, and online platforms' requirements. To figure out such urgent problems from a global and more farsighted view, we employ Constrained Multi-agent Markov Decision Processes (CMMDP) to model fleet allocation decisions, which can be easily split into sub-problems formulated as a 'dynamic assignment problem' combining both immediate rewards and future gains. We also leverage a column generation algorithm to guarantee efficiency and optimality at a large scale. Through extensive experiments, the proposed approach not only achieves remarkable improvements over the state-of-the-art benchmarks in terms of the individual's efficiency (arriving at 12.40% and 6.54% rises in income and utilization, respectively) and the platform's profit (reaching a 4.59% promotion) but also reveals a time-varying fleet adjustment policy to minimize the operation cost of the platform. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 194,652 |
2109.14872 | Towards Understanding Trends Manipulation in Pakistan Twitter | The rapid adoption of online social media platforms has transformed the way of communication and interaction. On these platforms, discussions in the form of trending topics provide a glimpse of events happening around the world in real-time. Also, these trends are used for political campaigns, public awareness, and brand promotions. Consequently, these trends are sensitive to manipulation by malicious users who aim to mislead the mass audience. In this article, we identify and study the characteristics of users involved in the manipulation of Twitter trends in Pakistan. We propose 'Manipify', a framework for automatic detection and analysis of malicious users for Twitter trends. Our framework consists of three distinct modules: i) user classifier, ii) hashtag classifier, and iii) trend analyzer. The user classifier introduces a novel approach to automatically detect manipulators using tweet content and user behaviour features. Also, the module classifies human and bot users. Next, the hashtag classifier categorizes trending hashtags into six categories, assisting in examining manipulators' behaviour across different categories. Finally, the trend analyzer module examines users, hashtags, and tweets for hashtag reach, linguistic features, and user behaviour. Our user classifier module achieves 0.91 accuracy in classifying the manipulators. We further test Manipify on a dataset comprising 665 trending hashtags with 5.4 million tweets and 1.9 million users. The analysis of trends reveals that the trending panel is mostly dominated by political hashtags. In addition, our results show a higher contribution of human accounts in trend manipulation as compared to bots. Furthermore, we present two case studies of hashtag-wars and anti-state propaganda to illustrate the real-world application of our research. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 258,098 |
2206.03086 | Online Deep Clustering with Video Track Consistency | Several unsupervised and self-supervised approaches have been developed in recent years to learn visual features from large-scale unlabeled datasets. Their main drawback, however, is that these methods are hardly able to recognize visual features of the same object if it is simply rotated or the perspective of the camera changes. To overcome this limitation and at the same time exploit a useful source of supervision, we take into account video object tracks. Following the intuition that two patches in a track should have similar visual representations in a learned feature space, we adopt an unsupervised clustering-based approach and constrain such representations to be labeled as the same category, since they likely belong to the same object or object part. Experimental results on two downstream tasks on different datasets demonstrate the effectiveness of our Online Deep Clustering with Video Track Consistency (ODCT) approach compared to prior work, which did not leverage temporal information. In addition, we show that exploiting an unsupervised class-agnostic, yet noisy, track generator yields better accuracy than relying on costly and precise track annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 301,146 |
2211.15513 | Composite Score for Anomaly Detection in Imbalanced Real-World Industrial Dataset | In recent years, the industrial sector has evolved towards its fourth revolution. The quality control domain is particularly interested in advanced machine learning for computer vision anomaly detection. Nevertheless, several challenges have to be faced, including imbalanced datasets, the image complexity, and the zero-false-negative (ZFN) constraint to guarantee the high-quality requirement. This paper illustrates a use case for an industrial partner, where Printed Circuit Board Assembly (PCBA) images are first reconstructed with a Vector Quantized Generative Adversarial Network (VQGAN) trained on normal products. Then, several multi-level metrics are extracted on a few normal and abnormal images, highlighting anomalies through reconstruction differences. Finally, a classifier is trained to build a composite anomaly score from the extracted metrics. This three-step approach is performed on the public MVTec-AD dataset and on the partner PCBA dataset, where it achieves a regular accuracy of 95.69% and 87.93% under the ZFN constraint. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 333,301 |
2102.12924 | Visualizing MuZero Models | MuZero, a model-based reinforcement learning algorithm that uses a value equivalent dynamics model, achieved state-of-the-art performance in Chess, Shogi and the game of Go. In contrast to standard forward dynamics models that predict a full next state, value equivalent models are trained to predict a future value, thereby emphasizing value relevant information in the representations. While value equivalent models have shown strong empirical success, there is no research yet that visualizes and investigates what types of representations these models actually learn. Therefore, in this paper we visualize the latent representation of MuZero agents. We find that action trajectories may diverge between observation embeddings and internal state transition dynamics, which could lead to instability during planning. Based on this insight, we propose two regularization techniques to stabilize MuZero's performance. Additionally, we provide an open-source implementation of MuZero along with an interactive visualizer of learned representations, which may aid further investigation of value equivalent algorithms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 221,889 |
2105.13003 | Rethinking InfoNCE: How Many Negative Samples Do You Need? | InfoNCE loss is a widely used loss function for contrastive model training. It aims to estimate the mutual information between a pair of variables by discriminating between each positive pair and its associated $K$ negative pairs. It is proved that when the sample labels are clean, the lower bound of mutual information estimation is tighter when more negative samples are incorporated, which usually yields better model performance. However, in many real-world tasks the labels often contain noise, and incorporating too many noisy negative samples for model training may be suboptimal. In this paper, we study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework. More specifically, we first propose a probabilistic model to analyze the influence of the negative sampling ratio $K$ on training sample informativeness. Then, we design a training effectiveness function to measure the overall influence of training samples on model learning based on their informativeness. We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function. Based on our framework, we further propose an adaptive negative sampling method that can dynamically adjust the negative sampling ratio to improve InfoNCE based model training. Extensive experiments on different real-world datasets show our framework can accurately predict the optimal negative sampling ratio in different tasks, and our proposed adaptive negative sampling method can achieve better performance than the commonly used fixed negative sampling ratio strategy. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 237,173 |
1206.4682 | Copula-based Kernel Dependency Measures | The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy to the copula of the joint distribution. We prove that this approach has several advantageous properties. Similarly to Shannon mutual information, the proposed dependence measure is invariant to any strictly increasing transformation of the marginal variables. This is important in many applications, for example in feature selection. The estimator is consistent, robust to outliers, and uses rank statistics only. We derive upper bounds on the convergence rate and propose independence tests too. We illustrate the theoretical contributions through a series of experiments in feature selection and low-dimensional embedding of distributions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,733 |
2305.02523 | Market Making and Pricing of Financial Derivatives based on Road Travel Times | Travel time derivatives are financial instruments that derive their value from road travel times, serving as an underlying asset that cannot be directly traded. Within the transportation domain, these derivatives are proposed as a more comprehensive approach to value pricing. They enable road pricing based not only on the level of travel time but also its volatility. In the financial market, travel time derivatives are introduced as innovative hedging instruments to mitigate market risk, particularly in light of recent stress experienced by the crypto market and traditional banking sector. The paper focuses on three main aspects: (1) the motivation behind the introduction of these derivatives, driven by the demand for hedging; (2) exploring the potential market for these instruments; and (3) delving into the product design and pricing schemes associated with them. The pricing schemes are devised by utilizing real-time travel time data captured by sensors. These data are modeled using Ornstein-Uhlenbeck processes and, more broadly, continuous time autoregressive moving average (CARMA) models. The calibration of these models is achieved through a hidden factor model, which describes the dynamics of travel time processes. The risk-neutral pricing principle is then employed to determine the prices of the derivatives, employing well-designed procedures to identify the market value of risk. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 362,070 |
2408.07712 | Introduction to Reinforcement Learning | Reinforcement Learning (RL), a subfield of Artificial Intelligence (AI), focuses on training agents to make decisions by interacting with their environment to maximize cumulative rewards. This paper provides an overview of RL, covering its core concepts, methodologies, and resources for further learning. It offers a thorough explanation of fundamental components such as states, actions, policies, and reward signals, ensuring readers develop a solid foundational understanding. Additionally, the paper presents a variety of RL algorithms, categorized by key factors such as model-free, model-based, value-based, and policy-based approaches. Resources for learning and implementing RL, such as books, courses, and online communities, are also provided. By offering a clear, structured introduction, this paper aims to simplify the complexities of RL for beginners, providing a straightforward pathway to understanding. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 480,698 |
2103.11793 | DeepOPF-V: Solving AC-OPF Problems Efficiently | AC optimal power flow (AC-OPF) problems need to be solved more frequently in the future to maintain stable and economic power system operation. To tackle this challenge, a deep neural network-based voltage-constrained approach (DeepOPF-V) is proposed to solve AC-OPF problems with high computational efficiency. Its unique design predicts voltages of all buses and then uses them to reconstruct the remaining variables without solving non-linear AC power flow equations. A fast post-processing process is developed to enforce the box constraints. The effectiveness of DeepOPF-V is validated by simulations on IEEE 118/300-bus systems and a 2000-bus test system. Compared with existing studies, DeepOPF-V achieves a computational speedup of up to four orders of magnitude and comparable performance in optimality gap while preserving the feasibility of the solution. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 225,943 |
2010.06018 | Gender Coreference and Bias Evaluation at WMT 2020 | Gender bias in machine translation can manifest when choosing gender inflections based on spurious gender correlations. For example, always translating doctors as men and nurses as women. This can be particularly harmful as models become more popular and deployed within commercial systems. Our work presents the largest evidence for the phenomenon in more than 19 systems submitted to the WMT over four diverse target languages: Czech, German, Polish, and Russian. To achieve this, we use WinoMT, a recent automatic test suite which examines gender coreference and bias when translating from English to languages with grammatical gender. We extend WinoMT to handle two new languages tested in WMT: Polish and Czech. We find that all systems consistently use spurious correlations in the data rather than meaningful contextual information. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 200,329 |
2301.00422 | Leveraging Semantic Representations Combined with Contextual Word Representations for Recognizing Textual Entailment in Vietnamese | RTE is a significant problem with a reasonably active research community. Proposed approaches to this problem are quite diverse, taking many different directions. For Vietnamese, the RTE problem is relatively new, but it plays a vital role in natural language understanding systems. Currently, methods to solve this problem based on contextual word representation learning models have given outstanding results. However, Vietnamese is a semantically rich language. Therefore, in this paper, we present an experiment combining semantic word representation through the SRL task with the context representation of BERT-related models for the RTE problem. The experimental results give conclusions about the influence and role of semantic representation on Vietnamese in understanding natural language. The experimental results show that the semantic-aware contextual representation model has about 1% higher performance than the model that does not incorporate semantic representation. In addition, the effects on the data domain in Vietnamese are also higher than those in English. This result also shows the positive influence of SRL on the RTE problem in Vietnamese. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 338,904 |
2201.08142 | Physically Embodied Deep Image Optimisation | Physical sketches are created by learning programs to control a drawing robot. A differentiable rasteriser is used to optimise sets of drawing strokes to match an input image, using deep networks to provide an encoding for which we can compute a loss. The optimised drawing primitives can then be translated into G-code commands which command a robot to draw the image using drawing instruments such as pens and pencils on a physical support medium. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 276,246 |
2406.05682 | From Basic to Extra Features: Hypergraph Transformer Pretrain-then-Finetuning for Balanced Clinical Predictions on EHR | Electronic Health Records (EHRs) contain rich patient information and are crucial for clinical research and practice. In recent years, deep learning models have been applied to EHRs, but they often rely on massive features, which may not be readily available for all patients. We propose HTP-Star, which leverages hypergraph structures with a pretrain-then-finetune framework for modeling EHR data, enabling seamless integration of additional features. Additionally, we design two techniques, namely (1) Smoothness-inducing Regularization and (2) Group-balanced Reweighting, to enhance the model's robustness during fine-tuning. Through experiments conducted on two real EHR datasets, we demonstrate that HTP-Star consistently outperforms various baselines while striking a balance between patients with basic and extra features. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 462,246 |
1801.08951 | Graph-Theoretic Framework for Unified Analysis of Observability and Data Injection Attacks in the Smart Grid | In this paper, a novel graph-theoretic framework is proposed to generalize the analysis of a broad set of security attacks, including observability and data injection attacks, that target the state estimator of a smart grid. First, the notion of observability attacks is defined based on a proposed graph-theoretic construct. In this respect, a structured approach is proposed to characterize critical sets, whose removal renders the system unobservable. It is then shown that, for the system to be observable, these critical sets must be part of a maximum matching over a proposed bipartite graph. In addition, it is shown that stealthy data injection attacks (SDIAs) constitute a special case of these observability attacks. Then, various attack strategies and defense policies, for observability and data injection attacks, are shown to be amenable to analysis using the introduced graph-theoretic framework. The proposed framework is then shown to provide a unified basis for analysis of four key security problems (among others), pertaining to the characterization of: 1) The sparsest SDIA; 2) the sparsest SDIA including a certain measurement; 3) a set of measurements which must be defended to thwart all potential SDIAs; and 4) the set of measurements, which when protected, can thwart any SDIA whose cardinality is below a certain threshold. A case study using the IEEE 14-bus system with a set of 17 measurements is used to support the theoretical findings. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | 89,016 |
2203.09690 | A$^3$T: Alignment-Aware Acoustic and Text Pretraining for Speech Synthesis and Editing | Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation. However, all the above tasks are in the direction of speech understanding; for the inverse direction, speech synthesis, the potential of representation learning is yet to be realized, due to the challenging nature of generating high-quality speech. To address this problem, we propose our framework, Alignment-Aware Acoustic-Text Pretraining (A$^3$T), which reconstructs masked acoustic signals with text input and acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied to speech editing and unseen-speaker TTS directly. Experiments show A$^3$T outperforms SOTA models on speech editing, and improves multi-speaker speech synthesis without the external speaker verification model. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 286,245 |
1803.08604 | Learning State Representations for Query Optimization with Deep Reinforcement Learning | Deep reinforcement learning is quickly changing the field of artificial intelligence. These models are able to capture a high-level understanding of their environment, enabling them to learn difficult dynamic tasks in a variety of domains. In the database field, query optimization remains a difficult problem. Our goal in this work is to explore the capabilities of deep reinforcement learning in the context of query optimization. At each state, we build queries incrementally and encode properties of subqueries through a learned representation. The challenge here lies in the formation of the state transition function, which defines how the current subquery state combines with the next query operation (action) to yield the next state. As a first step in this direction, we focus on the state representation problem and the formation of the state transition function. We describe our approach and show preliminary results. We further discuss how we can use the state representation to improve query optimization using reinforcement learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 93,293 |
1803.05963 | Studying Invariances of Trained Convolutional Neural Networks | Convolutional Neural Networks (CNNs) define an exceptionally powerful class of models for image classification, but the theoretical background and the understanding of how invariances to certain transformations are learned is limited. In a large-scale screening with images modified by different affine and non-affine transformations of varying magnitude, we analyzed the behavior of the CNN architectures AlexNet and ResNet. If the magnitude of different transformations does not exceed a class- and transformation-dependent threshold, both architectures show invariant behavior. In this work we furthermore introduce a new learnable module, the Invariant Transformer Net, which enables us to learn differentiable parameters for a set of affine transformations. This allows us to extract the space of transformations to which the CNN is invariant and its class prediction robust. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 92,744 |
1607.00968 | Full waveform inversion guided by travel time tomography | Full waveform inversion (FWI) is a process in which seismic numerical simulations are fit to observed data by changing the wave velocity model of the medium under investigation. The problem is non-linear, and therefore optimization techniques have been used to find a reasonable solution to the problem. The main problem in fitting the data is the lack of low spatial frequencies. This deficiency often leads to a local minimum and to non-plausible solutions. In this work we explore how to obtain low frequency information for FWI. Our approach involves augmenting FWI with travel time tomography, which has low-frequency features. By jointly inverting these two problems we enrich FWI with information that can replace low frequency data. In addition, we use high order regularization, in a preliminary inversion stage, to prevent high frequency features from polluting our model in the initial stages of the reconstruction. This regularization also promotes the non-dominant low-frequency modes that exist in the FWI sensitivity. By applying a joint FWI and travel time inversion we are able to obtain a smooth model that can later be used to recover a good approximation for the true model. A second contribution of this paper involves the acceleration of the main computational bottleneck in FWI--the solution of the Helmholtz equation. We show that the solution time can be reduced by solving the equation for multiple right hand sides using block multigrid preconditioned Krylov methods. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 58,158 |
2306.05584 | Multi-body SE(3) Equivariance for Unsupervised Rigid Segmentation and Motion Estimation | A truly generalizable approach to rigid segmentation and motion estimation is fundamental to 3D understanding of articulated objects and moving scenes. In view of the closely intertwined relationship between segmentation and motion estimates, we present an SE(3) equivariant architecture and a training strategy to tackle this task in an unsupervised manner. Our architecture is composed of two interconnected, lightweight heads. These heads predict segmentation masks using point-level invariant features and estimate motion from SE(3) equivariant features, all without the need for category information. Our training strategy is unified and can be implemented online, which jointly optimizes the predicted segmentation and motion by leveraging the interrelationships among scene flow, segmentation mask, and rigid transformations. We conduct experiments on four datasets to demonstrate the superiority of our method. The results show that our method excels in both model performance and computational efficiency, with only 0.25M parameters and 0.92G FLOPs. To the best of our knowledge, this is the first work designed for category-agnostic part-level SE(3) equivariance in dynamic point clouds. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 372,253 |
1402.1652 | How to Apply Assignment Methods that were Developed for Vehicular Traffic to Pedestrian Microsimulations | Applying assignment methods to compute user-equilibrium route choice is very common in traffic planning. It is common sense that vehicular traffic arranges in a user-equilibrium based on generalized costs in which travel time is a major factor. Surprisingly, travel time has not received much attention for the route choice of pedestrians. In microscopic simulations of pedestrians, the vastly dominating paradigm for the computation of the preferred walking direction is to set it into the direction of the (spatially) shortest path. For situations where pedestrians have travel time as the primary determinant of their walking behavior, it would be desirable to also have an assignment method in pedestrian simulations. To apply existing (road traffic) assignment methods with simulations of pedestrians, one has to reduce the nondenumerably many possible pedestrian trajectories to a small subset of routes which represent the main, relevant, and significantly distinguished routing alternatives. All except one of these routes will mark detours, i.e. not the shortest connection between origin and destination. The proposed assignment method is intended to work with common operational models of pedestrian dynamics. These - as mentioned before - usually send pedestrians into the direction of the spatially shortest path. Thus, all detouring routes have to be equipped with intermediate destinations, such that pedestrians can do a detour as a piecewise connection of segments on which they walk into the direction of the shortest path. One then has to take care that at the transition from one segment to the following one, no artifacts are introduced into the pedestrian trajectory. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 30,691 |
2402.14355 | Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models? | Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning. In contrast, humans convey and pass down commonsense implicitly through stories. This paper investigates the inherent commonsense ability of large language models (LLMs) expressed through storytelling. We systematically investigate and compare stories and rules for retrieving and leveraging commonsense in LLMs. Experimental results on 28 commonsense QA datasets show that stories outperform rules as the expression for retrieving commonsense from LLMs, exhibiting higher generation confidence and commonsense accuracy. Moreover, stories are the more effective commonsense expression for answering questions regarding daily events, while rules are more effective for scientific questions. This aligns with the reporting bias of commonsense in text corpora. We further show that the correctness and relevance of commonsense stories can be further improved via iterative self-supervised fine-tuning. These findings emphasize the importance of using appropriate language to express, retrieve, and leverage commonsense for LLMs, highlighting a promising direction for better exploiting their commonsense abilities. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 431,646 |
1609.05225 | Autonomous Orbit Determination via Kalman Filtering of Gravity Gradients | Spaceborne gravity gradients are proposed in this paper to provide autonomous orbit determination capabilities for near Earth satellites. The gravity gradients contain useful position information which can be extracted by matching the observations with a precise gravity model. The extended Kalman filter is investigated as the principal estimator. The stochastic model of orbital motion, the measurement equation and the model configuration are discussed for the filter design. An augmented state filter is also developed to deal with unknown significant measurement biases. Simulations are conducted to analyze the effects of initial errors, data-sampling periods, orbital heights, attitude and gradiometer noise levels, and measurement biases. Results show that the filter performs well with additive white noise observation errors. Degraded observability for the along-track position is found for the augmented state filter. Real flight data from the GOCE satellite are used to test the algorithm. Radial and cross-track position errors of less than 100 m have been achieved. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 61,096 |
2212.01027 | Progress and Challenges for the Application of Machine Learning for Neglected Tropical Diseases | Neglected tropical diseases (NTDs) continue to affect the livelihood of individuals in countries in the Southeast Asia and Western Pacific region. These diseases have been long existing and have caused devastating health problems and economic decline to people in low- and middle-income (developing) countries. An estimated 1.7 billion of the world's population suffer one or more NTDs annually, this puts approximately one in five individuals at risk for NTDs. In addition to health and social impact, NTDs inflict significant financial burden to patients, close relatives, and are responsible for billions of dollars lost in revenue from reduced labor productivity in developing countries alone. There is an urgent need to better improve the control and eradication or elimination efforts towards NTDs. This can be achieved by utilizing machine learning tools to better the surveillance, prediction and detection program, and combat NTDs through the discovery of new therapeutics against these pathogens. This review surveys the current application of machine learning tools for NTDs and the challenges to elevate the state-of-the-art of NTDs surveillance, management, and treatment. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,284 |
1610.07804 | mdBrief - A Fast Online Adaptable, Distorted Binary Descriptor for Real-Time Applications Using Calibrated Wide-Angle Or Fisheye Cameras | Fast binary descriptors build the core for many vision based applications with real-time demands like object detection, Visual Odometry or SLAM. Commonly it is assumed, that the acquired images and thus the patches extracted around keypoints originate from a perspective projection ignoring image distortion or completely different types of projections such as omnidirectional or fisheye. Usually the deviations from a perfect perspective projection are corrected by undistortion. Latter, however, introduces severe artifacts if the cameras field-of-view gets larger. In this paper, we propose a distorted and masked version of the BRIEF descriptor for calibrated cameras. Instead of correcting the distortion holistically, we distort the binary tests and thus adapt the descriptor to different image regions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 62,852 |
1907.12608 | Deep Gradient Boosting -- Layer-wise Input Normalization of Neural Networks | Stochastic gradient descent (SGD) has been the dominant optimization method for training deep neural networks due to its many desirable properties. One of the more remarkable and least understood quality of SGD is that it generalizes relatively well on unseen data even when the neural network has millions of parameters. We hypothesize that in certain cases it is desirable to relax its intrinsic generalization properties and introduce an extension of SGD called deep gradient boosting (DGB). The key idea of DGB is that back-propagated gradients inferred using the chain rule can be viewed as pseudo-residual targets of a gradient boosting problem. Thus at each layer of a neural network the weight update is calculated by solving the corresponding boosting problem using a linear base learner. The resulting weight update formula can also be viewed as a normalization procedure of the data that arrives at each layer during the forward pass. When implemented as a separate input normalization layer (INN) the new architecture shows improved performance on image recognition tasks when compared to the same architecture without normalization layers. As opposed to batch normalization (BN), INN has no learnable parameters however it matches its performance on CIFAR10 and ImageNet classification tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,151 |
2103.09906 | Releasing Locks As Early As You Can: Reducing Contention of Hotspots by Violating Two-Phase Locking (Extended Version) | Hotspots, a small set of tuples frequently read/written by a large number of transactions, cause contention in a concurrency control protocol. While a hotspot may comprise only a small fraction of a transaction's execution time, conventional strict two-phase locking allows a transaction to release lock only after the transaction completes, which leaves significant parallelism unexploited. Ideally, a concurrency control protocol serializes transactions only for the duration of the hotspots, rather than the duration of transactions. We observe that exploiting such parallelism requires violating two-phase locking. In this paper, we propose Bamboo, a new concurrency control protocol that can enable such parallelism by modifying the conventional two-phase locking, while maintaining the same guarantees in correctness. We thoroughly analyzed the effect of cascading aborts involved in reading uncommitted data and discussed optimizations that can be applied to further improve the performance. Our evaluation on TPC-C shows a performance improvement up to 4x compared to the best of pessimistic and optimistic baseline protocols. On synthetic workloads that contain a single hotspot, Bamboo achieves a speedup up to 19x over baselines. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 225,282 |
2306.06344 | Language-Guided Traffic Simulation via Scene-Level Diffusion | Realistic and controllable traffic simulation is a core capability that is necessary to accelerate autonomous vehicle (AV) development. However, current approaches for controlling learning-based traffic models require significant domain expertise and are difficult for practitioners to use. To remedy this, we present CTG++, a scene-level conditional diffusion model that can be guided by language instructions. Developing this requires tackling two challenges: the need for a realistic and controllable traffic model backbone, and an effective method to interface with a traffic model using language. To address these challenges, we first propose a scene-level diffusion model equipped with a spatio-temporal transformer backbone, which generates realistic and controllable traffic. We then harness a large language model (LLM) to convert a user's query into a loss function, guiding the diffusion model towards query-compliant generation. Through comprehensive evaluation, we demonstrate the effectiveness of our proposed method in generating realistic, query-compliant traffic simulations. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 372,566 |