id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1808.10245 | Comparative Studies of Detecting Abusive Language on Twitter | The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, Hate and Abusive Speech on Twitter, a dataset much greater in size and reliability, has been released. However, this dataset has not yet been studied to its full potential. In this paper, we conduct the first comparative study of various learning models on Hate and Abusive Speech on Twitter, and discuss the possibility of using additional features and context data for improvements. Experimental results show that bidirectional GRU networks trained on word-level features, with Latent Topic Clustering modules, are the most accurate models, scoring 0.805 F1. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 106,354 |
1810.00113 | Predicting the Generalization Gap in Deep Networks with Margin Distributions | As shown in recent research, deep neural networks can perfectly fit randomly labeled data, but with very poor accuracy on held-out data. This phenomenon indicates that loss functions such as cross-entropy are not a reliable indicator of generalization. This leads to the crucial question of how the generalization gap should be predicted from the training data and network parameters. In this paper, we propose such a measure, and conduct extensive empirical studies on how well it can predict the generalization gap. Our measure is based on the concept of margin distribution, i.e., the distances of training points to the decision boundary. We find that it is necessary to use margin distributions at multiple layers of a deep network. On the CIFAR-10 and the CIFAR-100 datasets, our proposed measure correlates very strongly with the generalization gap. In addition, we find the following other factors to be of importance: normalizing margin values for scale independence, using characterizations of the margin distribution rather than just the margin (closest distance to the decision boundary), and working in log space instead of linear space (effectively using a product of margins rather than a sum). Our measure can be easily applied to feedforward deep networks with any architecture and may point towards new training loss functions that could enable better generalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 109,094 |
2404.10290 | NeuroMorphix: A Novel Brain MRI Asymmetry-specific Feature Construction Approach For Seizure Recurrence Prediction | Seizure recurrence is an important concern after an initial unprovoked seizure; without drug treatment, it occurs within 2 years in 40-50% of cases. The decision to treat currently relies on predictors of seizure recurrence risk that are inaccurate, resulting in unnecessary, possibly harmful, treatment in some patients and potentially preventable seizures in others. Because of the link between brain lesions and seizure recurrence, we developed a recurrence prediction tool using machine learning and clinical 3T brain MRI. We developed NeuroMorphix, a feature construction approach based on MRI brain anatomy. Each of seven NeuroMorphix features measures the absolute or relative difference between corresponding regions in each cerebral hemisphere. FreeSurfer was used to segment brain regions and to generate values for morphometric parameters (8 for each cortical region and 5 for each subcortical region). The parameters were then mapped to whole brain NeuroMorphix features, yielding a total of 91 features per subject. Features were generated for a first seizure patient cohort (n = 169) categorised into seizure recurrence and non-recurrence subgroups. State-of-the-art classification algorithms were trained and tested using NeuroMorphix features to predict seizure recurrence. Classification models using the top 5 features, ranked by sequential forward selection, demonstrated excellent performance in predicting seizure recurrence, with area under the ROC curve of 88-93%, accuracy of 83-89%, and F1 score of 83-90%. Highly ranked features aligned with structural alterations known to be associated with epilepsy. This study highlights the potential for targeted, data-driven approaches to aid clinical decision-making in brain disorders. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 447,030 |
2401.08520 | SecPLF: Secure Protocols for Loanable Funds against Oracle Manipulation Attacks | The evolving landscape of Decentralized Finance (DeFi) has raised critical security concerns, especially pertaining to Protocols for Loanable Funds (PLFs) and their dependency on price oracles, which are susceptible to manipulation. The emergence of flash loans has further amplified these risks, enabling increasingly complex oracle manipulation attacks that can lead to significant financial losses. Responding to this threat, we first dissect the attack mechanism by formalizing the standard operational and adversary models for PLFs. Based on our analysis, we propose SecPLF, a robust and practical solution designed to counteract oracle manipulation attacks efficiently. SecPLF operates by tracking a price state for each crypto-asset, including the recent price and the timestamp of its last update. By imposing price constraints on the price oracle usage, SecPLF ensures a PLF only engages a price oracle if the last recorded price falls within a defined threshold, thereby negating the profitability of potential attacks. Our evaluation based on historical market data confirms SecPLF's efficacy in providing high-confidence prevention against arbitrage attacks that arise due to minor price differences. SecPLF delivers proactive protection against oracle manipulation attacks, offering ease of implementation, oracle-agnostic property, and resource and cost efficiency. | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | 421,920 |
2403.04153 | Designing Social Robots that Engage Older Adults in Exercise: A Case Study | We present and evaluate a prototype social robot to encourage daily exercise among older adults in a home setting. Our prototype system, designed to lead users through exercise sessions with motivational feedback, was assessed through a case study with a 78-year-old participant for one week. Our case study highlighted preferences for greater user control over exercise choices and questioned the necessity of precise motion tracking. Feedback also indicated a desire for more varied exercises and suggested improvements in user engagement techniques. The insights suggest that further research is needed to enhance system adaptability and effectiveness to better promote daily exercise. Future efforts will aim to refine the prototype based on participant feedback and extend the evaluation to broader in-home deployments. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 435,479 |
2401.12803 | Enhancements for 5G NR PRACH Reception: An AI/ML Approach | Random Access is an important step in enabling the initial attachment of a User Equipment (UE) to a Base Station (gNB). The UE identifies itself by embedding a Preamble Index (RAPID) in the phase rotation of a known base sequence, which it transmits on the Physical Random Access Channel (PRACH). The signal on the PRACH also enables the estimation of propagation delay, often known as Timing Advance (TA), which is induced by virtue of the UE's position. Traditional receivers estimate the RAPID and TA using correlation-based techniques. This paper presents an alternative receiver approach that uses AI/ML models, wherein two neural networks are proposed, one for the RAPID and one for the TA. Different from other works, these two models can run in parallel as opposed to sequentially. Experiments with both simulated data and over-the-air hardware captures highlight the improved performance of the proposed AI/ML-based techniques compared to conventional correlation methods. | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | false | 423,502 |
2309.17284 | Differentially Private Computation of Basic Reproduction Numbers in Networked Epidemic Models | The basic reproduction number of a networked epidemic model, denoted $R_0$, can be computed from a network's topology to quantify epidemic spread. However, disclosure of $R_0$ risks revealing sensitive information about the underlying network, such as an individual's relationships within a social network. Therefore, we propose a framework to compute and release $R_0$ in a differentially private way. First, we provide a new result that shows how $R_0$ can be used to bound the level of penetration of an epidemic within a single community as a motivation for the need of privacy, which may also be of independent interest. We next develop a privacy mechanism to formally safeguard the edge weights in the underlying network when computing $R_0$. Then we formalize tradeoffs between the level of privacy and the accuracy of values of the privatized $R_0$. To show the utility of the private $R_0$ in practice, we use it to bound this level of penetration under privacy, and concentration bounds on these analyses show they remain accurate with privacy implemented. We apply our results to real travel data gathered during the spread of COVID-19, and we show that, under real-world conditions, we can compute $R_0$ in a differentially private way while incurring errors as low as $7.6\%$ on average. | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | 395,713 |
2207.10653 | RepFair-GAN: Mitigating Representation Bias in GANs Using Gradient Clipping | Fairness has become an essential problem in many domains of Machine Learning (ML), such as classification, natural language processing, and Generative Adversarial Networks (GANs). In this research effort, we study the unfairness of GANs. We formally define a new fairness notion for generative models in terms of the distribution of generated samples sharing the same protected attributes (gender, race, etc.). The defined fairness notion (representational fairness) requires the distribution of the sensitive attributes at test time to be uniform, and, in particular for GAN models, we show that this fairness notion is violated even when the dataset contains equally represented groups, i.e., the generator favors generating one group of samples over the others at test time. In this work, we shed light on the source of this representation bias in GANs along with a straightforward method to overcome this problem. We first show on two widely used datasets (MNIST, SVHN) that when the gradient norm of one group is larger than that of the other during the discriminator's training, the generator favors sampling data from one group more than the other at test time. We then show that controlling the groups' gradient norms by performing group-wise gradient norm clipping in the discriminator during training leads to fairer data generation in terms of representational fairness compared to existing models, while preserving the quality of generated samples. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 309,334 |
2112.13556 | Towards Personalized Answer Generation in E-Commerce via Multi-Perspective Preference Modeling | Recently, Product Question Answering (PQA) on E-Commerce platforms has attracted increasing attention as it can act as an intelligent online shopping assistant and improve the customer shopping experience. Its key function, automatic answer generation for product-related questions, has been studied with the aim of generating answers that are both content-preserving and question-related. However, an important characteristic of PQA, i.e., personalization, is neglected by existing methods. It is insufficient to provide the same "completely summarized" answer to all customers, since many customers are more willing to see personalized answers with customized information only for themselves, taking into consideration their own preferences towards product aspects or information needs. To tackle this challenge, we propose a novel Personalized Answer GEneration method (PAGE) with multi-perspective preference modeling, which explores historical user-generated contents to model user preferences for generating personalized answers in PQA. Specifically, we first retrieve question-related user history as external knowledge to model knowledge-level user preference. Then we leverage a Gaussian Softmax distribution model to capture latent aspect-level user preference. Finally, we develop a persona-aware pointer network to generate personalized answers in terms of both content and style by utilizing personal user preference and a dynamic user vocabulary. Experimental results on real-world E-Commerce QA datasets demonstrate that the proposed method outperforms existing methods by generating informative and customized answers, and show that answer generation in E-Commerce can benefit from personalization. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 273,285 |
2205.12450 | Cross-Domain Style Mixing for Face Cartoonization | The cartoon domain has recently gained increasing popularity. Previous studies have attempted high-quality portrait stylization into the cartoon domain; however, this poses a great challenge since they have not properly addressed critical constraints, such as the need for a large number of training images or the lack of support for abstract cartoon faces. Recently, a layer swapping method has been used for stylization requiring only a limited number of training images; however, its use cases are still narrow as it inherits the remaining issues. In this paper, we propose a novel method called Cross-domain Style mixing, which combines two latent codes from two different domains. Our method effectively stylizes faces into multiple cartoon characters at various face abstraction levels using only a single generator, without even using a large number of training images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 298,541 |
1708.03917 | About renegades and outgroup-haters: Modelling the link between social influence and intergroup attitudes | Polarization between groups is a major topic of contemporary societal debate as well as of research into intergroup relations. Formal modelers of opinion dynamics try to explain how intergroup polarization can arise from simple first principles of interactions within and between groups. Models have been proposed in which intergroup attitudes affect social influence in the form of homophily or xenophobia, elaborated as fixed tendencies of individuals to interact more with in-group members, be more open to influence from in-group members, and perhaps even distance themselves from the attitudes of outgroup members. While these models can generate polarization between groups, their underlying assumptions curiously neglect a central insight from research on intergroup attitudes: intergroup attitudes are themselves subject to social influence in interactions with both in- and outgroup members. I extend an existing model of opinion formation with intergroup attitudes by adding this feedback effect. I show how this changes model predictions about the process and the conditions of polarization between groups. In particular, it is demonstrated how the model implies that intergroup polarization can become less likely if intergroup attitudes change under social influence, and how more complex patterns of intergroup relations emerge. Notably, a renegade minority (outgroup-lovers) can play a key role in avoiding mutually negative intergroup relations and even elicit attitude reversal, resulting in a majority of individuals developing a negative attitude towards their in-group and a positive one towards the outgroup. Interpretations of these theoretical results and directions for future research are further discussed. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 78,846 |
1809.03368 | Probabilistic Binary Neural Networks | Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Networks with both binary weights and activations, called BLRNet. By embracing stochasticity during training, we circumvent the need to approximate the gradient of non-differentiable functions such as sign(), while still obtaining a fully Binary Neural Network at test time. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since all operations in a layer of the BLRNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate the BLRNet on multiple standardized benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 107,309 |
2010.05508 | Implicit Subspace Prior Learning for Dual-Blind Face Restoration | Face restoration is an inherently ill-posed problem, where additional prior constraints are typically considered crucial for mitigating such pathology. However, real-world image priors are often hard to simulate with precise mathematical models, which inevitably limits the performance and generalization ability of existing prior-regularized restoration methods. In this paper, we study the problem of face restoration under a more practical "dual-blind" setting, i.e., without prior assumptions or hand-crafted regularization terms on the degradation profile or image contents. To this end, a novel implicit subspace prior learning (ISPL) framework is proposed as a generic solution to dual-blind face restoration, with two key elements: 1) an implicit formulation to circumvent the ill-defined restoration mapping and 2) a subspace prior decomposition and fusion mechanism to dynamically handle inputs at varying degradation levels with consistent high-quality restoration results. Experimental results demonstrate significant perception-distortion improvement of ISPL against existing state-of-the-art methods for a variety of restoration subtasks, including a 3.69 dB PSNR and 45.8% FID gain against ESRGAN, the 2018 NTIRE SR challenge winner. Overall, we prove that it is possible to capture and utilize prior knowledge without explicitly formulating it, which will help inspire new research paradigms towards low-level vision tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 200,162 |
2008.02217 | Hopfield Networks is All You Need | We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: https://github.com/ml-jku/hopfield-layers | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 190,560 |
1803.10910 | Deep Unsupervised Saliency Detection: A Multiple Noisy Labeling Perspective | The success of current deep saliency detection methods heavily depends on the availability of large-scale supervision in the form of per-pixel labeling. Such supervision, while labor-intensive and not always possible, tends to hinder the generalization ability of the learned models. By contrast, traditional handcrafted-feature-based unsupervised saliency detection methods, even though they have been surpassed by deep supervised methods, are generally dataset-independent and could be applied in the wild. This raises a natural question: "Is it possible to learn saliency maps without using labeled data while improving the generalization ability?". To this end, we present a novel perspective on unsupervised saliency detection through learning from multiple noisy labelings generated by "weak" and "noisy" unsupervised handcrafted saliency methods. Our end-to-end deep learning framework for unsupervised saliency detection consists of a latent saliency prediction module and a noise modeling module that work collaboratively and are optimized jointly. Explicit noise modeling enables us to deal with noisy saliency maps in a probabilistic way. Extensive experimental results on various benchmarking datasets show that our model not only outperforms all the unsupervised saliency methods by a large margin but also achieves comparable performance with the recent state-of-the-art supervised deep saliency methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,787 |
2305.05403 | Completeness, Recall, and Negation in Open-World Knowledge Bases: A Survey | General-purpose knowledge bases (KBs) are a cornerstone of knowledge-centric AI. Many of them are constructed pragmatically from Web sources, and are thus far from complete. This poses challenges for the consumption as well as the curation of their content. While several surveys target the problem of completing incomplete KBs, the first problem is arguably to know whether and where the KB is incomplete in the first place, and to which degree. In this survey we discuss how knowledge about completeness, recall, and negation in KBs can be expressed, extracted, and inferred. We cover (i) the logical foundations of knowledge representation and querying under partial closed-world semantics; (ii) the estimation of this information via statistical patterns; (iii) the extraction of information about recall from KBs and text; (iv) the identification of interesting negative statements; and (v) relaxed notions of relative recall. This survey is targeted at two types of audiences: (1) practitioners who are interested in tracking KB quality, focusing extraction efforts, and building quality-aware downstream applications; and (2) data management, knowledge base and semantic web researchers who wish to understand the state of the art of knowledge bases beyond the open-world assumption. Consequently, our survey presents both fundamental methodologies and their working, and gives practice-oriented recommendations on how to choose between different approaches for a problem at hand. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | true | true | 363,137 |
2006.16719 | Gaussian Process Repetitive Control for Suppressing Spatial Disturbances | Motion systems are often subject to disturbances such as cogging, commutation errors, and imbalances, that vary with velocity and appear periodic in time for constant operating velocities. The aim of this paper is to develop a repetitive controller (RC) for disturbances that are not periodic in the time domain, yet occur due to an identical position-domain disturbance. A new spatial RC framework is developed, allowing attenuation of disturbances that are periodic in the position domain but appear aperiodic in the time domain. A Gaussian process (GP) based memory is employed with a suitable periodic kernel that can effectively deal with the intermittent observations inherent to the position domain. A mechatronic example confirms the potential of the method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 184,890 |
2410.14281 | PLMTrajRec: A Scalable and Generalizable Trajectory Recovery Method with Pre-trained Language Models | Spatiotemporal trajectory data is crucial for various applications. However, issues such as device malfunctions and network instability often cause sparse trajectories, leading to the loss of detailed movement information. Recovering the missing points in sparse trajectories to restore the detailed information is thus essential. Despite recent progress, several challenges remain. First, the lack of large-scale dense trajectory data makes it difficult to train a trajectory recovery model from scratch. Second, the varying spatiotemporal correlations in sparse trajectories make it hard to generalize recovery across different sampling intervals. Third, the lack of location information complicates the extraction of road conditions for missing points. To address these challenges, we propose a novel trajectory recovery model called PLMTrajRec. It leverages the scalability of a pre-trained language model (PLM) and can be fine-tuned with only a limited set of dense trajectories. To handle different sampling intervals in sparse trajectories, we first convert each trajectory's sampling interval and movement features into natural language representations, allowing the PLM to recognize its interval. We then introduce a trajectory encoder to unify trajectories of varying intervals into a single interval and capture their spatiotemporal relationships. To obtain road conditions for missing points, we propose an area flow-guided implicit trajectory prompt, which models road conditions by collecting traffic flows in each region. We also introduce a road condition passing mechanism that uses observed points' road conditions to infer those of the missing points. Experiments on two public trajectory datasets with three sampling intervals each demonstrate the effectiveness, scalability, and generalization ability of PLMTrajRec. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 499,970 |
1901.11078 | Real-world Mapping of Gaze Fixations Using Instance Segmentation for Road Construction Safety Applications | Research studies have shown that a large proportion of hazards remain unrecognized, exposing construction workers to unanticipated safety risks. Recent studies have also found that a strong correlation exists between the viewing patterns of workers, captured using eye-tracking devices, and their hazard recognition performance. Therefore, it is important to analyze the viewing patterns of workers to gain a better understanding of their hazard recognition performance. This paper proposes a method that can automatically map the gaze fixations collected using a wearable eye-tracker to predefined areas of interest. The proposed method detects these areas or objects of interest (i.e., hazards) through a computer vision-based segmentation technique and transfer learning. The mapped fixation data is then used to analyze the viewing behaviors of workers and compute their attention distribution. The proposed method is implemented on a road under construction as a case study to evaluate its performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 120,160 |
2412.11119 | Impact of Adversarial Attacks on Deep Learning Model Explainability | In this paper, we investigate the impact of adversarial attacks on the explainability of deep learning models, which are commonly criticized for their black-box nature despite their capacity for autonomous feature extraction. This black-box nature can affect the perceived trustworthiness of these models. To address this, explainability techniques such as GradCAM, SmoothGrad, and LIME have been developed to clarify model decision-making processes. Our research focuses on the robustness of these explanations when models are subjected to adversarial attacks, specifically those involving subtle image perturbations that are imperceptible to humans but can significantly mislead models. For this, we utilize attack methods like the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and observe their effects on model accuracy and explanations. The results reveal a substantial decline in model accuracy, with accuracies dropping from 89.94% to 58.73% and 45.50% under FGSM and BIM attacks, respectively. Despite these declines in accuracy, the explanation of the models measured by metrics such as Intersection over Union (IoU) and Root Mean Square Error (RMSE) shows negligible changes, suggesting that these metrics may not be sensitive enough to detect the presence of adversarial perturbations. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 517,264 |
2412.04285 | Deep Causal Inference for Point-referenced Spatial Data with Continuous Treatments | Causal reasoning is often challenging with spatial data, particularly when handling high-dimensional inputs. To address this, we propose a neural network (NN) based framework integrated with an approximate Gaussian process to manage spatial interference and unobserved confounding. Additionally, we adopt a generalized propensity-score-based approach to address partially observed outcomes when estimating causal effects with continuous treatments. We evaluate our framework using synthetic, semi-synthetic, and real-world data inferred from satellite imagery. Our results demonstrate that NN-based models significantly outperform linear spatial regression models in estimating causal effects. Furthermore, in real-world case studies, NN-based models offer more reasonable predictions of causal effects, facilitating decision-making in relevant applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 514,341 |
1911.11427 | Adaptive Frequency-limited H2-Model Order Reduction | In this paper, we present an adaptive framework for constructing a pseudo-optimal reduced model for the frequency-limited H2-optimal model order reduction problem. We show that the frequency-limited pseudo-optimal reduced-order model has an inherent property of monotonic decay in error if the interpolation points and tangential directions are selected appropriately. We also show that this property can be used to make an automatic selection of the order of the reduced model for an allowable tolerance in error. The proposed algorithm adaptively increases the order of the reduced model such that the frequency-limited H2-norm error decays monotonically irrespective of the choice of interpolation points and tangential directions. The stability of the reduced-order model is also guaranteed. Additionally, it generates approximations of the frequency-limited system Gramians that monotonically approach the original solution. Further, we show that the low-rank alternating direction implicit iteration method for solving large-scale frequency-limited Lyapunov equations implicitly performs frequency-limited pseudo-optimal model order reduction. We consider two numerical examples to validate the theory presented in the paper. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 155,125
2402.13321 | Rigor with Machine Learning from Field Theory to the Poincar\'e
Conjecture | Machine learning techniques are increasingly powerful, leading to many breakthroughs in the natural sciences, but they are often stochastic, error-prone, and blackbox. How, then, should they be utilized in fields such as theoretical physics and pure mathematics that place a premium on rigor and understanding? In this Perspective we discuss techniques for obtaining rigor in the natural sciences with machine learning. Non-rigorous methods may lead to rigorous results via conjecture generation or verification by reinforcement learning. We survey applications of these techniques-for-rigor ranging from string theory to the smooth $4$d Poincar\'e conjecture in low-dimensional topology. One can also imagine building direct bridges between machine learning theory and either mathematics or theoretical physics. As examples, we describe a new approach to field theory motivated by neural network theory, and a theory of Riemannian metric flows induced by neural network gradient descent, which encompasses Perelman's formulation of the Ricci flow that was utilized to resolve the $3$d Poincar\'e conjecture. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 431,199 |
2211.17015 | Explaining automated gender classification of human gait | State-of-the-art machine learning (ML) models are highly effective in classifying gait analysis data; however, they fail to provide explanations for their predictions. This "black-box" characteristic makes it impossible to understand which input patterns ML models base their predictions on. The present study investigates whether Explainable Artificial Intelligence methods, i.e., Layer-wise Relevance Propagation (LRP), can be useful to enhance the explainability of ML predictions in gait classification. The research question was: Which input patterns are most relevant for an automated gender classification model and do they correspond to characteristics identified in the literature? We utilized a subset of the GAITREC dataset containing five bilateral ground reaction force (GRF) recordings per person during barefoot walking of 62 healthy participants: 34 females and 28 males. Each input signal (right and left side) was min-max normalized before concatenation and fed into a multi-layer Convolutional Neural Network (CNN). The classification accuracy was obtained over a stratified ten-fold cross-validation. To identify gender-specific patterns, the input relevance scores were derived using LRP. The mean classification accuracy of the CNN with 83.3% showed a clear superiority over the zero-rule baseline of 54.8%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,829
2108.04418 | Knowledge Enhanced Multi-modal Fake News Detection | Recent years have witnessed the significant damage caused by various types of fake news. Although considerable effort has been applied to address this issue and much progress has been made on detecting fake news, most existing approaches mainly rely on the textual content and/or social context, while knowledge-level information---entities extracted from the news content and the relations between them---is much less explored. Within the limited work on knowledge-based fake news detection, an external knowledge graph is often required, which may introduce additional problems: it is quite common for entities and relations, especially with respect to new concepts, to be missing in existing knowledge graphs, and both entity prediction and link prediction are open research questions themselves. Therefore, in this work, we investigate \textbf{knowledge-based fake news detection that does not require any external knowledge graph.} Specifically, our contributions include: (1) transforming the problem of detecting fake news into a subgraph classification task---entities and relations are extracted from each news item to form a single knowledge graph, where a news item is represented by a subgraph. Then a graph neural network (GNN) model is trained to classify each subgraph/news item. (2) Further improving the performance of this model through a simple but effective multi-modal technique that combines extracted knowledge, textual content and social context. Experiments on multiple datasets with thousands of labelled news items demonstrate that our knowledge-based algorithm outperforms existing counterpart methods, and its performance can be further boosted by the multi-modal approach. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 249,992 |
2208.14037 | Towards Artificial Virtuous Agents: Games, Dilemmas and Machine Learning | Machine ethics has received increasing attention over the past few years because of the need to ensure safe and reliable artificial intelligence (AI). The two dominantly used theories in machine ethics are deontological and utilitarian ethics. Virtue ethics, on the other hand, has often been mentioned as an alternative ethical theory. While this interesting approach has certain advantages over popular ethical theories, little effort has been put into engineering artificial virtuous agents due to challenges in their formalization, codifiability, and the resolution of ethical dilemmas to train virtuous agents. We propose to bridge this gap by using role-playing games riddled with moral dilemmas. There are several such games in existence, such as Papers, Please and Life is Strange, where the main character encounters situations where they must choose the right course of action by giving up something else dear to them. We draw inspiration from such games to show how a systemic role-playing game can be designed to develop virtues within an artificial agent. Using modern day AI techniques, such as affinity-based reinforcement learning and explainable AI, we motivate the implementation of virtuous agents that play such role-playing games, and the examination of their decisions through a virtue ethical lens. The development of such agents and environments is a first step towards practically formalizing and demonstrating the value of virtue ethics in the development of ethical agents. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 315,204 |
2205.00897 | Fast Continuous and Integer L-shaped Heuristics Through Supervised
Learning | We propose a methodology at the nexus of operations research and machine learning (ML) leveraging generic approximators available from ML to accelerate the solution of mixed-integer linear two-stage stochastic programs. We aim at solving problems where the second stage is highly demanding. Our core idea is to gain large reductions in online solution time while incurring small reductions in first-stage solution accuracy by substituting the exact second-stage solutions with fast, yet accurate supervised ML predictions. This upfront investment in ML would be justified when similar problems are solved repeatedly over time, for example, in transport planning related to fleet management, routing and container yard management. Our numerical results focus on the problem class seminally addressed with the integer and continuous L-shaped cuts. Our extensive empirical analysis is grounded in standardized families of problems derived from stochastic server location (SSLP) and stochastic multi knapsack (SMKP) problems available in the literature. The proposed method can solve the hardest instances of SSLP in less than 9% of the time it takes the state-of-the-art exact method, and in the case of SMKP the same figure is 20%. Average optimality gaps are in most cases less than 0.1%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 294,417 |
2411.13509 | Degenerate quantum erasure decoding | Erasures are the primary type of errors in physical systems dominated by leakage errors. While quantum error correction (QEC) using stabilizer codes can combat these errors, the question of achieving near-capacity performance with explicit codes and efficient decoders remains a challenge. Quantum decoding is a classical computational problem that decides what the recovery operation should be based on the measured syndromes. For QEC, using an accurate decoder with the shortest possible runtime will minimize the degradation of quantum information while awaiting the decoder's decision. We examine the quantum erasure decoding problem for general stabilizer codes and present decoders that not only run in linear time but are also accurate. We achieve this by exploiting the symmetry of degenerate errors. Numerical evaluations show near maximum-likelihood decoding for various codes, achieving capacity performance with topological codes and near-capacity performance with non-topological codes. We furthermore explore the potential of our decoders to handle other error models, such as mixed erasure and depolarizing errors, and also local deletion errors via concatenation with permutation invariant codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 509,805
2308.10847 | Evaluating quantum generative models via imbalanced data classification
benchmarks | A limited set of tools exist for assessing whether the behavior of quantum machine learning models diverges from conventional models, outside of abstract or theoretical settings. We present a systematic application of explainable artificial intelligence techniques to analyze synthetic data generated from a hybrid quantum-classical neural network adapted from twenty different real-world data sets, including solar flares, cardiac arrhythmia, and speech data. Each of these data sets exhibits varying degrees of complexity and class imbalance. We benchmark the quantum-generated data relative to state-of-the-art methods for mitigating class imbalance for associated classification tasks. We leverage this approach to elucidate the qualities of a problem that make it more or less likely to be amenable to a hybrid quantum-classical generative model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 386,909 |
2305.17588 | Diagnosing Transformers: Illuminating Feature Spaces for Clinical
Decision-Making | Pre-trained transformers are often fine-tuned to aid clinical decision-making using limited clinical notes. Model interpretability is crucial, especially in high-stakes domains like medicine, to establish trust and ensure safety, which requires human engagement. We introduce SUFO, a systematic framework that enhances interpretability of fine-tuned transformer feature spaces. SUFO utilizes a range of analytic and visualization techniques, including Supervised probing, Unsupervised similarity analysis, Feature dynamics, and Outlier analysis to address key questions about model trust and interpretability. We conduct a case study investigating the impact of pre-training data where we focus on real-world pathology classification tasks, and validate our findings on MedNLI. We evaluate five 110M-sized pre-trained transformer models, categorized into general-domain (BERT, TNLR), mixed-domain (BioBERT, Clinical BioBERT), and domain-specific (PubMedBERT) groups. Our SUFO analyses reveal that: (1) while PubMedBERT, the domain-specific model, contains valuable information for fine-tuning, it can overfit to minority classes when class imbalances exist. In contrast, mixed-domain models exhibit greater resistance to overfitting, suggesting potential improvements in domain-specific model robustness; (2) in-domain pre-training accelerates feature disambiguation during fine-tuning; and (3) feature spaces undergo significant sparsification during this process, enabling clinicians to identify common outlier modes among fine-tuned models as demonstrated in this paper. These findings showcase the utility of SUFO in enhancing trust and safety when using transformers in medicine, and we believe SUFO can aid practitioners in evaluating fine-tuned language models for other applications in medicine and in more critical domains. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 368,663
2308.00325 | Informative Path Planning of Autonomous Vehicle for Parking Occupancy
Estimation | Parking occupancy estimation holds significant potential in facilitating parking resource management and mitigating traffic congestion. Existing approaches employ robotic systems to detect the occupancy status of individual parking spaces and primarily focus on enhancing detection accuracy through perception pipelines. However, these methods often overlook the crucial aspect of robot path planning, which can hinder the accurate estimation of the entire parking area. In light of these limitations, we introduce the problem of informative path planning for parking occupancy estimation using autonomous vehicles and formulate it as a Partially Observable Markov Decision Process (POMDP) task. Then, we develop an occupancy state transition model and introduce a Bayes filter to estimate occupancy based on noisy sensor measurements. Subsequently, we propose the Monte Carlo Bayes Filter Tree, a computationally efficient algorithm that leverages progressive widening to generate informative paths. We demonstrate that the proposed approach outperforms the benchmark methods in diverse simulation environments, effectively striking a balance between optimality and computational efficiency. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 382,903 |
2308.04269 | Lossy and Lossless (L$^2$) Post-training Model Size Compression | Deep neural networks have delivered remarkable performance and have been widely used in various visual tasks. However, their huge size causes significant inconvenience for transmission and storage. Many previous studies have explored model size compression. However, these studies often approach various lossy and lossless compression methods in isolation, leading to challenges in achieving high compression ratios efficiently. This work proposes a post-training model size compression method that combines lossy and lossless compression in a unified way. We first propose a unified parametric weight transformation, which ensures different lossy compression methods can be performed jointly in a post-training manner. Then, a dedicated differentiable counter is introduced to guide the optimization of lossy compression to arrive at a more suitable point for later lossless compression. Additionally, our method can easily control a desired global compression ratio and allocate adaptive ratios for different layers. Finally, our method can achieve a stable $10\times$ compression ratio without sacrificing accuracy and a $20\times$ compression ratio with minor accuracy loss in a short time. Our code is available at https://github.com/ModelTC/L2_Compression . | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 384,352 |
2009.02429 | Player Identification in Hockey Broadcast Videos | We present a deep recurrent convolutional neural network (CNN) approach to solve the problem of hockey player identification in NHL broadcast videos. Player identification is a difficult computer vision problem mainly because of the players' similar appearance, occlusion, and blurry facial and physical features. However, we can observe players' jersey numbers over time by processing variable length image sequences of players (aka 'tracklets'). We propose an end-to-end trainable ResNet+LSTM network, with a residual network (ResNet) base and a long short-term memory (LSTM) layer, to discover spatio-temporal features of jersey numbers over time and learn long-term dependencies. For this work, we created a new hockey player tracklet dataset that contains sequences of hockey player bounding boxes. Additionally, we employ a secondary 1-dimensional convolutional neural network classifier as a late score-level fusion method to classify the output of the ResNet+LSTM network. This achieves an overall player identification accuracy score over 87% on the test split of our new dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 194,538 |
2306.08259 | LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting | Road traffic forecasting plays a critical role in smart city initiatives and has experienced significant advancements thanks to the power of deep learning in capturing non-linear patterns of traffic data. However, the promising results achieved on current public datasets may not be applicable to practical scenarios due to limitations within these datasets. First, the limited sizes of them may not reflect the real-world scale of traffic networks. Second, the temporal coverage of these datasets is typically short, posing hurdles in studying long-term patterns and acquiring sufficient samples for training deep models. Third, these datasets often lack adequate metadata for sensors, which compromises the reliability and interpretability of the data. To mitigate these limitations, we introduce the LargeST benchmark dataset. It encompasses a total number of 8,600 sensors in California with a 5-year time coverage and includes comprehensive metadata. Using LargeST, we perform in-depth data analysis to extract data insights, benchmark well-known baselines in terms of their performance and efficiency, and identify challenges as well as opportunities for future research. We release the datasets and baseline implementations at: https://github.com/liuxu77/LargeST. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 373,346 |
1401.3189 | Asymmetric Compute-and-Forward with CSIT | We present a modified compute-and-forward scheme which utilizes Channel State Information at the Transmitters (CSIT) in a natural way. The modified scheme allows different users to have different coding rates, and use CSIT to achieve larger rate region. This idea is applicable to all systems which use the compute-and-forward technique and can be arbitrarily better than the regular scheme in some settings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 29,816 |
1907.05846 | Exploring the context of course rankings on online academic forums | University students routinely use the tools provided by online course ranking forums to share and discuss their satisfaction with the quality of instruction and content in a wide variety of courses. Student perception of the efficacy of pedagogies employed in a course is a reflection of a multitude of decisions by professors, instructional designers and university administrators. This complexity has motivated a large body of research on the utility, reliability, and behavioral correlates of course rankings. There is, however, little investigation of the (potential) implicit student bias on these forums towards desirable course outcomes at the institution level. To that end, we examine the connection between course outcomes (student-reported GPA) and the overall ranking of the primary course instructor, as well as rating disparity by nature of course outcomes, based on data from two popular academic rating forums. Our experiments with ranking data about over ten thousand courses taught at Virginia Tech and its 25 SCHEV-approved peer institutions indicate that there is a discernible albeit complex bias towards course outcomes in the professor ratings registered by students. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 138,466 |
2205.05194 | Multiplexed Immunofluorescence Brain Image Analysis Using
Self-Supervised Dual-Loss Adaptive Masked Autoencoder | Reliable large-scale cell detection and segmentation is the fundamental first step to understanding biological processes in the brain. The ability to phenotype cells at scale can accelerate preclinical drug evaluation and system-level brain histology studies. The impressive advances in deep learning offer a practical solution to cell image detection and segmentation. Unfortunately, categorizing cells and delineating their boundaries for training deep networks is an expensive process that requires skilled biologists. This paper presents a novel self-supervised Dual-Loss Adaptive Masked Autoencoder (DAMA) for learning rich features from multiplexed immunofluorescence brain images. DAMA's objective function minimizes the conditional entropy in pixel-level reconstruction and feature-level regression. Unlike existing self-supervised learning methods based on a random image masking strategy, DAMA employs a novel adaptive mask sampling strategy to maximize mutual information and effectively learn brain cell data. To the best of our knowledge, this is the first effort to develop a self-supervised learning method for multiplexed immunofluorescence brain images. Our extensive experiments demonstrate that DAMA features enable superior cell detection, segmentation, and classification performance without requiring many annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 295,864 |
cs/0203007 | Two results for prioritized logic programming | Prioritized default reasoning has illustrated its rich expressiveness and flexibility in knowledge representation and reasoning. However, many important aspects of prioritized default reasoning have yet to be thoroughly explored. In this paper, we investigate two properties of prioritized logic programs in the context of answer set semantics. Specifically, we reveal a close relationship between mutual defeasibility and uniqueness of the answer set for a prioritized logic program. We then explore how the splitting technique for extended logic programs can be extended to prioritized logic programs. We prove splitting theorems that can be used to simplify the evaluation of a prioritized logic program under certain conditions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 537,522
2310.19068 | Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile
Streaming | Sketching algorithms have recently proven to be a powerful approach both for designing low-space streaming algorithms as well as fast polynomial time approximation schemes (PTAS). In this work, we develop new techniques to extend the applicability of sketching-based approaches to the sparse dictionary learning and the Euclidean $k$-means clustering problems. In particular, we initiate the study of the challenging setting where the dictionary/clustering assignment for each of the $n$ input points must be output, which has surprisingly received little attention in prior work. On the fast algorithms front, we obtain a new approach for designing PTAS's for the $k$-means clustering problem, which generalizes to the first PTAS for the sparse dictionary learning problem. On the streaming algorithms front, we obtain new upper bounds and lower bounds for dictionary learning and $k$-means clustering. In particular, given a design matrix $\mathbf A\in\mathbb R^{n\times d}$ in a turnstile stream, we show an $\tilde O(nr/\epsilon^2 + dk/\epsilon)$ space upper bound for $r$-sparse dictionary learning of size $k$, an $\tilde O(n/\epsilon^2 + dk/\epsilon)$ space upper bound for $k$-means clustering, as well as an $\tilde O(n)$ space upper bound for $k$-means clustering on random order row insertion streams with a natural "bounded sensitivity" assumption. On the lower bounds side, we obtain a general $\tilde\Omega(n/\epsilon + dk/\epsilon)$ lower bound for $k$-means clustering, as well as an $\tilde\Omega(n/\epsilon^2)$ lower bound for algorithms which can estimate the cost of a single fixed set of candidate centers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 403,840 |
2305.07241 | On the Optimality of Misspecified Kernel Ridge Regression | In the misspecified kernel ridge regression problem, researchers usually assume the underlying true function $f_{\rho}^{*} \in [\mathcal{H}]^{s}$, a less-smooth interpolation space of a reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ for some $s\in (0,1)$. The existing minimax optimal results require $\|f_{\rho}^{*}\|_{L^{\infty}}<\infty$ which implicitly requires $s > \alpha_{0}$ where $\alpha_{0}\in (0,1)$ is the embedding index, a constant depending on $\mathcal{H}$. Whether the KRR is optimal for all $s\in (0,1)$ is an outstanding problem lasting for years. In this paper, we show that KRR is minimax optimal for any $s\in (0,1)$ when the $\mathcal{H}$ is a Sobolev RKHS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 363,821
1510.05203 | Neural Reranking Improves Subjective Quality of Machine Translation:
NAIST at WAT2015 | This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 48,000 |
1001.3550 | Deconvolution of linear systems with quantized input: an information
theoretic viewpoint | In spite of the huge literature on deconvolution problems, very little is done for hybrid contexts where signals are quantized. In this paper we undertake an information theoretic approach to the deconvolution problem of a simple integrator with quantized binary input and sampled noisy output. We recast it into a decoding problem and we propose and analyze (theoretically and numerically) some low complexity on-line algorithms to achieve deconvolution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,471 |
2409.06263 | Keyword-Aware ASR Error Augmentation for Robust Dialogue State Tracking | Dialogue State Tracking (DST) is a key part of task-oriented dialogue systems, identifying important information in conversations. However, its accuracy drops significantly in spoken dialogue environments due to named entity errors from Automatic Speech Recognition (ASR) systems. We introduce a simple yet effective data augmentation method that targets those entities to improve the robustness of the DST model. Our novel method can control the placement of errors using keyword-highlighted prompts while introducing phonetically similar errors. As a result, our method generated sufficient error patterns on keywords, leading to improved accuracy in noisy and low-accuracy ASR environments. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 487,062
2105.11627 | Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Law | We introduced the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with discontinuous solution and showed that the method outperforms mesh-based numerical methods in terms of the number of degrees of freedom. This paper studies the LSNN method for scalar nonlinear hyperbolic conservation law. The method is a discretization of an equivalent least-squares (LS) formulation in the set of neural network functions with the ReLU activation function. Evaluation of the LS functional is done by using numerical integration and a conservative finite volume scheme. Numerical results of some test problems show that the method is capable of approximating the discontinuous interface of the underlying problem automatically through the free breaking lines of the ReLU neural network. Moreover, the method does not exhibit the common Gibbs phenomenon along the discontinuous interface. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 236,765
2311.04491 | Explainable AI for Earth Observation: Current Methods, Open Challenges,
and Opportunities | Deep learning has taken by storm all fields involved in data analysis, including remote sensing for Earth observation. However, despite significant advances in terms of performance, its lack of explainability and interpretability, inherent to neural networks in general since their inception, remains a major source of criticism. Hence it comes as no surprise that the expansion of deep learning methods in remote sensing is being accompanied by increasingly intensive efforts oriented towards addressing this drawback through the exploration of a wide spectrum of Explainable Artificial Intelligence techniques. This chapter, organized according to prominent Earth observation application fields, presents a panorama of the state-of-the-art in explainable remote sensing image analysis. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 406,241 |
1309.5803 | Scalable Anomaly Detection in Large Homogenous Populations | Anomaly detection in large populations is a challenging but highly relevant problem. The problem is essentially a multi-hypothesis problem, with a hypothesis for every division of the systems into normal and anomalous systems. The number of hypotheses grows rapidly with the number of systems, and approximate solutions become a necessity for any problem of practical interest. In the current paper we take an optimization approach to this multi-hypothesis problem. We first observe that the problem is equivalent to a non-convex combinatorial optimization problem. We then relax the problem to a convex problem that can be solved distributively on the systems and that stays computationally tractable as the number of systems increases. An interesting property of the proposed method is that it can under certain conditions be shown to give exactly the same result as the combinatorial multi-hypothesis problem, and the relaxation is hence tight. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 27,196
1504.05474 | A Simple Algorithm for Approximation by Nomographic Functions | This paper introduces a novel algorithmic solution for the approximation of a given multivariate function by a nomographic function that is composed of a one-dimensional continuous and monotone outer function and a sum of univariate continuous inner functions. We show that a suitable approximation can be obtained by solving a cone-constrained Rayleigh-Quotient optimization problem. The proposed approach is based on a combination of a dimensionwise function decomposition known as Analysis of Variance (ANOVA) and optimization over a class of monotone polynomials. An example is given to show that the proposed algorithm can be applied to solve problems in distributed function computation over multiple-access channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 42,276 |
2401.00578 | Exact Error in Matrix Completion: Approximately Low-Rank Structures and
Missing Blocks | We study the completion of approximately low rank matrices with entries missing not at random (MNAR). In the context of typical large-dimensional statistical settings, we establish a framework for the performance analysis of the nuclear norm minimization ($\ell_1^*$) algorithm. Our framework produces \emph{exact} estimates of the worst-case residual root mean squared error and the associated phase transitions (PT), with both exhibiting remarkably simple characterizations. Our results enable us to {\it precisely} quantify the impact of key system parameters, including data heterogeneity, size of the missing block, and deviation from ideal low rankness, on the accuracy of $\ell_1^*$-based matrix completion. To validate our theoretical worst-case RMSE estimates, we conduct numerical simulations, demonstrating close agreement with their numerical counterparts. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 419,029
1509.01074 | A Novice Guide towards Human Motion Analysis and Understanding | Human motion analysis and understanding has been, and still is, the focus of attention of many disciplines, which is an obvious indicator of the wide and massive importance of the subject. The purpose of this article is to shed some light on this very important subject, so that it can serve as a useful guide for a novice computer vision researcher in this field by providing him/her with a wealth of knowledge about the subject covering many directions. There are two main contributions of this article. The first one investigates various aspects of some disciplines (e.g., arts, philosophy, psychology, and neuroscience) that are interested in the subject and reviews some of their contributions, emphasizing those that can be useful for computer vision researchers. Moreover, many examples are illustrated to indicate the benefits of integrating concepts and results among different disciplines. The second contribution is concerned with the subject from the computer vision aspect, where we discuss the following issues. First, we explore many demanding and promising applications to reveal the wide and massive importance of the field. Second, we list various types of sensors that may be used for acquiring various data. Third, we review different taxonomies used for classifying motions. Fourth, we review various processes involved in motion analysis. Fifth, we exhibit how different surveys are structured. Sixth, we examine many of the most cited and recent reviews in the field that have been published during the past two decades to reveal various approaches used for implementing different stages of the problem and refer to various algorithms and their suitability for different situations. Moreover, we provide a long list of public datasets and discuss briefly some examples of these datasets. Finally, we provide a general discussion of the subject from the aspect of computer vision. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 46,562
2211.15235 | Reducing Domain Gap in Frequency and Spatial domain for Cross-modality
Domain Adaptation on Medical Image Segmentation | Unsupervised domain adaptation (UDA) aims to learn a model trained on a source domain that performs well on an unlabeled target domain. In the medical image segmentation field, most existing UDA methods depend on adversarial learning to address the domain gap between different image modalities, which is ineffective due to its complicated training process. In this paper, we propose a simple yet effective UDA method based on frequency and spatial domain transfer under a multi-teacher distillation framework. In the frequency domain, we first introduce the non-subsampled contourlet transform for identifying domain-invariant and domain-variant frequency components (DIFs and DVFs), and then keep the DIFs unchanged while replacing the DVFs of the source domain images with those of the target domain images to narrow the domain gap. In the spatial domain, we propose a batch momentum update-based histogram matching strategy to reduce the domain-variant image style bias. Experiments on two cross-modality medical image segmentation datasets (cardiac, abdominal) show that our proposed method achieves superior performance compared to state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 333,173
0803.0755 | Toeplitz Block Matrices in Compressed Sensing | Recent work in compressed sensing theory shows that $n\times N$ independent and identically distributed (IID) sensing matrices whose entries are drawn independently from certain probability distributions guarantee exact recovery of a sparse signal with high probability even if $n\ll N$. Motivated by signal processing applications, random filtering with Toeplitz sensing matrices whose elements are drawn from the same distributions were considered and shown to also be sufficient to recover a sparse signal from reduced samples exactly with high probability. This paper considers Toeplitz block matrices as sensing matrices. They naturally arise in multichannel and multidimensional filtering applications and include Toeplitz matrices as special cases. It is shown that the probability of exact reconstruction is also high. Their performance is validated using simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,391 |
2502.00855 | Psychometric-Based Evaluation for Theorem Proving with Large Language
Models | Large language models (LLMs) for formal theorem proving have become a prominent research focus. At present, the proving ability of these LLMs is mainly evaluated through proof pass rates on datasets such as miniF2F. However, this evaluation method overlooks the varying importance of theorems. As a result, it fails to highlight the real performance disparities between LLMs and leads to high evaluation costs. This study proposes a psychometric-based evaluation method for theorem proving with LLMs, comprising two main components: Dataset Annotation and Adaptive Evaluation. First, we propose a metric calculation method to annotate the dataset with difficulty and discrimination metrics. Specifically, we annotate each theorem in the miniF2F dataset and grade them into varying difficulty levels according to the performance of LLMs, resulting in an enhanced dataset: miniF2F-Graded. Experimental results show that the difficulty grading in miniF2F-Graded better reflects the theorem difficulty perceived by LLMs. Secondly, we design an adaptive evaluation method to dynamically select the most suitable theorems for testing based on the annotated metrics and the real-time performance of LLMs. We apply this method to evaluate 10 LLMs. The results show that our method finely highlights the performance disparities between LLMs. It also reduces evaluation costs by using only 23% of the theorems in the dataset. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 529,580 |
2410.21804 | Efficient and Effective Weight-Ensembling Mixture of Experts for
Multi-Task Model Merging | Multi-task learning (MTL) leverages a shared model to accomplish multiple tasks and facilitate knowledge transfer. Recent research on task arithmetic-based MTL demonstrates that merging the parameters of independently fine-tuned models can effectively achieve MTL. However, existing merging methods primarily seek a static optimal solution within the original model parameter space, which often results in performance degradation due to the inherent diversity among tasks and potential interferences. To address this challenge, in this paper, we propose a Weight-Ensembling Mixture of Experts (WEMoE) method for multi-task model merging. Specifically, we first identify critical (or sensitive) modules by analyzing parameter variations in core modules of Transformer-based models before and after fine-tuning. Then, our WEMoE statically merges non-critical modules while transforming critical modules into a mixture-of-experts (MoE) structure. During inference, expert modules in the MoE are dynamically merged based on input samples, enabling a more flexible and adaptive merging approach. Building on WEMoE, we further introduce an efficient-and-effective WEMoE (E-WEMoE) method, whose core mechanism involves eliminating non-essential elements in the critical modules of WEMoE and implementing shared routing across multiple MoE modules, thereby significantly reducing the trainable parameters, the overall parameter count, and the computational overhead of the merged model by WEMoE. Experimental results across various architectures and tasks demonstrate that both WEMoE and E-WEMoE outperform state-of-the-art (SOTA) model merging methods in terms of MTL performance, generalization, and robustness. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 503,398
2005.04330 | Generalizing Outside the Training Set: When Can Neural Networks Learn
Identity Effects? | Often in language and other areas of cognition, whether two components of an object are identical or not determines whether it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a simple framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of algorithms, including deep neural networks with standard architecture and training with backpropagation, satisfy our criteria, dependent on the encoding of inputs. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 176,428
2204.08683 | Imbalanced Classification via a Tabular Translation GAN | When presented with a binary classification problem where the data exhibits severe class imbalance, most standard predictive methods may fail to accurately model the minority class. We present a model based on Generative Adversarial Networks which uses additional regularization losses to map majority samples to corresponding synthetic minority samples. This translation mechanism encourages the synthesized samples to be close to the class boundary. Furthermore, we explore a selection criterion to retain the most useful of the synthesized samples. Experimental results using several downstream classifiers on a variety of tabular class-imbalanced datasets show that the proposed method improves average precision when compared to alternative re-weighting and oversampling techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 292,175 |
2111.13943 | Modeling VI and VDRL feedback functions: searching normative rules
through computational simulation | In this paper, we present an R script named Beak, built to simulate rates of behavior interacting with schedules of reinforcement. Using Beak, we've simulated data that allow an assessment of different reinforcement feedback functions (RFF). This was made with unparalleled precision, since simulations provide huge samples of data and, more importantly, simulated behavior isn't changed by the reinforcement it produces. Therefore, we can vary it systematically. We've compared different RFF for RI schedules, using as criteria: meaning, precision, parsimony and generality. Our results indicate that the best feedback function for the RI schedule was published by Baum (1981). We also propose that the model used by Killeen (1975) is a viable feedback function for the RDRL schedule. We argue that Beak paves the way for greater understanding of schedules of reinforcement, addressing still-open questions about quantitative features of schedules. These results could also guide future experiments that use schedules as theoretical and methodological tools. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 268,436
2210.02648 | Self-triggered Consensus of Multi-agent Systems with Quantized Relative
State Measurements | This paper addresses the consensus problem of first-order continuous-time multi-agent systems over undirected graphs. Each agent samples relative state measurements in a self-triggered fashion and transmits the sum of the measurements to its neighbors. Moreover, we use finite-level dynamic quantizers and apply the zooming-in technique. The proposed joint design method for quantization and self-triggered sampling achieves asymptotic consensus, and inter-event times are strictly positive. Sampling times are determined explicitly with iterative procedures including the computation of the Lambert $W$-function. A simulation example is provided to illustrate the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 321,726 |
2312.10237 | A Distributed Privacy Preserving Model for the Detection of Alzheimer's
Disease | In the era of rapidly advancing medical technologies, the segmentation of medical data has become inevitable, necessitating the development of privacy preserving machine learning algorithms that can train on distributed data. Consolidating sensitive medical data is not always an option, particularly due to the stringent privacy regulations imposed by the Health Insurance Portability and Accountability Act (HIPAA). In this paper, I introduce a HIPAA compliant framework that can train from distributed data. I then propose a multimodal vertical federated model for Alzheimer's Disease (AD) detection, a serious neurodegenerative condition that can cause dementia, severely impairing brain function and hindering simple tasks, especially without preventative care. This vertical federated learning (VFL) model offers a novel distributed architecture that enables collaborative learning across diverse sources of medical data while respecting the privacy constraints imposed by HIPAA. By leveraging multiple modalities of data, the robustness and accuracy of AD detection can be enhanced. This model not only contributes to the advancement of federated learning techniques but also holds promise for overcoming the hurdles posed by data segmentation in medical research. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 416,071
2311.01111 | H-NeXt: The next step towards roto-translation invariant networks | The widespread popularity of equivariant networks underscores the significance of parameter efficient models and effective use of training data. At a time when robustness to unseen deformations is becoming increasingly important, we present H-NeXt, which bridges the gap between equivariance and invariance. H-NeXt is a parameter-efficient roto-translation invariant network that is trained without a single augmented image in the training set. Our network comprises three components: an equivariant backbone for learning roto-translation independent features, an invariant pooling layer for discarding roto-translation information, and a classification layer. H-NeXt outperforms the state of the art in classification on unaugmented training sets and augmented test sets of MNIST and CIFAR-10. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 404,921 |
1505.04265 | Cognitive Development of the Web | The sociotechnological system is a system constituted of human individuals and their artifacts: technological artifacts, institutions, conceptual and representational systems, worldviews, knowledge systems, culture and the whole biosphere as an evolutionary niche. In our view the sociotechnological system as a super-organism is shaped and determined both by the characteristics of the agents involved and the characteristics emergent in their interactions at multiple scales. Our approach to sociotechnological dynamics will maintain a balance between perspectives: the individual and the collective. Accordingly, we analyze dynamics of the Web as a sociotechnological system made of people, computers and digital artifacts (Web pages, databases, search engines, etc.). Making sense of the sociotechnological system while being part of it is also a constant interplay between pragmatic and value based approaches. The first focuses on the actualities of the system while the second highlights the observer's projections. In our attempt to model sociotechnological dynamics and envision its future, we take special care to make explicit our values as part of the analysis. In sociotechnological systems with a high degree of reflexivity (coupling between the perception of the system and the system's behavior), highlighting values is of critical importance. In this essay, we choose to see the future evolution of the web as facilitating a basic value, that is, continuous open-ended intelligence expansion. By that we mean that we see intelligence expansion as the determinant of the 'greater good' and 'well being' of both individuals and collectives at all scales. Our working definition of intelligence here is the progressive process of sense-making of self, other, environment and universe. Intelligence expansion, therefore, means an increasing ability of sense-making. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 43,170
2306.04641 | Generalizable Low-Resource Activity Recognition with Diverse and
Discriminative Representation Learning | Human activity recognition (HAR) is a time series classification task that focuses on identifying the motion patterns from human sensor readings. Adequate data is essential but a major bottleneck for training a generalizable HAR model, which assists customization and optimization of online web applications. However, it is costly in time and economy to collect large-scale labeled data in reality, i.e., the low-resource challenge. Meanwhile, data collected from different persons have distribution shifts due to different living habits, body shapes, age groups, etc. The low-resource and distribution shift challenges are detrimental to HAR when applying the trained model to new unseen subjects. In this paper, we propose a novel approach called Diverse and Discriminative representation Learning (DDLearn) for generalizable low-resource HAR. DDLearn simultaneously considers diversity and discrimination learning. With the constructed self-supervised learning task, DDLearn enlarges the data diversity and explores the latent activity properties. Then, we propose a diversity preservation module to preserve the diversity of learned features by enlarging the distribution divergence between the original and augmented domains. Meanwhile, DDLearn also enhances semantic discrimination by learning discriminative representations with supervised contrastive learning. Extensive experiments on three public HAR datasets demonstrate that our method significantly outperforms state-of-the-art methods by an average accuracy improvement of 9.5% under the low-resource distribution shift scenarios, while being a generic, explainable, and flexible framework. Code is available at: https://github.com/microsoft/robustlearn. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 371,832
1902.04742 | Uniform convergence may be unable to explain generalization in deep
learning | Aimed at explaining the surprisingly good generalization behavior of overparameterized deep networks, recent works have developed a variety of generalization bounds for deep learning, all based on the fundamental learning-theoretic technique of uniform convergence. While it is well-known that many of these existing bounds are numerically large, through numerous experiments, we bring to light a more concerning aspect of these bounds: in practice, these bounds can {\em increase} with the training dataset size. Guided by our observations, we then present examples of overparameterized linear classifiers and neural networks trained by gradient descent (GD) where uniform convergence provably cannot "explain generalization" -- even if we take into account the implicit bias of GD {\em to the fullest extent possible}. More precisely, even if we consider only the set of classifiers output by GD, which have test errors less than some small $\epsilon$ in our settings, we show that applying (two-sided) uniform convergence on this set of classifiers will yield only a vacuous generalization guarantee larger than $1-\epsilon$. Through these findings, we cast doubt on the power of uniform convergence-based generalization bounds to provide a complete picture of why overparameterized deep networks generalize well. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 121,412 |
2012.10773 | Forming Real-World Human-Robot Cooperation for Tasks With General Goal | In human-robot cooperation, the robot cooperates with humans to accomplish the task together. Existing approaches assume the human has a specific goal during the cooperation, and the robot infers and acts toward it. However, in real-world environments, a human usually only has a general goal (e.g., general direction or area in motion planning) at the beginning of the cooperation, which needs to be clarified into a specific goal (i.e., an exact position) during cooperation. The specification process is interactive and dynamic, depending on the environment and the partner's behavior. A robot that does not consider the goal specification process may cause frustration to the human partner, prolong the time needed to reach an agreement, and compromise team performance. This work presents the Evolutionary Value Learning approach to model the dynamics of the goal specification process with State-based Multivariate Bayesian Inference and goal specificity-related features. This model enables the robot to actively enhance the process of the human's goal specification and find a cooperative policy in a Deep Reinforcement Learning manner. Our method outperforms existing methods with faster goal specification processes and better team performance in a dynamic ball balancing task with real human subjects. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 212,435
1912.02718 | All-Digital Massive MIMO Uplink and Downlink Rates under a Fronthaul
Constraint | We characterize the rate achievable in a bidirectional quasi-static link where several user equipments communicate with a massive multiple-input multiple-output base station (BS). In the considered setup, the BS operates in full-digital mode, the physical size of the antenna array is limited, and there exists a rate constraint on the fronthaul interface connecting the (possibly remote) radio head to the digital baseband processing unit. Our analysis enables us to determine the optimal resolution of the analog-to-digital and digital-to-analog converters as well as the optimal number of active antenna elements to be used in order to maximize the transmission rate on the bidirectional link, for a given constraint on the outage probability and on the fronthaul rate. We investigate both the case in which perfect channel-state information is available, and the case in which channel-state information is acquired through pilot transmission, and is, hence, imperfect. For the second case, we present a novel rate expression that relies on the generalized mutual-information framework. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 156,418 |
2309.14207 | Automatic Animation of Hair Blowing in Still Portrait Photos | We propose a novel approach to animate human hair in a still portrait photo. Existing work has largely studied the animation of fluid elements such as water and fire. However, hair animation for a real image remains underexplored, which is a challenging problem, due to the high complexity of hair structure and dynamics. Considering the complexity of hair structure, we innovatively treat hair wisp extraction as an instance segmentation problem, where a hair wisp is referred to as an instance. With advanced instance segmentation networks, our method extracts meaningful and natural hair wisps. Furthermore, we propose a wisp-aware animation module that animates hair wisps with pleasing motions without noticeable artifacts. The extensive experiments show the superiority of our method. Our method provides the most pleasing and compelling viewing experience in the qualitative experiments and outperforms state-of-the-art still-image animation methods by a large margin in the quantitative evaluation. Project url: \url{https://nevergiveu.github.io/AutomaticHairBlowing/} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 394,500 |
2305.06044 | Correlation visualization under missing values: a comparison between
imputation and direct parameter estimation methods | Correlation matrix visualization is essential for understanding the relationships between variables in a dataset, but missing data can pose a significant challenge in estimating correlation coefficients. In this paper, we compare the effects of various missing data methods on the correlation plot, focusing on two common missing patterns: random and monotone. We aim to provide practical strategies and recommendations for researchers and practitioners in creating and analyzing the correlation plot. Our experimental results suggest that while imputation is commonly used for missing data, using imputed data for plotting the correlation matrix may lead to a significantly misleading inference of the relation between the features. We recommend using DPER, a direct parameter estimation approach, for plotting the correlation matrix based on its performance in the experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 363,384 |
1806.05009 | Tree Edit Distance Learning via Adaptive Symbol Embeddings | Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 100,368 |
2012.03704 | Conversational Browsing | How can we better understand the mechanisms behind multi-turn information seeking dialogues? How can we use these insights to design a dialogue system that does not require explicit query formulation upfront as in question answering? To answer these questions, we collected observations of human participants performing a similar task to obtain inspiration for the system design. Then, we studied the structure of conversations that occurred in these settings and used the resulting insights to develop a grounded theory, design and evaluate a first system prototype. Evaluation results show that our approach is effective and can complement query-based information retrieval approaches. We contribute new insights about information-seeking behavior by analyzing and providing automated support for a type of information-seeking strategy that is effective when the clarity of the information need and familiarity with the collection content are low. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 210,218 |
2004.06152 | Sparse Regression at Scale: Branch-and-Bound rooted in First-Order
Optimization | We consider the least squares regression problem, penalized with a combination of the $\ell_{0}$ and squared $\ell_{2}$ penalty functions (a.k.a. $\ell_0 \ell_2$ regularization). Recent work shows that the resulting estimators are of key importance in many high-dimensional statistical settings. However, exact computation of these estimators remains a major challenge. Indeed, modern exact methods, based on mixed integer programming (MIP), face difficulties when the number of features $p \sim 10^4$. In this work, we present a new exact MIP framework for $\ell_0\ell_2$-regularized regression that can scale to $p \sim 10^7$, achieving speedups of at least $5000$x, compared to state-of-the-art exact methods. Unlike recent work, which relies on modern commercial MIP solvers, we design a specialized nonlinear branch-and-bound (BnB) framework, by critically exploiting the problem structure. A key distinguishing component in our framework lies in efficiently solving the node relaxations using a specialized first-order method, based on coordinate descent (CD). Our CD-based method effectively leverages information across the BnB nodes, through using warm starts, active sets, and gradient screening. In addition, we design a novel method for obtaining dual bounds from primal CD solutions, which certifiably works in high dimensions. Experiments on synthetic and real high-dimensional datasets demonstrate that our framework is not only significantly faster than the state of the art, but can also deliver certifiably optimal solutions to statistically challenging instances that cannot be handled with existing methods. We open source the implementation through our toolkit L0BnB. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 172,421 |
2002.05262 | HAN-ECG: An Interpretable Atrial Fibrillation Detection Model Using
Hierarchical Attention Networks | Atrial fibrillation (AF) is one of the most prevalent cardiac arrhythmias, affecting the lives of more than 3 million people in the U.S. and over 33 million people around the world, and is associated with a five-fold increased risk of stroke and mortality. Like other problems in the healthcare domain, artificial intelligence (AI)-based algorithms have been used to reliably detect AF from patients' physiological signals. Cardiologist-level performance in detecting this arrhythmia is often achieved by deep learning-based methods; however, they suffer from a lack of interpretability. In other words, these approaches are unable to explain the reasons behind their decisions. The lack of interpretability is a common challenge to the wide application of machine learning-based approaches in healthcare, which limits clinicians' trust in such methods. To address this challenge, we propose HAN-ECG, an interpretable bidirectional-recurrent-neural-network-based approach for the AF detection task. HAN-ECG employs three attention mechanism levels to provide a multi-resolution analysis of the patterns in ECG leading to AF. The first level, the wave level, computes the wave weights; the second level, the heartbeat level, calculates the heartbeat weights; and the third level, the window (i.e., multiple heartbeats) level, produces the window weights for triggering a class of interest. The patterns detected by this hierarchical attention model facilitate the interpretation of the neural network's decision process by identifying the patterns in the signal that contributed the most to the final prediction. Experimental results on two AF databases demonstrate that our proposed model performs significantly better than existing algorithms. Visualization of these attention layers illustrates that our model decides upon the important waves and heartbeats, which are clinically meaningful in the detection task. 
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 163,841 |
1705.05414 | Key-Value Retrieval Networks for Task-Oriented Dialogue | Neural task-oriented dialogue systems often struggle to smoothly interface with a knowledge base. In this work, we seek to address this problem by proposing a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism. The model is end-to-end differentiable and does not need to explicitly model dialogue state or belief trackers. We also release a new dataset of 3,031 dialogues that are grounded through underlying knowledge bases and span three distinct tasks in the in-car personal assistant space: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our architecture is simultaneously trained on data from all domains and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 73,483 |
2110.03828 | SkullEngine: A Multi-stage CNN Framework for Collaborative CBCT Image
Segmentation and Landmark Detection | We propose a multi-stage coarse-to-fine CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the task of segmenting 2 bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 259,641 |
2409.16684 | Erase then Rectify: A Training-Free Parameter Editing Approach for
Cost-Effective Graph Unlearning | Graph unlearning, which aims to eliminate the influence of specific nodes, edges, or attributes from a trained Graph Neural Network (GNN), is essential in applications where privacy, bias, or data obsolescence is a concern. However, existing graph unlearning techniques often necessitate additional training on the remaining data, leading to significant computational costs, particularly with large-scale graphs. To address these challenges, we propose a two-stage training-free approach, Erase then Rectify (ETR), designed for efficient and scalable graph unlearning while preserving the model utility. Specifically, we first build a theoretical foundation showing that masking parameters critical for unlearned samples enables effective unlearning. Building on this insight, the Erase stage strategically edits model parameters to eliminate the impact of unlearned samples and their propagated influence on intercorrelated nodes. To further ensure the GNN's utility, the Rectify stage devises a gradient approximation method to estimate the model's gradient on the remaining dataset, which is then used to enhance model performance. Overall, ETR achieves graph unlearning without additional training or full training data access, significantly reducing computational overhead and preserving data privacy. Extensive experiments on seven public datasets demonstrate the consistent superiority of ETR in model utility, unlearning efficiency, and unlearning effectiveness, establishing it as a promising solution for real-world graph unlearning challenges. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 491,461 |
0707.3409 | Faster exon assembly by sparse spliced alignment | Assembling a gene from candidate exons is an important problem in computational biology. Among the most successful approaches to this problem is \emph{spliced alignment}, proposed by Gelfand et al., which scores different candidate exon chains within a DNA sequence of length $m$ by comparing them to a known related gene sequence of length $n$, $m = \Theta(n)$. Gelfand et al.\ gave an algorithm for spliced alignment running in time $O(n^3)$. Kent et al.\ considered sparse spliced alignment, where the number of candidate exons is $O(n)$, and proposed an algorithm for this problem running in time $O(n^{2.5})$. We improve on this result by proposing an algorithm for sparse spliced alignment running in time $O(n^{2.25})$. Our approach is based on a new framework of \emph{quasi-local string comparison}. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 471 |
2210.02998 | ThoraX-PriorNet: A Novel Attention-Based Architecture Using Anatomical
Prior Probability Maps for Thoracic Disease Classification | Objective: Computer-aided disease diagnosis and prognosis based on medical images is a rapidly emerging field. Many Convolutional Neural Network (CNN) architectures have been developed by researchers for disease classification and localization from chest X-ray images. It is known that different thoracic disease lesions are more likely to occur in specific anatomical regions compared to others. This article aims to incorporate this disease and region-dependent prior probability distribution within a deep learning framework. Methods: We present the ThoraX-PriorNet, a novel attention-based CNN model for thoracic disease classification. We first estimate a disease-dependent spatial probability, i.e., an anatomical prior, that indicates the probability of occurrence of a disease in a specific region in a chest X-ray image. Next, we develop a novel attention-based classification model that combines information from the estimated anatomical prior and automatically extracted chest region of interest (ROI) masks to provide attention to the feature maps generated from a deep convolution network. Unlike previous works that utilize various self-attention mechanisms, the proposed method leverages the extracted chest ROI masks along with the probabilistic anatomical prior information, which selects the region of interest for different diseases to provide attention. Results: The proposed method shows superior performance in disease classification on the NIH ChestX-ray14 dataset compared to existing state-of-the-art methods while reaching an area under the ROC curve (%AUC) of 84.67. Regarding disease localization, the anatomy prior attention method shows competitive performance compared to state-of-the-art methods, achieving an accuracy of 0.80, 0.63, 0.49, 0.33, 0.28, 0.21, and 0.04 with an Intersection over Union (IoU) threshold of 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7, respectively. 
| false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 321,857 |
2412.03039 | MRNet: Multifaceted Resilient Networks for Medical Image-to-Image
Translation | We propose a Multifaceted Resilient Network (MRNet), a novel architecture developed for medical image-to-image translation that outperforms state-of-the-art methods in MRI-to-CT and MRI-to-MRI conversion. MRNet leverages the Segment Anything Model (SAM) to exploit frequency-based features to build a powerful method for advanced medical image transformation. The architecture extracts comprehensive multiscale features from diverse datasets using a powerful SAM image encoder and performs resolution-aware feature fusion that consistently integrates U-Net encoder outputs with SAM-derived features. This fusion optimizes the traditional U-Net skip connection while leveraging transformer-based contextual analysis. The translation is complemented by an innovative dual-mask configuration incorporating dynamic attention patterns and a specialized loss function designed to address regional mapping mismatches, preserving both the gross anatomy and tissue details. Extensive validation studies have shown that MRNet outperforms state-of-the-art architectures, particularly in maintaining anatomical fidelity and minimizing translation artifacts. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 513,788 |
2207.08675 | Learning differentiable solvers for systems with hard constraints | We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs), with a high degree of accuracy and up to a desired tolerance. We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture. Our method leverages differentiable optimization and the implicit function theorem to effectively enforce physical constraints. Inspired by dictionary learning, our model learns a family of functions, each of which defines a mapping from PDE parameters to PDE solutions. At inference time, the model finds an optimal linear combination of the functions in the learned family by solving a PDE-constrained optimization problem. Our method provides continuous solutions over the domain of interest that accurately satisfy desired physical constraints. Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 308,663 |
1711.02809 | A New Hybrid-parameter Recurrent Neural Networks for Online Handwritten
Chinese Character Recognition | The recurrent neural network (RNN) is appropriate for dealing with temporal sequences. In this paper, we present a deep RNN with new features and apply it to online handwritten Chinese character recognition. Compared with existing RNN models, the proposed system involves three innovations. First, a new hidden layer function for RNNs is proposed for better learning of temporal information; we call it the Memory Pool Unit (MPU). The proposed MPU has a simple architecture. Second, a new RNN architecture with hybrid parameters is presented in order to increase the expressive capacity of the RNN: the parameters of the proposed hybrid-parameter RNN change across iterations along the temporal dimension. Third, we make an adaptation in which the outputs of all layers are stacked as the output of the network. Stacked hidden layer states combine all the hidden layer states, increasing the expressive capacity. Experiments are carried out on the IAHCC-UCAS2016 dataset and the CASIA-OLHWDB1.1 dataset. The experimental results show that the hybrid-parameter RNN obtains better recognition performance with higher efficiency (fewer parameters and faster speed), and the proposed Memory Pool Unit proves to be a simple hidden layer function that obtains competitive recognition results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 84,121 |
1506.03995 | Technical Report: Image Captioning with Semantically Similar Images | This report presents our submission to the MS COCO Captioning Challenge 2015. The method uses Convolutional Neural Network activations as an embedding to find semantically similar images. From these images, the most typical caption is selected based on unigram frequencies. Although the method received low scores with automated evaluation metrics and in human assessed average correctness, it is competitive in the ratio of captions which pass the Turing test and which are assessed as better or equal to human captions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 44,115 |
1302.5554 | Self-similar prior and wavelet bases for hidden incompressible turbulent
motion | This work is concerned with the ill-posed inverse problem of estimating turbulent flows from the observation of an image sequence. From a Bayesian perspective, a divergence-free isotropic fractional Brownian motion (fBm) is chosen as a prior model for instantaneous turbulent velocity fields. This self-similar prior accurately characterizes second-order statistics of velocity fields in incompressible isotropic turbulence. Nevertheless, the associated maximum a posteriori involves a fractional Laplacian operator which is delicate to implement in practice. To deal with this issue, we propose to decompose the divergence-free fBm on well-chosen wavelet bases. As a first alternative, we propose to design wavelets as whitening filters. We show that these filters are fractional Laplacian wavelets composed with the Leray projector. As a second alternative, we use a divergence-free wavelet basis, which implicitly takes into account the incompressibility constraint arising from physics. Although the latter decomposition involves correlated wavelet coefficients, we are able to handle this dependence in practice. Based on these two wavelet decompositions, we finally provide effective and efficient algorithms to approach the maximum a posteriori. An intensive numerical evaluation proves the relevance of the proposed wavelet-based self-similar priors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 22,307 |
2101.09710 | Exploitation of Image Statistics with Sparse Coding in the Case of
Stereo Vision | The sparse coding algorithm has served as a model for early processing in mammalian vision. It has been assumed that the brain uses sparse coding to exploit statistical properties of the sensory stream. We hypothesize that sparse coding discovers patterns from the data set, which can be used to estimate a set of stimulus parameters by simple readout. In this study, we chose a model of stereo vision to test our hypothesis. We used the Locally Competitive Algorithm (LCA), followed by a na\"ive Bayes classifier, to infer stereo disparity. From the results we report three observations. First, disparity inference was successful with this naturalistic processing pipeline. Second, an expanded, highly redundant representation is required to robustly identify the input patterns. Third, the inference error can be predicted from the number of active coefficients in the LCA representation. We conclude that sparse coding can generate a suitable general representation for subsequent inference tasks. Keywords: Sparse coding; Locally Competitive Algorithm (LCA); Efficient coding; Compact code; Probabilistic inference; Stereo vision | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 216,692 |
2211.07045 | Tracking control on homogeneous spaces: the Equivariant Regulator (EqR) | Accurate tracking of planned trajectories in the presence of perturbations is an important problem in control and robotics. Symmetry is a fundamental mathematical feature of many dynamical systems, and exploiting this property offers the potential of improved tracking performance. In this paper, we investigate the tracking problem for systems on homogeneous spaces, manifolds which admit symmetries with transitive group actions. We show that there is a natural manner to lift any desired trajectory of such a system to a lifted trajectory on the symmetry group. This construction allows us to define a global tracking error and apply LQR design to obtain an approximately optimal control in a single coordinate chart. The resulting control is then applied to the original plant and shown to yield excellent tracking performance. We term the resulting design methodology the Equivariant Regulator (EqR). We provide an example system posed on a homogeneous space, derive the trajectory linearisation in error coordinates and demonstrate the effectiveness of EqR compared to standard approaches in simulation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 330,104 |
1711.05366 | Velocity variations at Columbia Glacier captured by particle filtering
of oblique time-lapse images | We develop a probabilistic method for tracking glacier surface motion based on time-lapse imagery, which works by sequentially resampling a stochastic state-space model according to a likelihood determined through correlation between reference and test images. The method is robust due to its natural handling of periodic occlusion and its capacity to follow multiple hypothesis displacements between images, and can improve estimates of velocity magnitude and direction through the inclusion of observations from an arbitrary number of cameras. We apply the method to an annual record of images from two cameras near the terminus of Columbia Glacier. While the method produces velocities at daily resolution, we verify our results by comparing eleven-day means to TerraSar-X. We find that Columbia Glacier transitions between a winter state characterized by moderate velocities and little temporal variability, to an early summer speed-up in which velocities are sensitive to increases in melt- and rainwater, to a fall slowdown, where velocities drop to below their winter mean and become insensitive to external forcing, a pattern consistent with the development and collapse of efficient and inefficient subglacial hydrologic networks throughout the year. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 84,547 |
1302.1669 | Possible and Necessary Winner Problem in Social Polls | Social networks are increasingly being used to conduct polls. We introduce a simple model of such social polling. We suppose agents vote sequentially, but the order in which agents choose to vote is not necessarily fixed. We also suppose that an agent's vote is influenced by the votes of their friends who have already voted. Despite its simplicity, this model provides useful insights into a number of areas including social polling, sequential voting, and manipulation. We prove that the number of candidates and the network structure affect the computational complexity of computing which candidate necessarily or possibly can win in such a social poll. For social networks with bounded treewidth and a bounded number of candidates, we provide polynomial algorithms for both problems. In other cases, we prove that computing which candidates necessarily or possibly win are computationally intractable. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 21,885 |
2405.14406 | A Unification Between Deep-Learning Vision, Compartmental Dynamical
Thermodynamics, and Robotic Manipulation for a Circular Economy | The shift from a linear to a circular economy has the potential to simultaneously reduce uncertainties of material supplies and waste generation. However, to date, the development of robotic and, more generally, autonomous systems has rarely been integrated into circular economy implementation strategies, despite their potential to reduce the operational costs and the contamination risks from handling waste. In addition, the science of circularity still lacks the physical foundations needed to improve the accuracy and the repeatability of the models. Hence, in this paper, we merge deep-learning vision, compartmental dynamical thermodynamics, and robotic manipulation into a theoretically-coherent physics-based research framework to lay the foundations of circular flow designs of materials. The proposed framework tackles circularity by generalizing the design approach of the Rankine cycle enhanced with dynamical systems theory. This differs from state-of-the-art approaches to circular economy, which are mainly based on data analysis, e.g., material flow analysis (MFA). We begin by reviewing the literature of the three abovementioned research areas, then we introduce the proposed unified framework, and we report the initial application of the framework to plastics systems along with initial simulation results of reinforcement-learning control of robotic waste sorting. This shows the framework's applicability, generality, scalability, and the similarity and difference between the optimization of artificial neural systems and the proposed compartmental networks. Finally, we discuss the still not fully exploited opportunities for robotics in circular economy and the future challenges in the theory and practice of the proposed circularity framework. | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 456,411 |
2109.01491 | A mixed method for 3D nonlinear elasticity using finite element exterior
calculus | This article discusses a mixed FE technique for 3D nonlinear elasticity using a Hu-Washizu (HW) type variational principle. Here, the deformed configuration and sections from its cotangent bundle are taken as additional input arguments. The critical points of the HW functional enforce compatibility of these sections with the configuration, in addition to mechanical equilibrium and constitutive relations. The present FE approximation distinguishes a vector from a 1-form, a feature not commonly found in FE approximations. This point of view permits us to construct finite elements with vastly superior performance. Discrete approximations for the differential forms appearing in the variational principle are constructed with ideas borrowed from finite element exterior calculus. The discrete equations describing mechanical equilibrium, compatibility and the constitutive rule are obtained by extremizing the discrete functional with respect to appropriate DoF, and are then solved using Newton's method. This mixed FE technique is then applied to benchmark problems wherein conventional displacement-based approximations encounter locking and checkerboarding. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 253,449 |
2307.00919 | Why do CNNs excel at feature extraction? A mathematical explanation | Over the past decade, deep learning has revolutionized the field of computer vision, with convolutional neural network models proving to be very effective for image classification benchmarks. However, a fundamental theoretical question remains unanswered: why can they solve discrete image classification tasks that involve feature extraction? We address this question in this paper by introducing a novel mathematical model for image classification, based on feature extraction, that can be used to generate images resembling real-world datasets. We show that convolutional neural network classifiers can solve these image classification tasks with zero error. In our proof, we construct piecewise linear functions that detect the presence of features, and show that they can be realized by a convolutional network. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 377,176 |
2107.09169 | Exploring the Non-Overlapping Visibility Regions in XL-MIMO Random
Access Protocol | The recent extra-large scale massive multiple-input multiple-output (XL-MIMO) systems are seen as a promising technology for providing very high data rates in increased user-density scenarios. Spatial non-stationarities and visibility regions (VRs) appear across the XL-MIMO array since its large dimension is of the same order as the distances to the user-equipments (UEs). Due to the increased density of UEs in typical applications of XL-MIMO systems and the scarcity of pilots, the design of random access (RA) protocols and scheduling algorithms become challenging. In this paper, we propose a joint RA and scheduling protocol, namely non-overlapping VR XL- MIMO (NOVR-XL) RA protocol, which takes advantage of the different VRs of the UEs for improving RA performance, besides seeking UEs with non-overlapping VRs to be scheduled in the same payload data pilot resource. Our results reveal that the proposed scheme achieves significant gains in terms of sum rate compared with traditional RA schemes, as well as reducing access latency and improving connectivity performance as a whole. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | 246,951 |
2406.00029 | Clustered Retrieved Augmented Generation (CRAG) | Providing external knowledge to Large Language Models (LLMs) is a key point for using these models in real-world applications for several reasons, such as incorporating up-to-date content in a real-time manner, providing access to domain-specific knowledge, and contributing to hallucination prevention. The vector database-based Retrieval Augmented Generation (RAG) approach has been widely adopted to this end. Thus, any part of external knowledge can be retrieved and provided to some LLM as the input context. Despite the RAG approach's success, it still might be infeasible for some applications, because the retrieved context can demand a longer context window than the size supported by the LLM. Even when the retrieved context fits into the context window size, the number of tokens might be significant and, consequently, impact costs and processing time, becoming impractical for most applications. To address these issues, we propose CRAG, a novel approach able to effectively reduce the number of prompting tokens without degrading the quality of the response generated compared to a solution using RAG. Through our experiments, we show that CRAG can reduce the number of tokens by at least 46\%, achieving more than 90\% in some cases, compared to RAG. Moreover, the number of tokens with CRAG does not increase considerably when the number of reviews analyzed is higher, unlike RAG, where the number of tokens is almost 9x higher when there are 75 reviews compared to 4 reviews. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 459,667 |
2112.07225 | Margin Calibration for Long-Tailed Visual Recognition | The long-tailed class distribution in visual recognition tasks poses great challenges for neural networks on how to handle the biased predictions between head and tail classes, i.e., the model tends to classify tail classes as head classes. While existing research focused on data resampling and loss function engineering, in this paper, we take a different perspective: the classification margins. We study the relationship between the margins and logits (classification scores) and empirically observe the biased margins and the biased logits are positively correlated. We propose MARC, a simple yet effective MARgin Calibration function to dynamically calibrate the biased margins for unbiased logits. We validate MARC through extensive experiments on common long-tailed benchmarks including CIFAR-LT, ImageNet-LT, Places-LT, and iNaturalist-LT. Experimental results demonstrate that our MARC achieves favorable results on these benchmarks. In addition, MARC is extremely easy to implement with just three lines of code. We hope this simple method will motivate people to rethink the biased margins and biased logits in long-tailed visual recognition. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 271,416 |
2204.08115 | HFT-ONLSTM: Hierarchical and Fine-Tuning Multi-label Text Classification | Many important classification problems in the real-world consist of a large number of closely related categories in a hierarchical structure or taxonomy. Hierarchical multi-label text classification (HMTC) with higher accuracy over large sets of closely related categories organized in a hierarchy or taxonomy has become a challenging problem. In this paper, we present a hierarchical and fine-tuning approach based on the Ordered Neural LSTM neural network, abbreviated as HFT-ONLSTM, for more accurate level-by-level HMTC. First, we present a novel approach to learning the joint embeddings based on parent category labels and textual data for accurately capturing the joint features of both category labels and texts. Second, a fine tuning technique is adopted for training parameters such that the text classification results in the upper level should contribute to the classification in the lower one. At last, the comprehensive analysis is made based on extensive experiments in comparison with the state-of-the-art hierarchical and flat multi-label text classification approaches over two benchmark datasets, and the experimental results show that our HFT-ONLSTM approach outperforms these approaches, in particular reducing computational costs while achieving superior performance. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 291,965 |
1709.06182 | A Survey of Machine Learning for Big Code and Naturalness | Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit code's abundance of patterns. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 81,048 |
1902.07528 | Adaptive scale-invariant online algorithms for learning linear models | We consider online learning with linear models, where the algorithm predicts on sequentially revealed instances (feature vectors), and is compared against the best linear function (comparator) in hindsight. Popular algorithms in this framework, such as Online Gradient Descent (OGD), have parameters (learning rates), which ideally should be tuned based on the scales of the features and the optimal comparator, but these quantities only become available at the end of the learning process. In this paper, we resolve the tuning problem by proposing online algorithms making predictions which are invariant under arbitrary rescaling of the features. The algorithms have no parameters to tune, do not require any prior knowledge on the scale of the instances or the comparator, and achieve regret bounds matching (up to a logarithmic factor) that of OGD with optimally tuned separate learning rates per dimension, while retaining comparable runtime performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,005 |
1402.0574 | Learning to Predict from Textual Data | Given a current news event, we tackle the problem of generating plausible predictions of future events it might cause. We present a new methodology for modeling and predicting such future news events using machine learning and data mining techniques. Our Pundit algorithm generalizes examples of causality pairs to infer a causality predictor. To obtain precisely labeled causality examples, we mine 150 years of news articles and apply semantic natural language modeling techniques to headlines containing certain predefined causality patterns. For generalization, the model uses a vast number of world knowledge ontologies. Empirical evaluation on real news articles shows that our Pundit algorithm performs as well as non-expert humans. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 30,588 |
1410.8027 | Towards a Visual Turing Challenge | As language and visual understanding by machines progresses rapidly, we are observing an increasing interest in holistic architectures that tightly interlink both modalities in a joint learning and inference process. This trend has allowed the community to progress towards more challenging and open tasks and refueled the hope at achieving the old AI dream of building machines that could pass a turing test in open domains. In order to steadily make progress towards this goal, we realize that quantifying performance becomes increasingly difficult. Therefore we ask how we can precisely define such challenges and how we can evaluate different algorithms on this open tasks? In this paper, we summarize and discuss such challenges as well as try to give answers where appropriate options are available in the literature. We exemplify some of the solutions on a recently presented dataset of question-answering task based on real-world indoor images that establishes a visual turing challenge. Finally, we argue despite the success of unique ground-truth annotation, we likely have to step away from carefully curated dataset and rather rely on 'social consensus' as the main driving force to create suitable benchmarks. Providing coverage in this inherently ambiguous output space is an emerging challenge that we face in order to make quantifiable progress in this area. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 37,125 |
1808.09916 | Autoencoders, Kernels, and Multilayer Perceptrons for Electron Micrograph Restoration and Compression | We present 14 autoencoders, 15 kernels and 14 multilayer perceptrons for electron micrograph restoration and compression. These have been trained for transmission electron microscopy (TEM), scanning transmission electron microscopy (STEM) and for both (TEM+STEM). TEM autoencoders have been trained for 1$\times$, 4$\times$, 16$\times$ and 64$\times$ compression, STEM autoencoders for 1$\times$, 4$\times$ and 16$\times$ compression and TEM+STEM autoencoders for 1$\times$, 2$\times$, 4$\times$, 8$\times$, 16$\times$, 32$\times$ and 64$\times$ compression. Kernels and multilayer perceptrons have been trained to approximate the denoising effect of the 4$\times$ compression autoencoders. Kernels for input sizes of 3, 5, 7, 11 and 15 have been fitted for TEM, STEM and TEM+STEM. TEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5 and 7 and with 2 hidden layers for input sizes of 5 and 7. STEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5 and 7. TEM+STEM multilayer perceptrons have been trained with 1 hidden layer for input sizes of 3, 5, 7 and 11 and with 2 hidden layers for input sizes of 3 and 7. Our code, example usage and pre-trained models are available at https://github.com/Jeffrey-Ede/Denoising-Kernels-MLPs-Autoencoders | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 106,293 |
2209.05213 | Learning Dense Visual Descriptors using Image Augmentations for Robot Manipulation Tasks | We propose a self-supervised training approach for learning view-invariant dense visual descriptors using image augmentations. Unlike existing works, which often require complex datasets, such as registered RGBD sequences, we train on an unordered set of RGB images. This allows for learning from a single camera view, e.g., in an existing robotic cell with a fix-mounted camera. We create synthetic views and dense pixel correspondences using data augmentations. We find our descriptors are competitive to the existing methods, despite the simpler data recording and setup requirements. We show that training on synthetic correspondences provides descriptor consistency across a broad range of camera views. We compare against training with geometric correspondence from multiple views and provide ablation studies. We also show a robotic bin-picking experiment using descriptors learned from a fix-mounted camera for defining grasp preferences. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 317,027 |
2306.05017 | Non-Intrusive Load Monitoring (NILM) using Deep Neural Networks: A Review | Demand-side management now encompasses more residential loads. To efficiently apply demand response strategies, it's essential to periodically observe the contribution of various domestic appliances to total energy consumption. Non-intrusive load monitoring (NILM), also known as load disaggregation, is a method for decomposing the total energy consumption profile into individual appliance load profiles within the household. It has multiple applications in demand-side management, energy consumption monitoring, and analysis. Various methods, including machine learning and deep learning, have been used to implement and improve NILM algorithms. This paper reviews some recent NILM methods based on deep learning and introduces the most accurate methods for residential loads. It summarizes public databases for NILM evaluation and compares methods using standard performance metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 372,015 |
1903.09215 | Calibrated Top-1 Uncertainty estimates for classification by score based models | While the accuracy of modern deep learning models has significantly improved in recent years, the ability of these models to generate uncertainty estimates has not progressed to the same degree. Uncertainty methods are designed to provide an estimate of class probabilities when predicting class assignment. While there are a number of proposed methods for estimating uncertainty, they all suffer from a lack of calibration: predicted probabilities can be off from empirical ones by a few percent or more. By restricting the scope of our predictions to only the probability of Top-1 error, we can decrease the calibration error of existing methods to less than one percent. As a result, the scores of the methods also improve significantly over benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,999 |
1510.06799 | Robust Preamble Design for Synchronization, Signaling Transmission and Channel Estimation | The European second generation digital video broadcasting standard (DVB-T2) introduces a P1 symbol. This P1 symbol facilitates the coarse synchronization and carries 7-bit transmission parameter signaling (TPS), including the fast Fourier transform size, single-input/single-output and multiple-input/single-output transmission modes, etc. However, this P1 symbol suffers from obvious performance loss over fading channels. In this paper, an improved preamble scheme is proposed, where a pair of optimal m sequences are inserted into the frequency domain. One sequence is used for carrier frequency offset (CFO) estimation, and the other carries TPS to inform the receiver about the transmission configuration parameters. Compared with the conventional preamble scheme, the proposed preamble improves CFO estimation performance and the signaling capacity. Meanwhile, without additional overhead, the proposed scheme exploits more active pilots than the conventional schemes. In this way, it can facilitate the channel estimation, improve the frame synchronization accuracy as well as enhance its robustness to frequency selective fading channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,143 |