| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.15339 | SpreadsheetCoder: Formula Prediction from Semi-structured Context | Spreadsheet formula prediction has been an important program synthesis problem with many real-world applications. Previous works typically utilize input-output examples as the specification for spreadsheet formula synthesis, where each input-output pair simulates a separate row in the spreadsheet. However, this formulation does not fully capture the rich context in real-world spreadsheets. First, spreadsheet data entries are organized as tables, thus rows and columns are not necessarily independent from each other. In addition, many spreadsheet tables include headers, which provide high-level descriptions of the cell data. However, previous synthesis approaches do not consider headers as part of the specification. In this work, we present the first approach for synthesizing spreadsheet formulas from tabular context, which includes both headers and semi-structured tabular data. In particular, we propose SpreadsheetCoder, a BERT-based model architecture to represent the tabular context in both row-based and column-based formats. We train our model on a large dataset of spreadsheets, and demonstrate that SpreadsheetCoder achieves top-1 prediction accuracy of 42.51%, which is a considerable improvement over baselines that do not employ rich tabular context. Compared to the rule-based system, SpreadsheetCoder assists 82% more users in composing formulas on Google Sheets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 243,722 |
2406.02336 | Polynomial-Augmented Neural Networks (PANNs) with Weak Orthogonality Constraints for Enhanced Function and PDE Approximation | We present polynomial-augmented neural networks (PANNs), a novel machine learning architecture that combines deep neural networks (DNNs) with a polynomial approximant. PANNs combine the strengths of DNNs (flexibility and efficiency in higher-dimensional approximation) with those of polynomial approximation (rapid convergence rates for smooth functions). To aid in both stable training and enhanced accuracy over a variety of problems, we present (1) a family of orthogonality constraints that impose mutual orthogonality between the polynomial and the DNN within a PANN; (2) a simple basis pruning approach to combat the curse of dimensionality introduced by the polynomial component; and (3) an adaptation of a polynomial preconditioning strategy to both DNNs and polynomials. We test the resulting architecture for its polynomial reproduction properties, ability to approximate both smooth functions and functions of limited smoothness, and as a method for the solution of partial differential equations (PDEs). Through these experiments, we demonstrate that PANNs offer superior approximation properties to DNNs for both regression and the numerical solution of PDEs, while also offering enhanced accuracy over both polynomial and DNN-based regression (each) when regressing functions with limited smoothness. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 460,725 |
2406.06465 | AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction | Text-guided video prediction (TVP) involves predicting the motion of future frames from the initial frame according to an instruction, which has wide applications in virtual reality, robotics, and content creation. Previous TVP methods make significant breakthroughs by adapting Stable Diffusion for this task. However, they struggle with frame consistency and temporal stability primarily due to the limited scale of video datasets. We observe that pretrained Image2Video diffusion models possess good priors for video dynamics but they lack textual control. Hence, transferring Image2Video models to leverage their video dynamic priors while injecting instruction control to generate controllable videos is both a meaningful and challenging task. To achieve this, we introduce the Multi-Modal Large Language Model (MLLM) to predict future video states based on initial frames and text instructions. More specifically, we design a dual query transformer (DQFormer) architecture, which integrates the instructions and frames into the conditional embeddings for future frame prediction. Additionally, we develop Long-Short Term Temporal Adapters and Spatial Adapters that can quickly transfer general video diffusion models to specific scenarios with minimal training costs. Experimental results show that our method significantly outperforms state-of-the-art techniques on four datasets: Something Something V2, Epic Kitchen-100, Bridge Data, and UCF-101. Notably, AID achieves 91.2% and 55.5% FVD improvements on Bridge and SSv2 respectively, demonstrating its effectiveness in various domains. More examples can be found at our website https://chenhsing.github.io/AID. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | true | 462,590 |
2405.13300 | FAITH: Frequency-domain Attention In Two Horizons for Time Series Forecasting | Time Series Forecasting plays a crucial role in various fields such as industrial equipment maintenance, meteorology, energy consumption, traffic flow and financial investment. However, despite their considerable advantages over traditional statistical approaches, current deep learning-based predictive models often exhibit a significant deviation between their forecasting outcomes and the ground truth. This discrepancy is largely due to an insufficient emphasis on extracting the sequence's latent information, particularly its global information within the frequency domain and the relationship between different variables. To address this issue, we propose a novel model Frequency-domain Attention In Two Horizons, which decomposes time series into trend and seasonal components using a multi-scale sequence adaptive decomposition and fusion architecture, and processes them separately. FAITH utilizes Frequency Channel feature Extraction Module and Frequency Temporal feature Extraction Module to capture inter-channel relationships and temporal global information in the sequence, significantly improving its ability to handle long-term dependencies and complex patterns. Furthermore, FAITH achieves theoretically linear complexity by modifying the time-frequency domain transformation method, effectively reducing computational costs. Extensive experiments on 6 benchmarks for long-term forecasting and 3 benchmarks for short-term forecasting demonstrate that FAITH outperforms existing models in many fields, such as electricity, weather and traffic, proving its effectiveness and superiority both in long-term and short-term time series forecasting tasks. Our codes and data are available at https://github.com/LRQ577/FAITH. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 455,884 |
2003.02929 | Flexible Bayesian Nonlinear Model Configuration | Regression models are used in a wide range of applications providing a powerful scientific tool for researchers from different fields. Linear, or simple parametric, models are often not sufficient to describe complex relationships between input variables and a response. Such relationships can be better described through flexible approaches such as neural networks, but this results in less interpretable models and potential overfitting. Alternatively, specific parametric nonlinear functions can be used, but the specification of such functions is in general complicated. In this paper, we introduce a flexible approach for the construction and selection of highly flexible nonlinear parametric regression models. Nonlinear features are generated hierarchically, similarly to deep learning, but have additional flexibility on the possible types of features to be considered. This flexibility, combined with variable selection, allows us to find a small set of important features and thereby more interpretable models. Within the space of possible functions, a Bayesian approach, introducing priors for functions based on their complexity, is considered. A genetically modified mode jumping Markov chain Monte Carlo algorithm is adopted to perform Bayesian inference and estimate posterior probabilities for model averaging. In various applications, we illustrate how our approach is used to obtain meaningful nonlinear models. Additionally, we compare its predictive performance with several machine learning algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,071 |
2409.18108 | Language-Embedded Gaussian Splats (LEGS): Incrementally Building Room-Scale Representations with a Mobile Robot | Building semantic 3D maps is valuable for searching for objects of interest in offices, warehouses, stores, and homes. We present a mapping system that incrementally builds a Language-Embedded Gaussian Splat (LEGS): a detailed 3D scene representation that encodes both appearance and semantics in a unified representation. LEGS is trained online as a robot traverses its environment to enable localization of open-vocabulary object queries. We evaluate LEGS on 4 room-scale scenes where we query for objects in the scene to assess how LEGS can capture semantic meaning. We compare LEGS to LERF and find that while both systems have comparable object query success rates, LEGS trains over 3.5x faster than LERF. Results suggest that a multi-camera setup and incremental bundle adjustment can boost visual reconstruction quality in constrained robot trajectories, and suggest LEGS can localize open-vocabulary and long-tail object queries with up to 66% accuracy. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 492,108 |
1709.00268 | Algorithmically probable mutations reproduce aspects of evolution such as convergence rate, genetic memory, and modularity | Natural selection explains how life has evolved over millions of years from more primitive forms. The speed at which this happens, however, has sometimes defied formal explanations when based on random (uniformly distributed) mutations. Here we investigate the application of a simplicity bias based on a natural but algorithmic distribution of mutations (no recombination) in various examples, particularly binary matrices, in order to compare evolutionary convergence rates. Results both on synthetic and on small biological examples indicate an accelerated rate when mutations are not statistically uniform but \textit{algorithmically uniform}. We show that algorithmic distributions can evolve modularity and genetic memory by preservation of structures when they first occur, sometimes leading to an accelerated production of diversity but also population extinctions, possibly explaining naturally occurring phenomena such as diversity explosions (e.g. the Cambrian) and massive extinctions (e.g. the End Triassic) whose causes are currently debated. The natural approach introduced here appears to be a better approximation to biological evolution than models based exclusively upon random uniform mutations, and it also approaches a formal version of open-ended evolution based on previous formal results. These results validate some suggestions in the direction that computation may be an equally important driver of evolution. We also show that inducing the method on problems of optimization, such as genetic algorithms, has the potential to accelerate convergence of artificial evolutionary algorithms. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | 79,876 |
1903.11916 | Intelligent Processing in Vehicular Ad hoc Networks: a Survey | Intelligent processing techniques are increasingly attractive to researchers due to their ability to deal with key problems in Vehicular Ad hoc Networks (VANETs). However, several problems in applying intelligent processing technologies in VANETs remain open. The existing applications are comprehensively reviewed and discussed, and classified into different categories in this paper. Their strategies, advantages/disadvantages, and performances are elaborated. By generalizing different tactics in various applications related to different scenarios of VANETs and evaluating their performances, several promising directions for future research have been suggested. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 125,607 |
2409.10141 | PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion | Detailed and photorealistic 3D human modeling is essential for various applications and has seen tremendous progress. However, full-body reconstruction from a monocular RGB image remains challenging due to the ill-posed nature of the problem and sophisticated clothing topology with self-occlusions. In this paper, we propose PSHuman, a novel framework that explicitly reconstructs human meshes utilizing priors from the multiview diffusion model. It is found that directly applying multiview diffusion on single-view human images leads to severe geometric distortions, especially on generated faces. To address this, we propose a cross-scale diffusion that models the joint probability distribution of global full-body shape and local facial characteristics, enabling detailed and identity-preserved novel-view generation without any geometric distortion. Moreover, to enhance cross-view body shape consistency of varied human poses, we condition the generative model on parametric models like SMPL-X, which provide body priors and prevent unnatural views inconsistent with human anatomy. Leveraging the generated multi-view normal and color images, we present SMPLX-initialized explicit human carving to recover realistic textured human meshes efficiently. Extensive experimental results and quantitative evaluations on CAPE and THuman2.1 datasets demonstrate PSHuman's superiority in geometry details, texture fidelity, and generalization capability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 488,633 |
2102.09462 | Equivariant Spherical Deconvolution: Learning Sparse Orientation Distribution Functions from Spherical Data | We present a rotation-equivariant unsupervised learning framework for the sparse deconvolution of non-negative scalar fields defined on the unit sphere. Spherical signals with multiple peaks naturally arise in Diffusion MRI (dMRI), where each voxel consists of one or more signal sources corresponding to anisotropic tissue structure such as white matter. Due to spatial and spectral partial voluming, clinically-feasible dMRI struggles to resolve crossing-fiber white matter configurations, leading to extensive development in spherical deconvolution methodology to recover underlying fiber directions. However, these methods are typically linear and struggle with small crossing-angles and partial volume fraction estimation. In this work, we improve on current methodologies by nonlinearly estimating fiber structures via unsupervised spherical convolutional networks with guaranteed equivariance to spherical rotation. Experimentally, we first validate our proposition via extensive single and multi-shell synthetic benchmarks demonstrating competitive performance against common baselines. We then show improved downstream performance on fiber tractography measures on the Tractometer benchmark dataset. Finally, we show downstream improvements in terms of tractography and partial volume estimation on a multi-shell dataset of human subjects. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 220,782 |
1910.01863 | Template-free Data-to-Text Generation of Finnish Sports News | News articles such as sports game reports are often thought to closely follow the underlying game statistics, but in practice they contain a notable amount of background knowledge, interpretation, insight into the game, and quotes that are not present in the official statistics. This poses a challenge for automated data-to-text news generation with real-world news corpora as training data. We report on the development of a corpus of Finnish ice hockey news, edited to be suitable for training of end-to-end news generation methods, as well as demonstrate generation of text, which was judged by journalists to be relatively close to a viable product. The new dataset and system source code are available for research purposes at https://github.com/scoopmatic/finnish-hockey-news-generation-paper. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 148,063 |
1811.10158 | Reinforcement Learning for Uplift Modeling | Uplift modeling aims to directly model the incremental impact of a treatment on an individual response. In this work, we address the problem from a new angle and reformulate it as a Markov Decision Process (MDP). We conducted extensive experiments on both a synthetic dataset and real-world scenarios, and showed that our method can achieve significant improvement over previous methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 114,427 |
1902.01999 | Testing Markov Chains without Hitting | We study the problem of identity testing of Markov chains. In this setting, we are given access to a single trajectory from a Markov chain with unknown transition matrix $Q$ and the goal is to determine whether $Q = P$ for some known matrix $P$ or $\text{Dist}(P, Q) \geq \epsilon$ where $\text{Dist}$ is suitably defined. In recent work by Daskalakis, Dikkala and Gravin (2018), it was shown that it is possible to distinguish between the two cases provided the length of the observed trajectory is at least super-linear in the hitting time of $P$, which may be arbitrarily large. In this paper, we propose an algorithm that avoids this dependence on hitting time, thus enabling efficient testing of Markov chains even in cases where it is infeasible to observe every state in the chain. Our algorithm is based on combining classical ideas from approximation algorithms with techniques for the spectral analysis of Markov chains. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 120,787 |
2102.09728 | On a Variational Definition for the Jensen-Shannon Symmetrization of Distances based on the Information Radius | We generalize the Jensen-Shannon divergence by considering a variational definition with respect to a generic mean extending thereby the notion of Sibson's information radius. The variational definition applies to any arbitrary distance and yields another way to define a Jensen-Shannon symmetrization of distances. When the variational optimization is further constrained to belong to prescribed probability measure families, we get relative Jensen-Shannon divergences and symmetrizations which generalize the concept of information projections. Finally, we discuss applications of these variational Jensen-Shannon divergences and diversity indices to clustering and quantization tasks of probability measures including statistical mixtures. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 220,868 |
2402.10062 | Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection | For a machine learning model deployed in real-world scenarios, the ability of detecting out-of-distribution (OOD) samples is indispensable and challenging. Most existing OOD detection methods have focused on exploring advanced training skills or training-free tricks to prevent the model from yielding overconfident confidence scores for unknown samples. The training-based methods require expensive training cost and rely on OOD samples which are not always available, while most training-free methods cannot efficiently utilize the prior information from the training data. In this work, we propose an \textbf{O}ptimal \textbf{P}arameter and \textbf{N}euron \textbf{P}runing (\textbf{OPNP}) approach, which aims to identify and remove those parameters and neurons that lead to over-fitting. The main method is divided into two steps. In the first step, we evaluate the sensitivity of the model parameters and neurons by averaging gradients over all training samples. In the second step, the parameters and neurons with exceptionally large or close to zero sensitivities are removed for prediction. Our proposal is training-free, compatible with other post-hoc methods, and exploits the information from all training data. Extensive experiments are performed on multiple OOD detection tasks and model architectures, showing that our proposed OPNP consistently outperforms the existing methods by a large margin. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,791 |
2006.11395 | Deep Learning Based Single Sample Per Person Face Recognition: A Survey | Face recognition has long been an active research area in the field of artificial intelligence, particularly since the rise of deep learning in recent years. In some practical situations, each identity has only a single sample available for training. Face recognition under this situation is referred to as single sample face recognition and poses significant challenges to the effective training of deep models. Therefore, in recent years, researchers have attempted to unleash more potential of deep learning and improve the model recognition performance in the single sample situation. While several comprehensive surveys have been conducted on traditional single sample face recognition approaches, emerging deep learning based methods are rarely involved in these reviews. Accordingly, we focus on the deep learning-based methods in this paper, classifying them into virtual sample methods and generic learning methods. In the former category, virtual images or virtual features are generated to benefit the training of the deep model. In the latter one, additional multi-sample generic sets are used. There are three types of generic learning methods: combining traditional methods and deep features, improving the loss function, and improving network structure, all of which are covered in our analysis. Moreover, we review face datasets that have been commonly used for evaluating single sample face recognition models and go on to compare the results of different types of models. Additionally, we discuss problems with existing single sample face recognition methods, including identity information preservation in virtual sample methods and domain adaptation in generic learning methods. Furthermore, we regard developing unsupervised methods as a promising future direction, and point out the semantic gap as an important issue that needs to be further considered. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 183,208 |
2207.13345 | Traffic Sign Detection With Event Cameras and DCNN | In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions -- parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frame, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by an accuracy of 72.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 310,279 |
1709.06669 | A textual transform of multivariate time-series for prognostics | Prognostics or early detection of incipient faults is an important industrial challenge for condition-based and preventive maintenance. Physics-based approaches to modeling fault progression are infeasible due to multiple interacting components, uncontrolled environmental factors and observability constraints. Moreover, such approaches to prognostics do not generalize to new domains. Consequently, domain-agnostic data-driven machine learning approaches to prognostics are desirable. Damage progression is a path-dependent process and explicitly modeling the temporal patterns is critical for accurate estimation of both the current damage state and its progression leading to total failure. In this paper, we present a novel data-driven approach to prognostics that employs a novel textual representation of multivariate temporal sensor observations for predicting the future health state of the monitored equipment early in its life. This representation enables us to utilize well-understood concepts from text-mining for modeling, prediction and understanding distress patterns in a domain agnostic way. The approach has been deployed and successfully tested on large scale multivariate time-series data from commercial aircraft engines. We report experiments on well-known publicly available benchmark datasets and simulation datasets. The proposed approach is shown to be superior in terms of prediction accuracy, lead time to prediction and interpretability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 81,147 |
1903.08858 | Classification of EEG-Based Brain Connectivity Networks in Schizophrenia Using a Multi-Domain Connectome Convolutional Neural Network | We exploit altered patterns in brain functional connectivity as features for automatic discriminative analysis of neuropsychiatric patients. Deep learning methods have been introduced to functional network classification only very recently for fMRI, and the proposed architectures essentially focused on a single type of connectivity measure. We propose a deep convolutional neural network (CNN) framework for classification of electroencephalogram (EEG)-derived brain connectome in schizophrenia (SZ). To capture complementary aspects of disrupted connectivity in SZ, we explore combination of various connectivity features consisting of time and frequency-domain metrics of effective connectivity based on vector autoregressive model and partial directed coherence, and complex network measures of network topology. We design a novel multi-domain connectome CNN (MDC-CNN) based on a parallel ensemble of 1D and 2D CNNs to integrate the features from various domains and dimensions using different fusion strategies. Hierarchical latent representations learned by the multiple convolutional layers from EEG connectivity reveal apparent group differences between SZ and healthy controls (HC). Results on a large resting-state EEG dataset show that the proposed CNNs significantly outperform traditional support vector machine classifiers. The MDC-CNN with combined connectivity features further improves performance over single-domain CNNs using individual features, achieving remarkable accuracy of $93.06\%$ with a decision-level fusion. The proposed MDC-CNN by integrating information from diverse brain connectivity descriptors is able to accurately discriminate SZ from HC. The new framework is potentially useful for developing diagnostic tools for SZ and other disorders. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 124,926 |
2405.13555 | A Perspective Analysis of Handwritten Signature Technology | Handwritten signatures are biometric traits at the center of debate in the scientific community. Over the last 40 years, the interest in signature studies has grown steadily, having as its main reference the application of automatic signature verification, as previously published reviews in 1989, 2000, and 2008 bear witness. Ever since, and over the last 10 years, the application of handwritten signature technology has strongly evolved, and much research has focused on the possibility of applying systems based on handwritten signature analysis and processing to a multitude of new fields. After several years of haphazard growth of this research area, it is time to assess its current developments for their applicability in order to draw a structured way forward. This perspective reports a systematic review of the last 10 years of the literature on handwritten signatures with respect to the new scenario, focusing on the most promising domains of research and trying to elicit possible future research directions in this subject. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,987 |
2008.11337 | Comparison of Centralized and Decentralized Approaches in Cooperative Coverage Problems with Energy-Constrained Agents | A multi-agent coverage problem is considered with energy-constrained agents. The objective of this paper is to compare the coverage performance between centralized and decentralized approaches. To this end, a near-optimal centralized coverage control method is developed under energy depletion and repletion constraints. The optimal coverage formation corresponds to the locations of agents where the coverage performance is maximized. The optimal charging formation corresponds to the locations of agents with one agent fixed at the charging station and the remaining agents maximizing the coverage performance. We control the behavior of this cooperative multi-agent system by switching between the optimal coverage formation and the optimal charging formation. Finally, the optimal dwell times at coverage locations, charging time, and agent trajectories are determined so as to maximize coverage over a given time interval. In particular, our controller guarantees that at any time there is at most one agent leaving the team for energy repletion. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 193,242 |
1910.02235 | Cascaded Volumetric Convolutional Network for Kidney Tumor Segmentation from CT volumes | Automated segmentation of kidney and tumor from 3D CT scans is necessary for the diagnosis, monitoring, and treatment planning of the disease. In this paper, we describe a two-stage framework for kidney and tumor segmentation based on 3D fully convolutional network (FCN). The first stage preliminarily locates the kidney and cuts off the irrelevant background to reduce class imbalance and computation cost. Then the second stage precisely segments the kidney and tumor on the cropped patch. The proposed method ranks 4th out of 105 competing teams in the MICCAI 2019 KiTS Challenge with a Composite Dice of 90.24%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 148,182
2111.01231 | Switch Point biased Self-Training: Re-purposing Pretrained Models for Code-Switching | Code-switching (CS), a ubiquitous phenomenon due to the ease of communication it offers in multilingual communities still remains an understudied problem in language processing. The primary reasons behind this are: (1) minimal efforts in leveraging large pretrained multilingual models, and (2) the lack of annotated data. The distinguishing case of low performance of multilingual models in CS is the intra-sentence mixing of languages leading to switch points. We first benchmark two sequence labeling tasks -- POS and NER on 4 different language pairs with a suite of pretrained models to identify the problems and select the best performing model, char-BERT, among them (addressing (1)). We then propose a self training method to repurpose the existing pretrained models using a switch-point bias by leveraging unannotated data (addressing (2)). We finally demonstrate that our approach performs well on both tasks by reducing the gap between the switch point performance while retaining the overall performance on two distinct language pairs in both the tasks. Our code is available here: https://github.com/PC09/EMNLP2021-Switch-Point-biased-Self-Training. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 264,491
2303.02484 | Multi-Symmetry Ensembles: Improving Diversity and Generalization via Opposing Symmetries | Deep ensembles (DE) have been successful in improving model performance by learning diverse members via the stochasticity of random initialization. While recent works have attempted to promote further diversity in DE via hyperparameters or regularizing loss functions, these methods primarily still rely on a stochastic approach to explore the hypothesis space. In this work, we present Multi-Symmetry Ensembles (MSE), a framework for constructing diverse ensembles by capturing the multiplicity of hypotheses along symmetry axes, which explore the hypothesis space beyond stochastic perturbations of model weights and hyperparameters. We leverage recent advances in contrastive representation learning to create models that separately capture opposing hypotheses of invariant and equivariant functional classes and present a simple ensembling approach to efficiently combine appropriate hypotheses for a given task. We show that MSE effectively captures the multiplicity of conflicting hypotheses that is often required in large, diverse datasets like ImageNet. As a result of their inherent diversity, MSE improves classification performance, uncertainty quantification, and generalization across a series of transfer tasks. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 349,379
2402.00620 | Actor Identification in Discourse: A Challenge for LLMs? | The identification of political actors who put forward claims in public debate is a crucial step in the construction of discourse networks, which are helpful to analyze societal debates. Actor identification is, however, rather challenging: Often, the locally mentioned speaker of a claim is only a pronoun ("He proposed that [claim]"), so recovering the canonical actor name requires discourse understanding. We compare a traditional pipeline of dedicated NLP components (similar to those applied to the related task of coreference) with a LLM, which appears a good match for this generation task. Evaluating on a corpus of German actors in newspaper reports, we find surprisingly that the LLM performs worse. Further analysis reveals that the LLM is very good at identifying the right reference, but struggles to generate the correct canonical form. This points to an underlying issue in LLMs with controlling generated output. Indeed, a hybrid model combining the LLM with a classifier to normalize its output substantially outperforms both initial models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 425,658 |
2211.10418 | Sample-efficient Quantum Born Machine through Coding Rate Reduction | The quantum circuit Born machine (QCBM) is a quantum physics inspired implicit generative model naturally suitable for learning binary images, with a potential advantage of modeling discrete distributions that are hard to simulate classically. As data samples are generated quantum-mechanically, QCBMs encompass a unique optimization landscape. However, pioneering works on QCBMs do not consider the practical scenario where only small batch sizes are allowed during training. QCBMs trained with a statistical two-sample test objective in the image space require large amounts of projective measurements to approximate the model distribution well, impractical for large-scale quantum systems due to the exponential scaling of the probability space. QCBMs trained adversarially against a deep neural network discriminator are proof-of-concept models that face mode collapse. In this work we investigate practical learning of QCBMs. We use the information-theoretic \textit{Maximal Coding Rate Reduction} (MCR$^2$) metric as a second moment matching tool and study its effect on mode collapse in QCBMs. We compute the sampling based gradient of MCR$^2$ with respect to quantum circuit parameters with or without an explicit feature mapping. We experimentally show that matching up to the second moment alone is not sufficient for training the quantum generator, but when combined with the class probability estimation loss, MCR$^2$ is able to resist mode collapse. In addition, we show that adversarially trained neural network kernel for infinite moment matching is also effective against mode collapse. On the Bars and Stripes dataset, our proposed techniques alleviate mode collapse to a larger degree than previous QCBM training schemes, moving one step closer towards practicality and scalability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 331,301
2412.04735 | A dynamical measure of algorithmically infused visibility | This work focuses on the nature of visibility in societies where the behaviours of humans and algorithms influence each other - termed algorithmically infused societies. We propose a quantitative measure of visibility, with implications and applications to an array of disciplines including communication studies, political science, marketing, technology design, and social media analytics. The measure captures the basic characteristics of the visibility of a given topic, in algorithm/AI-mediated communication/social media settings. Topics, when trending, are ranked against each other, and the proposed measure combines the following two attributes of a topic: (i) the amount of time a topic spends at different ranks, and (ii) the different ranks the topic attains. The proposed measure incorporates a tunable parameter, termed the discrimination level, whose value determines the relative weights of the two attributes that contribute to visibility. Analysis of a large-scale, real-time dataset of trending topics, from one of the largest social media platforms, demonstrates that the proposed measure can explain a large share of the variability of the accumulated views of a topic. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 514,539 |
2203.12376 | A Fast Diagnostic to Inform Screening of Discarded or Retired Batteries | With the increased pervasiveness of Lithium-ion batteries, there is growing concern for the amount of retired batteries that will be entering the waste stream. Although these batteries no longer meet the demands of their first application, many still have a significant portion of their initial capacity remaining for use in secondary applications. Yet, direct repurposing is generally not possible and each cell in a battery must be evaluated, increasing the cost of the repurposed packs due to the time intensive screening process. In this paper, a rapid assessment of the internal resistance of a cell is proposed. First, this method of measuring the resistance is completed on cells from twelve retired battery packs and one fresh pack using a hybrid pulse power characterization (HPPC) test as a benchmark for the analysis. Results from these tests show relatively constant resistance measurements across mid to high terminal voltages, allowing this metric to be independent of state of charge (SOC). Then, the relation between internal resistance and capacity across the various packs is discussed. Initial experimental results from this study show a correlation between internal resistance and capacity which can be approximated with a linear fit, suggesting internal resistance measurements taken above a threshold cell terminal voltage may be a suitable initial screening metric for the capacity of retired cells without knowledge of the SOC. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 287,249 |
2102.06282 | A reproduction of Apple's bi-directional LSTM models for language identification in short strings | Language Identification is the task of identifying a document's language. For applications like automatic spell checker selection, language identification must use very short strings such as text message fragments. In this work, we reproduce a language identification architecture that Apple briefly sketched in a blog post. We confirm the bi-LSTM model's performance and find that it outperforms current open-source language identifiers. We further find that its language identification mistakes are due to confusion between related languages. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 219,700
2403.13914 | Database Dependencies and Formal Concept Analysis | This is an account of the characterization of database dependencies with Formal Concept Analysis. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 439,844 |
2107.09949 | Online structural kernel selection for mobile health | Motivated by the need for efficient and personalized learning in mobile health, we investigate the problem of online kernel selection for Gaussian Process regression in the multi-task setting. We propose a novel generative process on the kernel composition for this purpose. Our method demonstrates that trajectories of kernel evolutions can be transferred between users to improve learning and that the kernels themselves are meaningful for an mHealth prediction goal. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 247,174 |
2109.04581 | A Unified Model with Inertia Shaping for Highly Dynamic Jumps of Legged Robots | To achieve highly dynamic jumps of legged robots, it is essential to control the rotational dynamics of the robot. In this paper, we aim to improve the jumping performance by proposing a unified model for planning highly dynamic jumps that can approximately model the centroidal inertia. This model abstracts the robot as a single rigid body for the base and point masses for the legs. The model is called the Lump Leg Single Rigid Body Model (LL-SRBM) and can be used to plan motions for both bipedal and quadrupedal robots. By taking the effects of leg dynamics into account, LL-SRBM provides a computationally efficient way for the motion planner to change the centroidal inertia of the robot with various leg configurations. Concurrently, we propose a novel contact detection method by using the norm of the average spatial velocity. After the contact is detected, the controller is switched to force control to achieve a soft landing. Twisting jump and forward jump experiments on the bipedal robot SLIDER and quadrupedal robot ANYmal demonstrate the improved jump performance by actively changing the centroidal inertia. These experiments also show the generalization and the robustness of the integrated planning and control framework. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 254,451
2104.13133 | Breeding Diverse Packings for the Knapsack Problem by Means of Diversity-Tailored Evolutionary Algorithms | In practice, it is often desirable to provide the decision-maker with a rich set of diverse solutions of decent quality instead of just a single solution. In this paper we study evolutionary diversity optimization for the knapsack problem (KP). Our goal is to evolve a population of solutions that all have a profit of at least $(1-\varepsilon)\cdot OPT$, where OPT is the value of an optimal solution. Furthermore, they should differ in structure with respect to an entropy-based diversity measure. To this end we propose a simple $(\mu+1)$-EA with initial approximate solutions calculated by a well-known FPTAS for the KP. We investigate the effect of different standard mutation operators and introduce biased mutation and crossover which puts strong probability on flipping bits of low and/or high frequency within the population. An experimental study on different instances and settings shows that the proposed mutation operators in most cases perform slightly inferior in the long term, but show strong benefits if the number of function evaluations is severely limited. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 232,409
2312.00372 | Event-driven Real-time Retrieval in Web Search | Information retrieval in real-time search presents unique challenges distinct from those encountered in classical web search. These challenges are particularly pronounced due to the rapid change of user search intent, which is influenced by the occurrence and evolution of breaking news events, such as earthquakes, elections, and wars. Previous dense retrieval methods, which primarily focused on static semantic representation, lack the capacity to capture immediate search intent, leading to inferior performance in retrieving the most recent event-related documents in time-sensitive scenarios. To address this issue, this paper expands the query with event information that represents real-time search intent. The Event information is then integrated with the query through a cross-attention mechanism, resulting in a time-context query representation. We further enhance the model's capacity for event representation through multi-task training. Since publicly available datasets such as MS-MARCO do not contain any event information on the query side and have few time-sensitive queries, we design an automatic data collection and annotation pipeline to address this issue, which includes ModelZoo-based Coarse Annotation and LLM-driven Fine Annotation processes. In addition, we share the training tricks such as two-stage training and hard negative sampling. Finally, we conduct a set of offline experiments on a million-scale production dataset to evaluate our approach and deploy an A/B testing in a real online system to verify the performance. Extensive experimental results demonstrate that our proposed approach significantly outperforms existing state-of-the-art baseline methods. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 412,028 |
2302.12784 | STA: Self-controlled Text Augmentation for Improving Text Classifications | Despite recent advancements in Machine Learning, many tasks still involve working in low-data regimes which can make solving natural language problems difficult. Recently, a number of text augmentation techniques have emerged in the field of Natural Language Processing (NLP) which can enrich the training data with new examples, though they are not without their caveats. For instance, simple rule-based heuristic methods are effective, but lack variation in semantic content and syntactic structure with respect to the original text. On the other hand, more complex deep learning approaches can cause extreme shifts in the intrinsic meaning of the text and introduce unwanted noise into the training data. To more reliably control the quality of the augmented examples, we introduce a state-of-the-art approach for Self-Controlled Text Augmentation (STA). Our approach tightly controls the generation process by introducing a self-checking procedure to ensure that generated examples retain the semantic content of the original text. Experimental results on multiple benchmarking datasets demonstrate that STA substantially outperforms existing state-of-the-art techniques, whilst qualitative analysis reveals that the generated examples are both lexically diverse and semantically reliable. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 347,692
2405.15879 | Global Output-Feedback Extremum Seeking Control with Source Seeking Experiments | This paper discusses the design of an extremum seeking controller that relies on a monitoring function for a class of SISO uncertain nonlinear systems characterized by arbitrary and uncertain relative degree. Our demonstration illustrates the feasibility of achieving an arbitrarily small proximity to the desired optimal point through output feedback. The core concept involves integrating a monitoring function with a norm state observer for the unitary relative degree case and its expansion to arbitrary relative degrees by means of the employment of a time-scaling technique. Significantly, our proposed scheme attains the extremum of an unknown nonlinear mapping across the entire domain of initial conditions, ensuring global convergence and stability for the real-time optimization algorithm. Furthermore, we provide tuning rules to ensure convergence to the global maximum in the presence of local extrema. To validate the effectiveness of the proposed approach, we present a numerical example and apply it to a source-seeking problem involving a cart-track linear positioning servomechanism. Notably, the cart lacks the ability to sense its velocity or the source's position, but can detect the source of a light signal of unknown concentration field. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 457,141
1604.04789 | A Hierarchical Genetic Optimization of a Fuzzy Logic System for Flow Control in Micro Grids | Bio-inspired algorithms like Genetic Algorithms and Fuzzy Inference Systems (FIS) are nowadays widely adopted as hybrid techniques in commercial and industrial environments. In this paper we present an interesting application of the fuzzy-GA paradigm to Smart Grids. The main aim consists in performing decision making for power flow management tasks in the proposed microgrid model equipped by renewable sources and an energy storage system, taking into account the economical profit in energy trading with the main-grid. In particular, this study focuses on the application of a Hierarchical Genetic Algorithm (HGA) for tuning the Rule Base (RB) of a Fuzzy Inference System (FIS), trying to discover a minimal fuzzy rules set in a Fuzzy Logic Controller (FLC) adopted to perform decision making in the microgrid. The HGA rationale focuses on a particular encoding scheme, based on control genes and parametric genes applied to the optimization of the FIS parameters, allowing to perform a reduction in the structural complexity of the RB. This approach will be referred in the following as fuzzy-HGA. Results are compared with a simpler approach based on a classic fuzzy-GA scheme, where both FIS parameters and rule weights are tuned, while the number of fuzzy rules is fixed in advance. Experiments show how the fuzzy-HGA approach adopted for the synthesis of the proposed controller outperforms the classic fuzzy-GA scheme, increasing the accounting profit by 67\% in the considered energy trading problem yielding at the same time a simpler RB. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 54,708
2101.01336 | Joint Deep Reinforcement Learning and Unfolding: Beam Selection and Precoding for mmWave Multiuser MIMO with Lens Arrays | The millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention due to their simple hardware implementation and excellent performance. In this work, we investigate the joint design of beam selection and digital precoding matrices for mmWave MU-MIMO systems with DLA to maximize the sum-rate subject to the transmit power constraint and the constraints of the selection matrix structure. The investigated non-convex problem with discrete variables and coupled constraints is challenging to solve and an efficient framework of joint neural network (NN) design is proposed to tackle it. Specifically, the proposed framework consists of a deep reinforcement learning (DRL)-based NN and a deep-unfolding NN, which are employed to optimize the beam selection and digital precoding matrices, respectively. As for the DRL-based NN, we formulate the beam selection problem as a Markov decision process and a double deep Q-network algorithm is developed to solve it. The base station is considered to be an agent, where the state, action, and reward function are carefully designed. Regarding the design of the digital precoding matrix, we develop an iterative weighted minimum mean-square error algorithm induced deep-unfolding NN, which unfolds this algorithm into a layerwise structure with introduced trainable parameters. Simulation results verify that this jointly trained NN remarkably outperforms the existing iterative algorithms with reduced complexity and stronger robustness. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 214,340
2011.13726 | AdS/Deep-Learning made easy: simple examples | Deep learning has been widely and actively used in various research areas. Recently, in the gauge/gravity duality, a new deep learning technique so-called the AdS/Deep-Learning (DL) has been proposed [1, 2]. The goal of this paper is to describe the essence of the AdS/DL in the simplest possible setups, for those who want to apply it to the subject of emergent spacetime as a neural network. For prototypical examples, we choose simple classical mechanics problems. This method is a little different from standard deep learning techniques in the sense that not only do we have the right final answers but also obtain a physical understanding of learning parameters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 208,574 |
2303.10725 | SIESTA: Efficient Online Continual Learning with Sleep | In supervised continual learning, a deep neural network (DNN) is updated with an ever-growing data stream. Unlike the offline setting where data is shuffled, we cannot make any distributional assumptions about the data stream. Ideally, only one pass through the dataset is needed for computational efficiency. However, existing methods are inadequate and make many assumptions that cannot be made for real-world applications, while simultaneously failing to improve computational efficiency. In this paper, we propose SIESTA, a novel continual learning method based on a wake/sleep framework for training, which is well aligned to the needs of on-device learning. The major goal of SIESTA is to advance compute efficient continual learning so that DNNs can be updated efficiently using far less time and energy. The principal innovations of SIESTA are: 1) rapid online updates using a rehearsal-free, backpropagation-free, and data-driven network update rule during its wake phase, and 2) expedited memory consolidation using a compute-restricted rehearsal policy during its sleep phase. For memory efficiency, SIESTA adapts latent rehearsal using memory indexing from REMIND. Compared to REMIND and prior art, SIESTA is far more computationally efficient, enabling continual learning on ImageNet-1K in under 2 hours on a single GPU; moreover, in the augmentation-free setting it matches the performance of the offline learner, a milestone critical to driving adoption of continual learning in real-world applications. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 352,559
2203.16941 | A unified theory of learning | Recently machine learning using neural networks (NN) has been developed, and many new methods have been suggested. These methods are optimized for the type of input data and work very effectively, but they cannot be used with any kind of input data universally. On the other hand, the human brain is universal for any kind of problem, and we will be able to construct artificial general intelligence if we can mimic the system of how the human brain works. We consider how the human brain learns things uniformly, and find that the essence of learning is the compression of information. We suggest a toy NN model which mimics the system of the human brain, and we show that the NN can compress the input information without ad hoc treatment, only by setting the loss function properly. The loss function is expressed as the sum of the self-information to remember and the loss of the information along with the compression, and its minimum corresponds to the self-information of the original data. To evaluate the self-information to remember, we provided the concept of memory. The memory expresses the compressed information, and the learning proceeds by referring to previous memories. There are many similarities between this NN and the human brain, and this NN is a realization of the free-energy principle which is considered to be a unified theory of the human brain. This work can be applied to any kind of data analysis and cognitive science. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | false | false | 288,988 |
1808.09270 | Models for Predicting Community-Specific Interest in News Articles | In this work, we ask two questions: 1. Can we predict the type of community interested in a news article using only features from the article content? and 2. How well do these models generalize over time? To answer these questions, we compute well-studied content-based features on over 60K news articles from 4 communities on reddit.com. We train and test models over three different time periods between 2015 and 2017 to demonstrate which features degrade in performance the most due to concept drift. Our models can classify news articles into communities with high accuracy, ranging from 0.81 ROC AUC to 1.0 ROC AUC. However, while we can predict the community-specific popularity of news articles with high accuracy, practitioners should approach these models carefully. Predictions are both community-pair dependent and feature group dependent. Moreover, these feature groups generalize over time differently, with some only degrading slightly over time, but others degrading greatly. Therefore, we recommend that community-interest predictions are done in a hierarchical structure, where multiple binary classifiers can be used to separate community pairs, rather than a traditional multi-class model. Second, these models should be retrained over time based on accuracy goals and the availability of training data. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 106,147 |
1109.1409 | A georeferenced Agent-Based Model to analyze the climate change impacts on the Andorra winter tourism | This study presents a georeferenced agent-based model to analyze the climate change impacts on the ski industry in Andorra and the effect of snowmaking as future adaptation strategy. The present study is the first attempt to analyze the ski industry in the Pyrenees region and will contribute to a better understanding of the vulnerability of Andorran ski resorts and the suitability of snowmaking as potential adaptation strategy to climate change. The resulting model can be used as a planning support tool to help local stakeholders understand the vulnerability and potential impacts of climate change. This model can be used in the decision-making process of designing and developing appropriate sustainable adaptation strategies to future climate variability. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 12,026
2303.18021 | A flatness-based saturated controller design for a quadcopter with experimental validation | Using the properties of differential flatness, a controllable system, such as a quadcopter model, may be transformed into a linear equivalent system via a coordinate change and an input mapping. This is a straightforward advantage for the quadcopter's controller design and its real-time implementation. However, one significant hindrance is that, while the dynamics become linear in the new coordinates (the flat output space), the input constraints become convoluted. This paper addresses an explicit pre-stabilization based control scheme which handles the input constraints for the quadcopter in the flat output space with a saturation component. The system's stability is shown to hold by Lyapunov-stability arguments. Moreover, the practical viability of the proposed method is validated both in simulation and experiments over a nano-drone platform. Hence, the flatness-based saturated controller not only ensures stability and constraints satisfaction, but also requires very low computational effort, allowing for embedded implementations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 355,439
2209.05738 | RTAW: An Attention Inspired Reinforcement Learning Method for Multi-Robot Task Allocation in Warehouse Environments | We present a novel reinforcement learning based algorithm for multi-robot task allocation problem in warehouse environments. We formulate it as a Markov Decision Process and solve via a novel deep multi-agent reinforcement learning method (called RTAW) with attention inspired policy architecture. Hence, our proposed policy network uses global embeddings that are independent of the number of robots/tasks. We utilize proximal policy optimization algorithm for training and use a carefully designed reward to obtain a converged policy. The converged policy ensures cooperation among different robots to minimize total travel delay (TTD) which ultimately improves the makespan for a sufficiently large task-list. In our extensive experiments, we compare the performance of our RTAW algorithm to state-of-the-art methods such as myopic pickup distance minimization (greedy) and regret based baselines on different navigation schemes. We show an improvement of up to 14% (25-1000 seconds) in TTD on scenarios with hundreds or thousands of tasks for different challenging warehouse layouts and task generation schemes. We also demonstrate the scalability of our approach by showing performance with up to $1000$ robots in simulations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 317,192
2112.04981 | PE-former: Pose Estimation Transformer | Vision transformer architectures have been demonstrated to work very effectively for image classification tasks. Efforts to solve more challenging vision tasks with transformers rely on convolutional backbones for feature extraction. In this paper we investigate the use of a pure transformer architecture (i.e., one with no CNN backbone) for the problem of 2D body pose estimation. We evaluate two ViT architectures on the COCO dataset. We demonstrate that using an encoder-decoder transformer architecture yields state of the art results on this estimation problem. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 270,706 |
2306.03647 | Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach
to Highly-Accurate Representation of Undirected Weighted Networks | An Undirected Weighted Network (UWN) is commonly found in big data-related applications. Note that such a network's information connected with its nodes and edges can be expressed as a Symmetric, High-Dimensional and Incomplete (SHDI) matrix. However, existing models fail in either modeling its intrinsic symmetry or low-data density, resulting in low model scalability or representation learning ability. For addressing this issue, a Proximal Symmetric Nonnegative Latent-factor-analysis (PSNL) model is proposed. It incorporates a proximal term into a symmetry-aware and data density-oriented objective function for high representation accuracy. Then an adaptive Alternating Direction Method of Multipliers (ADMM)-based learning scheme is implemented through a Tree-structured Parzen Estimator (TPE) method for high computational efficiency. Empirical studies on four UWNs demonstrate that PSNL achieves higher accuracy gain than state-of-the-art models, as well as highly competitive computational efficiency. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,427
1401.2101 | NoSQL Databases | In this document, I present the main notions of NoSQL databases and compare four selected products (Riak, MongoDB, Cassandra, Neo4J) according to their capabilities with respect to consistency, availability, and partition tolerance, as well as performance. I also propose a few criteria for selecting the right tool for the right situation. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 29,714 |
2209.13094 | Efficient Image Denoising by Low-Rank Singular Vector Approximations of
Geodesics' Gramian Matrix | With the advent of sophisticated cameras, the urge to capture high-quality images has grown enormous. However, the noise contamination of the images results in substandard expectations among the people; thus, image denoising is an essential pre-processing step. While the algebraic image processing frameworks are sometimes inefficient for this denoising task as they may require processing of matrices of order equivalent to some power of the order of the original image, the neural network image processing frameworks are sometimes not robust as they require a lot of similar training samples. Thus, here we present a manifold-based noise filtering method that mainly exploits a few prominent singular vectors of the geodesics' Gramian matrix. Especially, the framework partitions an image, say that of size $n \times n$, into $n^2$ overlapping patches of known size such that one patch is centered at each pixel. Then, the prominent singular vectors, of the Gramian matrix of size $n^2 \times n^2$ of the geodesic distances computed over the patch space, are utilized to denoise the image. Here, the prominent singular vectors are revealed by efficient, but diverse, approximation techniques, rather than explicitly computing them using frameworks like Singular Value Decomposition (SVD) which encounters $\mathcal{O}(n^6)$ operations. Finally, we compare both computational time and the noise filtration performance of the proposed denoising algorithm with and without singular vector approximation techniques. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 319,767 |
1907.04481 | Tails of Lipschitz Triangular Flows | We investigate the ability of popular flow based methods to capture tail-properties of a target density by studying the increasing triangular maps used in these flow methods acting on a tractable source density. We show that the density quantile functions of the source and target density provide a precise characterization of the slope of transformation required to capture tails in a target density. We further show that any Lipschitz-continuous transport map acting on a source density will result in a density with similar tail properties as the source, highlighting the trade-off between a complex source density and a sufficiently expressive transformation to capture desirable properties of a target density. Subsequently, we illustrate that flow models like Real-NVP, MAF, and Glow as implemented originally lack the ability to capture a distribution with non-Gaussian tails. We circumvent this problem by proposing tail-adaptive flows consisting of a source distribution that can be learned simultaneously with the triangular map to capture tail-properties of a target density. We perform several synthetic and real-world experiments to complement our theoretical findings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 138,117
2404.05997 | Concept-Attention Whitening for Interpretable Skin Lesion Diagnosis | The black-box nature of deep learning models has raised concerns about their interpretability for successful deployment in real-world clinical applications. To address the concerns, eXplainable Artificial Intelligence (XAI) aims to provide clear and understandable explanations of the decision-making process. In the medical domain, concepts such as attributes of lesions or abnormalities serve as key evidence for deriving diagnostic results. Existing concept-based models mainly depend on concepts that appear independently and require fine-grained concept annotations such as bounding boxes. However, a medical image usually contains multiple concepts, and the fine-grained concept annotations are difficult to acquire. In this paper, we aim to interpret representations in deep neural networks by aligning the axes of the latent space with known concepts of interest. We propose a novel Concept-Attention Whitening (CAW) framework for interpretable skin lesion diagnosis. CAW is comprised of a disease diagnosis branch and a concept alignment branch. In the former branch, we train a convolutional neural network (CNN) with an inserted CAW layer to perform skin lesion diagnosis. The CAW layer decorrelates features and aligns image features to conceptual meanings via an orthogonal matrix. In the latter branch, the orthogonal matrix is calculated under the guidance of the concept attention mask. We particularly introduce a weakly-supervised concept mask generator that only leverages coarse concept labels for filtering local regions that are relevant to certain concepts, improving the optimization of the orthogonal matrix. Extensive experiments on two public skin lesion diagnosis datasets demonstrated that CAW not only enhanced interpretability but also maintained a state-of-the-art diagnostic performance. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 445,292 |
2108.01246 | AcousticFusion: Fusing Sound Source Localization to Visual SLAM in
Dynamic Environments | Dynamic objects in the environment, such as people and other agents, lead to challenges for existing simultaneous localization and mapping (SLAM) approaches. To deal with dynamic environments, computer vision researchers usually apply some learning-based object detectors to remove these dynamic objects. However, these object detectors are computationally too expensive for mobile robot on-board processing. In practical applications, these objects output noisy sounds that can be effectively detected by on-board sound source localization. The directional information of the sound source object can be efficiently obtained by direction of sound arrival (DoA) estimation, but depth estimation is difficult. Therefore, in this paper, we propose a novel audio-visual fusion approach that fuses sound source direction into the RGB-D image and thus removes the effect of dynamic obstacles on the multi-robot SLAM system. Experimental results of multi-robot SLAM in different dynamic environments show that the proposed method uses very small computational resources to obtain very stable self-localization results. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 248,966 |
2406.07393 | Large Language Models are Limited in Out-of-Context Knowledge Reasoning | Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning. However, previous work challenges their out-of-context reasoning ability, i.e., the ability to infer information from their training data, instead of from the context or prompt. This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which is to combine multiple pieces of knowledge to infer new knowledge. We designed a synthetic dataset with seven representative OCKR tasks to systematically assess the OCKR capabilities of LLMs. Using this dataset, we evaluated several LLMs and discovered that their proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings. Moreover, training the model to reason with reasoning examples does not result in significant improvement, while training the model to perform explicit knowledge retrieval helps with retrieving attribute knowledge but not relation knowledge, indicating that the model's limited OCKR capabilities are due to difficulties in knowledge retrieval. Furthermore, we treat cross-lingual knowledge transfer as a distinct form of OCKR, and evaluate this ability. Our results show that the evaluated model also exhibits limited ability in transferring knowledge across languages. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 463,026
2210.10352 | Temporal Action Segmentation: An Analysis of Modern Techniques | Temporal action segmentation (TAS) in videos aims at densely identifying video frames in minutes-long videos with multiple action classes. As a long-range video understanding task, researchers have developed an extended collection of methods and examined their performance using various benchmarks. Despite the rapid growth of TAS techniques in recent years, no systematic survey has been conducted in these sectors. This survey analyzes and summarizes the most significant contributions and trends. In particular, we first examine the task definition, common benchmarks, types of supervision, and prevalent evaluation measures. In addition, we systematically investigate two essential techniques of this topic, i.e., frame representation and temporal modeling, which have been studied extensively in the literature. We then conduct a thorough review of existing TAS works categorized by their levels of supervision and conclude our survey by identifying and emphasizing several research gaps. In addition, we have curated a list of TAS resources, which is available at https://github.com/nus-cvml/awesome-temporal-action-segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,888 |
2502.13722 | Deep Learning for VWAP Execution in Crypto Markets: Beyond the Volume
Curve | Volume-Weighted Average Price (VWAP) is arguably the most prevalent benchmark for trade execution as it provides an unbiased standard for comparing performance across market participants. However, achieving VWAP is inherently challenging due to its dependence on two dynamic factors, volumes and prices. Traditional approaches typically focus on forecasting the market's volume curve, an assumption that may hold true under steady conditions but becomes suboptimal in more volatile environments or markets such as cryptocurrency where prediction error margins are higher. In this study, I propose a deep learning framework that directly optimizes the VWAP execution objective by bypassing the intermediate step of volume curve prediction. Leveraging automatic differentiation and custom loss functions, my method calibrates order allocation to minimize VWAP slippage, thereby fully addressing the complexities of the execution problem. My results demonstrate that this direct optimization approach consistently achieves lower VWAP slippage compared to conventional methods, even when utilizing a naive linear model presented in arXiv:2410.21448. They validate the observation that strategies optimized for VWAP performance tend to diverge from accurate volume curve predictions and thus underscore the advantage of directly modeling the execution objective. This research contributes a more efficient and robust framework for VWAP execution in volatile markets, illustrating the potential of deep learning in complex financial systems where direct objective optimization is crucial. Although my empirical analysis focuses on cryptocurrency markets, the underlying principles of the framework are readily applicable to other asset classes such as equities. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 535,485 |
2501.12274 | Making it to First: The Random Access Problem in DNA Storage | We study the Random Access Problem in DNA storage, which addresses the challenge of retrieving a specific information strand from a DNA-based storage system. Given that $k$ information strands, representing the data, are encoded into $n$ strands using a code, the goal under this paradigm is to identify and analyze codes that minimize the expected number of reads required to retrieve any of the $k$ information strands, while in each read one of the $n$ encoded strands is read uniformly at random. We fully solve the case when $k=2$, showing that the best possible code attains a random access expectation of $0.914 \cdot 2$. Moreover, we generalize a construction from \cite{GMZ24}, specific to $k=3$, for any value of $k$. Our construction uses $B_{k-1}$ sequences over $\mathbb{Z}_{q-1}$, which always exist over large finite fields. For $k=4$, we show that this generalized construction outperforms all previous constructions in terms of reducing the random access expectation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 526,238
2212.14421 | Timestomping Vulnerability of Age-Sensitive Gossip Networks | We consider gossip networks consisting of a source that maintains the current version of a file, $n$ nodes that use asynchronous gossip mechanisms to disseminate fresh information in the network, and an oblivious adversary who infects the packets at a target node through data timestamp manipulation, with the intent to replace circulation of fresh packets with outdated packets in the network. We demonstrate how network topology capacitates an adversary to influence age scaling in a network. We show that in a fully connected network, a single infected node increases the expected age from $O(\log n)$ to $O(n)$. Further, we show that the optimal behavior for an adversary is to reset the timestamps of all outgoing packets to the current time and of all incoming packets to an outdated time for the infected node; thereby preventing any fresh information to go into the infected node, and facilitating acceptance of stale information out of the infected node into other network nodes. Lastly for fully connected network, we show that if an infected node contacts only a single node instead of all nodes of the network, the system age can still be degraded to $O(n)$. These show that fully connected nature of a network can be both a benefit and a detriment for information freshness; full connectivity, while enabling fast dissemination of information, also enables fast dissipation of adversarial inputs. We then analyze the unidirectional ring network, the other end of the network connectivity spectrum, where we show that the adversarial effect on age scaling of a node is limited by its distance from the adversary, and the age scaling for a large fraction of the network continues to be $O(\sqrt{n})$, unchanged from the case with no adversary. We finally support our findings with simulations. 
| false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 338,608 |
1907.00452 | Detecting Spiky Corruption in Markov Decision Processes | Current reinforcement learning methods fail if the reward function is imperfect, i.e. if the agent observes reward different from what it actually receives. We study this problem within the formalism of Corrupt Reward Markov Decision Processes (CRMDPs). We show that if the reward corruption in a CRMDP is sufficiently "spiky", the environment is solvable. We fully characterize the regret bound of a Spiky CRMDP, and introduce an algorithm that is able to detect its corrupt states. We show that this algorithm can be used to learn the optimal policy with any common reinforcement learning algorithm. Finally, we investigate our algorithm in a pair of simple gridworld environments, finding that our algorithm can detect the corrupt states and learn the optimal policy despite the corruption. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 137,046 |
2201.10017 | Online Convex Optimization Using Coordinate Descent Algorithms | This paper considers the problem of online optimization where the objective function is time-varying. In particular, we extend coordinate descent type algorithms to the online case, where the objective function varies after a finite number of iterations of the algorithm. Instead of solving the problem exactly at each time step, we only apply a finite number of iterations at each time step. Commonly used notions of regret are used to measure the performance of the online algorithm. Moreover, coordinate descent algorithms with different updating rules are considered, including both deterministic and stochastic rules that are developed in the literature of classical offline optimization. A thorough regret analysis is given for each case. Finally, numerical simulations are provided to illustrate the theoretical results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 276,850 |
1806.00699 | Quantifying the dynamics of topical fluctuations in language | The availability of large diachronic corpora has provided the impetus for a growing body of quantitative research on language evolution and meaning change. The central quantities in this research are token frequencies of linguistic elements in texts, with changes in frequency taken to reflect the popularity or selective fitness of an element. However, corpus frequencies may change for a wide variety of reasons, including purely random sampling effects, or because corpora are composed of contemporary media and fiction texts within which the underlying topics ebb and flow with cultural and socio-political trends. In this work, we introduce a simple model for controlling for topical fluctuations in corpora - the topical-cultural advection model - and demonstrate how it provides a robust baseline of variability in word frequency changes over time. We validate the model on a diachronic corpus spanning two centuries, and a carefully-controlled artificial language change scenario, and then use it to correct for topical fluctuations in historical time series. Finally, we use the model to show that the emergence of new words typically corresponds with the rise of a trending topic. This suggests that some lexical innovations occur due to growing communicative need in a subspace of the lexicon, and that the topical-cultural advection model can be used to quantify this. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 99,372 |
1211.7326 | Repeated Root Constacyclic Codes of Length $mp^s$ over
$\mathbb{F}_{p^r}+u \mathbb{F}_{p^r}+...+ u^{e-1}\mathbb{F}_{p^r}$ | We give the structure of $\lambda$-constacyclic codes of length $p^sm$ over $R=\mathbb{F}_{p^r}+u \mathbb{F}_{p^r}+...+ u^{e-1}\mathbb{F}_{p^r}$ with $\lambda \in \F_{p^r}^*$. We also give the structure of $\lambda$-constacyclic codes of length $p^sm$ with $\lambda=\alpha_1+u\alpha_2+...+u^{e-1} \alpha_{e-1}$, where $\alpha_1,\alpha_2 \neq 0$ and study the self-duality of these codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,044 |
1910.08965 | Learning GANs and Ensembles Using Discrepancy | Generative adversarial networks (GANs) generate data based on minimizing a divergence between two distributions. The choice of that divergence is therefore critical. We argue that the divergence must take into account the hypothesis set and the loss function used in a subsequent learning task, where the data generated by a GAN serves for training. Taking that structural information into account is also important to derive generalization guarantees. Thus, we propose to use the discrepancy measure, which was originally introduced for the closely related problem of domain adaptation and which precisely takes into account the hypothesis set and the loss function. We show that discrepancy admits favorable properties for training GANs and prove explicit generalization guarantees. We present efficient algorithms using discrepancy for two tasks: training a GAN directly, namely DGAN, and mixing previously trained generative models, namely EDGAN. Our experiments on toy examples and several benchmark datasets show that DGAN is competitive with other GANs and that EDGAN outperforms existing GAN ensembles, such as AdaGAN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 150,033 |
2407.01299 | Preserving Full Degradation Details for Blind Image Super-Resolution | The performance of image super-resolution relies heavily on the accuracy of degradation information, especially under blind settings. Due to absence of true degradation models in real-world scenarios, previous methods learn distinct representations by distinguishing different degradations in a batch. However, the most significant degradation differences may provide shortcuts for the learning of representations such that subtle difference may be discarded. In this paper, we propose an alternative to learn degradation representations through reproducing degraded low-resolution (LR) images. By guiding the degrader to reconstruct input LR images, full degradation information can be encoded into the representations. In addition, we develop an energy distance loss to facilitate the learning of the degradation representations by introducing a bounded constraint. Experiments show that our representations can extract accurate and highly robust degradation information. Moreover, evaluations on both synthetic and real images demonstrate that our ReDSR achieves state-of-the-art performance for the blind SR tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 469,234 |
2401.16920 | Sparse Portfolio Selection via Topological Data Analysis based
Clustering | This paper uses topological data analysis (TDA) tools and introduces a data-driven clustering-based stock selection strategy tailored for sparse portfolio construction. Our asset selection strategy exploits the topological features of stock price movements to select a subset of topologically similar (different) assets for a sparse index tracking (Markowitz) portfolio. We introduce new distance measures, which serve as an input to the clustering algorithm, on the space of persistence diagrams and landscapes that consider the time component of a time series. We conduct an empirical analysis on the S\&P index from 2009 to 2022, including a study on the COVID-19 data to validate the robustness of our methodology. Our strategy to integrate TDA with the clustering algorithm significantly enhanced the performance of sparse portfolios across various performance measures in diverse market scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,031 |
2111.00979 | Parabola-Inscribed Poncelet Polygons Derived from the Bicentric Family | We study loci and properties of a Parabola-inscribed family of Poncelet polygons whose caustic is a focus-centered circle. This family is the polar image of a special case of the bicentric family with respect to its circumcircle. We describe closure conditions, curious loci, and new conserved quantities. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 264,418 |
1905.05891 | Crowd Density Estimation using Novel Feature Descriptor | Crowd density estimation is an important task for crowd monitoring. Many efforts have been made to automate the process of estimating crowd density from images and videos. Despite a series of efforts, it remains a challenging task. In this paper, we propose a new texture feature-based approach for the estimation of crowd density based on Completed Local Binary Pattern (CLBP). We first divide the image into blocks and then re-divide the blocks into cells. For each cell, we compute CLBP and then concatenate them to describe the texture of the corresponding block. We then train a multi-class Support Vector Machine (SVM) classifier, which classifies each block of the image into one of four categories, i.e. Very Low, Low, Medium, and High. We evaluate our technique on the PETS 2009 dataset, and from the experiments, we show that the proposed descriptor achieves 95% accuracy. We also compare against other state-of-the-art texture descriptors and show from the experimental results that our proposed method outperforms them. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 130,842
1111.1555 | A scheme to protect against multiple quantum erasures | We present a scheme able to protect k >= 3 qubits of information against the occurrence of multiple erasures, based on the code proposed by Yang et al. (2004 JETP Letters 79 236). In this scheme redundant blocks are used and we restrict to the case that each erasure must occur in distinct blocks. We explicitly characterize the encoding operation and the restoring operation required to implement this scheme. The operators used in these operations can be adjusted to construct different quantum erasure-correcting codes. A special feature of this scheme is that no measurement is required. To illustrate our scheme, we present an example in which five-qubits of information are protected against the occurrence of two erasures. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 12,938 |
2405.11601 | How to integrate cloud service, data analytic and machine learning
technique to reduce cyber risks associated with the modern cloud based
infrastructure | The combination of cloud technology, machine learning, and data visualization techniques allows hybrid enterprise networks to hold massive volumes of data and provide employees and customers easy access to these cloud data. These massive collections of complex data sets are facing security challenges. While cloud platforms are more vulnerable to security threats and traditional security technologies are unable to cope with the rapid data explosion in cloud platforms, machine learning powered security solutions and data visualization techniques are playing instrumental roles in detecting security threats, data breaches, and automatically finding software vulnerabilities. The purpose of this paper is to present some of the widely used cloud services, machine learning techniques and data visualization approaches and demonstrate how to integrate cloud service, data analytic and machine learning techniques that can be used to detect and reduce cyber risks associated with the modern cloud based infrastructure. In this paper I applied a supervised machine learning classifier to design a model based on the well-known UNSW-NB15 dataset to predict the network behavior metrics and demonstrated how data analytics techniques can be integrated to visualize network traffic. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 455,209
2110.06661 | A Primer on Near-Field Beamforming for Arrays and Reconfigurable
Intelligent Surfaces | Wireless communication systems have almost exclusively operated in the far-field of antennas and antenna arrays, which is conventionally characterized by having propagation distances beyond the Fraunhofer distance. This is natural since the Fraunhofer distance is normally only a few wavelengths. With the advent of active arrays and passive reconfigurable intelligent surfaces (RIS) that are physically large, it is plausible that the transmitter or receiver is located in between the Fraunhofer distance of the individual array/surface elements and the Fraunhofer distance of the entire array. An RIS then can be configured to reflect the incident waveform towards a point in the radiative near-field of the surface, resulting in a beam with finite depth, or as a conventional angular beam with infinity focus, which only results in amplification in the far-field. To understand when these different options are viable, an accurate characterization of the near-field behaviors is necessary. In this paper, we revisit the motivation and approximations behind the Fraunhofer distance and show that it is not the right metric for determining when near-field focusing is possible. We obtain the distance range where finite-depth beamforming is possible and the distance where the beamforming gain tapers off. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 260,705 |
2205.06226 | The Mechanism of Prediction Head in Non-contrastive Self-supervised
Learning | Recently the surprising discovery of the Bootstrap Your Own Latent (BYOL) method by Grill et al. shows the negative term in contrastive loss can be removed if we add the so-called prediction head to the network. This initiated the research of non-contrastive self-supervised learning. It is mysterious why even when there exist trivial collapsed global optimal solutions, neural networks trained by (stochastic) gradient descent can still learn competitive representations. This phenomenon is a typical example of implicit bias in deep learning and remains little understood. In this work, we present our empirical and theoretical discoveries on non-contrastive self-supervised learning. Empirically, we find that when the prediction head is initialized as an identity matrix with only its off-diagonal entries being trainable, the network can learn competitive representations even though the trivial optima still exist in the training objective. Theoretically, we present a framework to understand the behavior of the trainable, but identity-initialized prediction head. Under a simple setting, we characterized the substitution effect and acceleration effect of the prediction head. The substitution effect happens when learning the stronger features in some neurons can substitute for learning these features in other neurons through updating the prediction head. And the acceleration effect happens when the substituted features can accelerate the learning of other weaker features to prevent them from being ignored. These two effects enable the neural networks to learn all the features rather than focus only on learning the stronger features, which is likely the cause of the dimensional collapse phenomenon. To the best of our knowledge, this is also the first end-to-end optimization guarantee for non-contrastive methods using nonlinear neural networks with a trainable prediction head and normalization. 
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,177 |
2102.05983 | Tackling Virtual and Real Concept Drifts: An Adaptive Gaussian Mixture
Model | Real-world applications have been dealing with large amounts of data that arrive over time and generally present changes in their underlying joint probability distribution, i.e., concept drift. Concept drift can be subdivided into two types: virtual drift, which affects the unconditional probability distribution p(x), and real drift, which affects the conditional probability distribution p(y|x). Existing works focus on real drift. However, strategies to cope with real drift may not be best suited for dealing with virtual drift, since the real class boundaries remain unchanged. We provide the first in-depth analysis of the differences between the impact of virtual and real drifts on classifiers' suitability. We propose an approach to handle both drifts called On-line Gaussian Mixture Model With Noise Filter For Handling Virtual and Real Concept Drifts (OGMMF-VRD). Experiments with 7 synthetic and 3 real-world datasets show that OGMMF-VRD obtained the best results in terms of average accuracy, G-mean and runtime compared to existing approaches. Moreover, its accuracy over time suffered less performance degradation in the presence of drifts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,601 |
2007.09859 | Novel Approach to Use HU Moments with Image Processing Techniques for
Real Time Sign Language Communication | Sign language is the fundamental communication method among people who suffer from speech and hearing defects. The rest of the world doesn't have a clear idea of sign language. "Sign Language Communicator" (SLC) is designed to solve the language barrier between sign language users and the rest of the world. The main objective of this research is to provide a low-cost, affordable method of sign language interpretation. This system will also be very useful to sign language learners, as they can practice the sign language. During the research, available human-computer interaction techniques for posture recognition were tested and evaluated. A series of image processing techniques with Hu-moment classification was identified as the best approach. To improve the accuracy of the system, a new approach, height-to-width ratio filtration, was implemented along with Hu-moments. The system is able to recognize selected sign language signs with an accuracy of 84% without a controlled background, under small lighting adjustments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 188,088 |
1910.10461 | A Novel Generalized Artificial Neural Network for Mining Two-Class
Datasets | A novel general neural network (GNN) is proposed for two-class data mining in this study. In a GNN, each attribute in the dataset is treated as a node, with each pair of nodes being connected by an arc. The reliability of each arc, which is similar to a weight in an artificial neural network and must be solved using simplified swarm optimization (SSO), is constant. After the node reliability is set to the transformed value of the related attribute, the approximate reliability of each GNN instance is calculated based on the proposed intelligent Monte Carlo simulation (iMCS). This approximate GNN reliability is then compared with a given threshold to predict each instance. The proposed iMCS-SSO is used to repeat the procedure and train the GNN, such that the predicted class values match the actual class values as much as possible. To evaluate the classification performance of the proposed GNN, experiments were performed on five well-known benchmark datasets. The computational results compared favorably with those obtained using support vector machines. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 150,494 |
1804.02872 | Variational 3D-PIV with Sparse Descriptors | 3D Particle Imaging Velocimetry (3D-PIV) aim to recover the flow field in a volume of fluid, which has been seeded with tracer particles and observed from multiple camera viewpoints. The first step of 3D-PIV is to reconstruct the 3D locations of the tracer particles from synchronous views of the volume. We propose a new method for iterative particle reconstruction (IPR), in which the locations and intensities of all particles are inferred in one joint energy minimization. The energy function is designed to penalize deviations between the reconstructed 3D particles and the image evidence, while at the same time aiming for a sparse set of particles. We find that the new method, without any post-processing, achieves significantly cleaner particle volumes than a conventional, tomographic MART reconstruction, and can handle a wide range of particle densities. The second step of 3D-PIV is to then recover the dense motion field from two consecutive particle reconstructions. We propose a variational model, which makes it possible to directly include physical properties, such as incompressibility and viscosity, in the estimation of the motion field. To further exploit the sparse nature of the input data, we propose a novel, compact descriptor of the local particle layout. Hence, we avoid the memory-intensive storage of high-resolution intensity volumes. Our framework is generic and allows for a variety of different data costs (correlation measures) and regularizers. We quantitatively evaluate it with both the sum of squared differences (SSD) and the normalized cross-correlation (NCC), respectively with both a hard and a soft version of the incompressibility constraint. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,515 |
2303.03539 | A Study on Multirobot Quantile Estimation in Natural Environments | Quantiles of a natural phenomenon can provide scientists with an important understanding of different spreads of concentrations. When there are several available robots, it may be advantageous to pool resources in a collaborative way to improve performance. A multirobot team can be difficult to practically bring together and coordinate. To this end, we present a study across several axes of the impact of using multiple robots to estimate quantiles of a distribution of interest using an informative path planning formulation. We measure quantile estimation accuracy with increasing team size to understand what benefits result from a multirobot approach in a drone exploration task of analyzing the algae concentration in lakes. We additionally perform an analysis on several parameters, including the spread of robot initial positions, the planning budget, and inter-robot communication, and find that while using more robots generally results in lower estimation error, this benefit is achieved only under certain conditions. We present our findings in the context of real field robotic applications and discuss the implications of the results and interesting directions for future work. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | 349,760 |
2409.05601 | Longer is (Not Necessarily) Stronger: Punctuated Long-Sequence Training
for Enhanced Speech Recognition and Translation | This paper presents a new method for training sequence-to-sequence models for speech recognition and translation tasks. Instead of the traditional approach of training models on short segments containing only lowercase or partial punctuation and capitalization (PnC) sentences, we propose training on longer utterances that include complete sentences with proper punctuation and capitalization. We achieve this by using the FastConformer architecture, which allows training 1-billion-parameter models with sequences up to 60 seconds long with full attention. However, while training with PnC enhances the overall performance, we observed that accuracy plateaus when training on sequences longer than 40 seconds across various evaluation settings. Our proposed method significantly improves punctuation and capitalization accuracy, showing a 25% relative word error rate (WER) improvement on the Earnings-21 and Earnings-22 benchmarks. Additionally, training on longer audio segments increases the overall model accuracy across speech recognition and translation benchmarks. The model weights and training code are open-sourced through NVIDIA NeMo. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 486,834 |
2112.00006 | Towards algorithm-free physical equilibrium model of computing | Our computers today, from sophisticated servers to small smartphones, operate based on the same computing model, which requires running a sequence of discrete instructions, specified as an algorithm. This sequential computing paradigm has not yet led to a fast algorithm for an NP-complete problem despite numerous attempts over the past half a century. Unfortunately, even after the introduction of quantum mechanics to the world of computing, we still followed a similar sequential paradigm, which has not yet helped us obtain such an algorithm either. Here a completely different model of computing is proposed to replace the sequential paradigm of algorithms with inherent parallelism of physical processes. Using the proposed model, instead of writing algorithms to solve NP-complete problems, we construct physical systems whose equilibrium states correspond to the desired solutions and let them evolve to search for the solutions. The main requirements of the model are identified and quantum circuits are proposed for its potential implementation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 269,013 |
2403.02297 | Uncertainty-Aware Prediction and Application in Planning for Autonomous
Driving: Definitions, Methods, and Comparison | Autonomous driving systems face the formidable challenge of navigating intricate and dynamic environments with uncertainty. This study presents a unified prediction and planning framework that concurrently models short-term aleatoric uncertainty (SAU), long-term aleatoric uncertainty (LAU), and epistemic uncertainty (EU) to predict and establish a robust foundation for planning in dynamic contexts. The framework uses Gaussian mixture models and deep ensemble methods to concurrently capture and assess SAU, LAU, and EU, where traditional methods do not integrate these uncertainties simultaneously. Additionally, uncertainty-aware planning is introduced, considering various uncertainties. The study's contributions include comparisons of uncertainty estimation, risk modeling, and planning methods against existing approaches. The proposed methods were rigorously evaluated using the CommonRoad benchmark and settings with limited perception. These experiments illuminated the advantages and roles of different uncertainty factors in autonomous driving processes. In addition, comparative assessments of various uncertainty modeling strategies underscore the benefits of modeling multiple types of uncertainties, thus enhancing planning accuracy and reliability. The proposed framework facilitates the development of methods for UAP and surpasses existing uncertainty-aware risk models, particularly when considering diverse traffic scenarios. Project page: https://swb19.github.io/UAP/. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 434,758 |
2205.14412 | Design, Modelling, and Control of a Reconfigurable Rotary Series Elastic
Actuator with Nonlinear Stiffness for Assistive Robots | In assistive robots, a compliant actuator is a key component in establishing safe and satisfactory physical human-robot interaction (pHRI). The performance of compliant actuators largely depends on the stiffness of the elastic element. Generally, low stiffness is desirable to achieve low impedance, high fidelity of force control and safe pHRI, while high stiffness is required to ensure sufficient force bandwidth and output force. These requirements, however, are contradictory and often vary according to different tasks and conditions. In order to address the contradiction of stiffness selection and improve adaptability to different applications, we develop a reconfigurable rotary series elastic actuator with nonlinear stiffness (RRSEAns) for assistive robots. In this paper, an accurate model of the reconfigurable rotary series elastic element (RSEE) is presented and the adjusting principles are investigated, followed by detailed analysis and experimental validation. The RRSEAns can provide a wide range of stiffness from 0.095 Nm/deg to 2.33 Nm/deg, and different stiffness profiles can be yielded with respect to different configurations of the reconfigurable RSEE. The overall performance of the RRSEAns is verified by experiments on frequency response, torque control and pHRI, which is adequate for most applications in assistive robots. Specifically, the root-mean-square (RMS) error of the interaction torque is as low as 0.07 Nm in transparent/human-in-charge mode, demonstrating the advantages of the RRSEAns in pHRI. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 299,355 |
2307.14272 | Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for
Tactile Pushing | Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions, and show that with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and effective real-world performance highlights the value of rich tactile information for fine manipulation. Code and videos are available at https://sites.google.com/view/tactile-rl-pushing/. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 381,864 |
2203.11373 | Two methods for Jamming Identification in UAVs Networks using New
Synthetic Dataset | Unmanned aerial vehicle (UAV) systems are vulnerable to jamming from self-interested users who utilize radio devices for their benefits during UAV transmissions. The vulnerability occurs due to the open nature of air-to-ground (A2G) wireless communication networks, which may enable network-wide attacks. This paper presents two strategies to identify jammers in UAV networks. The first strategy is based on time series approaches for anomaly detection, where the signal available in resource blocks is decomposed statistically to find trend, seasonality, and residues, while the second is based on newly designed deep networks. The combined technique is suitable for UAVs because the statistical model does not require heavy computation processing but is limited in generalizing the identification of possible attacks. On the other hand, the deep network can classify attacks accurately but requires more resources. The simulation considers the location and power of the jamming attacks and the UAV position relative to the base station. The statistical method made it feasible to identify 84.38% of attacks when the attacker was 30 m from the UAV. Furthermore, the deep network's accuracy was approximately 99.99% for jamming powers greater than two and jammer distances less than 200 meters. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | true | false | false | 286,885 |
2106.06529 | The Limitations of Large Width in Neural Networks: A Deep Gaussian
Process Perspective | Large width limits have been a recent focus of deep learning research: modulo computational practicalities, do wider networks outperform narrower ones? Answering this question has been challenging, as conventional networks gain representational power with width, potentially masking any negative effects. Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GP), a class of nonparametric hierarchical models that subsume neural nets. In doing so, we aim to understand how width affects (standard) neural networks once they have sufficient capacity for a given modeling task. Our theoretical and empirical results on Deep GP suggest that large width can be detrimental to hierarchical models. Surprisingly, we prove that even nonparametric Deep GP converge to Gaussian processes, effectively becoming shallower without any increase in representational power. The posterior, which corresponds to a mixture of data-adaptable basis functions, becomes less data-dependent with width. Our tail analysis demonstrates that width and depth have opposite effects: depth accentuates a model's non-Gaussianity, while width makes models increasingly Gaussian. We find there is a "sweet spot" that maximizes test performance before the limiting GP behavior prevents adaptability, occurring at width = 1 or width = 2 for nonparametric Deep GP. These results make strong predictions about the same phenomenon in conventional neural networks trained with L2 regularization (analogous to a Gaussian prior on parameters): we show that such neural networks may need up to 500 - 1000 hidden units for sufficient capacity - depending on the dataset - but further width degrades performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 240,515 |
1906.00271 | GLAD: Learning Sparse Graph Recovery | Recovering sparse conditional independence graphs from data is a fundamental problem in machine learning with wide applications. A popular formulation of the problem is an $\ell_1$ regularized maximum likelihood estimation. Many convex optimization algorithms have been designed to solve this formulation to recover the graph structure. Recently, there is a surge of interest to learn algorithms directly based on data, and in this case, learn to map empirical covariance to the sparse precision matrix. However, it is a challenging task in this case, since the symmetric positive definiteness (SPD) and sparsity of the matrix are not easy to enforce in learned algorithms, and a direct mapping from data to precision matrix may contain many parameters. We propose a deep learning architecture, GLAD, which uses an Alternating Minimization (AM) algorithm as our model inductive bias, and learns the model parameters via supervised learning. We show that GLAD learns a very compact and effective model for recovering sparse graphs from data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,332 |
1004.4460 | Handling Overload Conditions In High Performance Trustworthy Information
Retrieval Systems | Web search engines retrieve a vast amount of information for a given search query. However, the user needs only trustworthy and high-quality information from this vast retrieved data. The response time of the search engine must be kept to a minimum in order to satisfy the user. An optimum level of response time should be maintained even when the system is overloaded. This paper proposes an optimal Load Shedding algorithm which is used to handle overload conditions in real-time data stream applications and is adapted to the Information Retrieval System of a web search engine. Experimental results show that the proposed algorithm enables a web search engine to provide trustworthy search results to the user within an optimum response time, even during overload conditions. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 6,277 |
2309.12476 | Differentially Private Reward Functions in Policy Synthesis for Markov
Decision Processes | Markov decision processes often seek to maximize a reward function, but onlookers may infer reward functions by observing the states and actions of such systems, revealing sensitive information. Therefore, in this paper we introduce and compare two methods for privatizing reward functions in policy synthesis for multi-agent Markov decision processes, which generalize Markov decision processes. Reward functions are privatized using differential privacy, a statistical framework for protecting sensitive data. The methods we develop perturb either (1) each agent's individual reward function or (2) the joint reward function shared by all agents. We show that approach (1) provides better performance. We then develop a polynomial-time algorithm for the numerical computation of the performance loss due to privacy on a case-by-case basis. Next, using approach (1), we develop guidelines for selecting reward function values to preserve "goal" and "avoid" states while still remaining private, and we quantify the increase in computational complexity needed to compute policies from privatized rewards. Numerical simulations are performed on three classes of systems and they reveal a surprising compatibility with privacy: using reasonably strong privacy ($\epsilon =1.3$) on average induces as little as a $5\%$ decrease in total accumulated reward and a $0.016\%$ increase in computation time. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 393,807 |
1911.04660 | Random Projections of Mel-Spectrograms as Low-Level Features for
Automatic Music Genre Classification | In this work, we analyse the random projections of Mel-spectrograms as low-level features for music genre classification. This approach was compared to handcrafted features, features learned using an auto-encoder and features obtained from a transfer learning setting. Tests on five different well-known, publicly available datasets show that random projections lead to results comparable to learned features and outperform features obtained via transfer learning in a shallow learning scenario. Random projections do not require extensive specialist knowledge and, simultaneously, require less computational power for training than other projection-based low-level features. Therefore, they are a viable choice for shallow-learning content-based music genre classification. | false | false | true | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 153,048 |
2005.07037 | Training conformal predictors | Efficiency criteria for conformal prediction, such as \emph{observed fuzziness} (i.e., the sum of p-values associated with false labels), are commonly used to \emph{evaluate} the performance of given conformal predictors. Here, we investigate whether it is possible to exploit efficiency criteria to \emph{learn} classifiers, both conformal predictors and point classifiers, by using such criteria as training objective functions. The proposed idea is implemented for the problem of binary classification of hand-written digits. By choosing a 1-dimensional model class (with one real-valued free parameter), we can solve the optimization problems through an (approximate) exhaustive search over (a discrete version of) the parameter space. Our empirical results suggest that conformal predictors trained by minimizing their observed fuzziness perform better than conformal predictors trained in the traditional way by minimizing the \emph{prediction error} of the corresponding point classifier. They also have a reasonable performance in terms of their prediction error on the test set. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 177,175 |
2307.14009 | Car-Studio: Learning Car Radiance Fields from Single-View and Endless
In-the-wild Images | Compositional neural scene graph studies have shown that radiance fields can be an efficient tool in an editable autonomous driving simulator. However, previous studies learned within a sequence of autonomous driving datasets, resulting in unsatisfactory blurring when rotating the car in the simulator. In this letter, we propose a pipeline for learning from unconstrained images and building a dataset from processed images. To meet the requirements of the simulator, which demands that the vehicle maintain clarity when the perspective changes and that the contour remain sharp against the background to avoid artifacts when editing, we design a radiance field of the vehicle, a crucial part of the urban scene foreground. Through experiments, we demonstrate that our model achieves competitive performance compared to baselines. Using the datasets built from in-the-wild images, our method gradually presents a controllable appearance editing function. We will release the dataset and code on https://lty2226262.github.io/car-studio/ to facilitate further research in the field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 381,784 |
1609.08438 | Flows Generating Nonlinear Eigenfunctions | Nonlinear variational methods have become very powerful tools for many image processing tasks. Recently a new line of research has emerged, dealing with nonlinear eigenfunctions induced by convex functionals. This has provided new insights and better theoretical understanding of convex regularization and introduced new processing methods. However, the theory of nonlinear eigenvalue problems is still at its infancy. We present a new flow that can generate nonlinear eigenfunctions of the form $T(u)=\lambda u$, where $T(u)$ is a nonlinear operator and $\lambda \in \mathbb{R} $ is the eigenvalue. We develop the theory where $T(u)$ is a subgradient element of a regularizing one-homogeneous functional, such as total-variation (TV) or total-generalized-variation (TGV). We introduce two flows: a forward flow and an inverse flow; for which the steady state solution is a nonlinear eigenfunction. The forward flow monotonically smooths the solution (with respect to the regularizer) and simultaneously increases the $L^2$ norm. The inverse flow has the opposite characteristics. For both flows, the steady state depends on the initial condition, thus different initial conditions yield different eigenfunctions. This enables a deeper investigation into the space of nonlinear eigenfunctions, allowing to produce numerically diverse examples, which may be unknown yet. In addition we suggest an indicator to measure the affinity of a function to an eigenfunction and relate it to pseudo-eigenfunctions in the linear case. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 61,596 |
2312.15322 | Hardware-Aware DNN Compression via Diverse Pruning and Mixed-Precision
Quantization | Deep Neural Networks (DNNs) have shown significant advantages in a wide variety of domains. However, DNNs are becoming computationally intensive and energy hungry at an exponential pace, while at the same time, there is a vast demand for running sophisticated DNN-based services on resource constrained embedded devices. In this paper, we target energy-efficient inference on embedded DNN accelerators. To that end, we propose an automated framework to compress DNNs in a hardware-aware manner by jointly employing pruning and quantization. We explore, for the first time, per-layer fine- and coarse-grained pruning, in the same DNN architecture, in addition to low bit-width mixed-precision quantization for weights and activations. Reinforcement Learning (RL) is used to explore the associated design space and identify the pruning-quantization configuration so that the energy consumption is minimized whilst the prediction accuracy loss is retained at acceptable levels. Using our novel composite RL agent we are able to extract energy-efficient solutions without requiring retraining and/or fine tuning. Our extensive experimental evaluation over widely used DNNs and the CIFAR-10/100 and ImageNet datasets demonstrates that our framework achieves $39\%$ average energy reduction for $1.7\%$ average accuracy loss and outperforms significantly the state-of-the-art approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 417,972 |
2112.05593 | A Review of Indoor Millimeter Wave Device-based Localization and
Device-free Sensing Technologies and Applications | The commercial availability of low-cost millimeter wave (mmWave) communication and radar devices is starting to improve the penetration of such technologies in consumer markets, paving the way for large-scale and dense deployments in fifth-generation (5G)-and-beyond as well as 6G networks. At the same time, pervasive mmWave access will enable device localization and device-free sensing with unprecedented accuracy, especially with respect to sub-6 GHz commercial-grade devices. This paper surveys the state of the art in device-based localization and device-free sensing using mmWave communication and radar devices, with a focus on indoor deployments. We first overview key concepts about mmWave signal propagation and system design. Then, we provide a detailed account of approaches and algorithms for localization and sensing enabled by mmWaves. We consider several dimensions in our analysis, including the main objectives, techniques, and performance of each work, whether each research reached some degree of implementation, and which hardware platforms were used for this purpose. We conclude by discussing that better algorithms for consumer-grade devices, data fusion methods for dense deployments, as well as an educated application of machine learning methods are promising, relevant and timely research directions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 270,888 |
2410.15460 | Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model
Training | As large language models (LLMs) are increasingly deployed across various industries, concerns regarding their reliability, particularly due to hallucinations - outputs that are factually inaccurate or irrelevant to user input - have grown. Our research investigates the relationship between the training process and the emergence of hallucinations to address a key gap in existing research that focuses primarily on post hoc detection and mitigation strategies. Using models from the Pythia suite (70M - 12B parameters) and several hallucination detection metrics, we analyze hallucination trends throughout training and explore LLM internal dynamics. We introduce Sensitivity Dropout (SenD), a novel training protocol designed to mitigate hallucinations by reducing variance during training. SenD achieves this by deterministically dropping embedding indices with significant variability, referred to as Sensitive Embedding Indices. In addition, we develop an unsupervised hallucination detection metric, Efficient EigenScore (EES), which approximates the traditional EigenScore at 2x speed. This efficient metric is integrated into our protocol, allowing SenD to be both computationally scalable and effective at reducing hallucinations. Our empirical evaluation demonstrates that our approach improves LLM reliability at test time by up to 40% compared to normal training while also providing an efficient method to improve factual accuracy when adapting LLMs to Wikipedia, Medical, and LegalBench domains. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 500,549 |
2011.10804 | BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures | Binary Neural Networks (BNNs) have received significant attention due to their promising efficiency. Currently, most BNN studies directly adopt widely-used CNN architectures, which can be suboptimal for BNNs. This paper proposes a novel Binary ARchitecture Search (BARS) flow to discover superior binary architecture in a large design space. Specifically, we analyze the information bottlenecks that are related to both the topology and layout architecture design choices. And we propose to automatically search for the optimal information flow. To achieve that, we design a two-level (Macro & Micro) search space tailored for BNNs and apply a differentiable neural architecture search (NAS) to explore this search space efficiently. The macro-level search space includes width and depth decisions, which is required for better balancing the model performance and complexity. We also design the micro-level search space to strengthen the information flow for BNN. A notable challenge of BNN architecture search lies in the fact that binary operations exacerbate the "collapse" problem of differentiable NAS, for which we incorporate various search and derive strategies to stabilize the search process. On CIFAR-10, BARS achieves 1.5% higher accuracy with 2/3 binary operations and 1/10 floating-point operations comparing with existing BNN NAS studies. On ImageNet, with similar resource consumption, BARS-discovered architecture achieves a 6% accuracy gain than hand-crafted binary ResNet-18 architectures and outperforms other binary architectures while fully binarizing the architecture backbone. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 207,627
2412.06209 | Sound2Vision: Generating Diverse Visuals from Audio through Cross-Modal Latent Alignment | How does audio describe the world around us? In this work, we propose a method for generating images of visual scenes from diverse in-the-wild sounds. This cross-modal generation task is challenging due to the significant information gap between auditory and visual signals. We address this challenge by designing a model that aligns audio-visual modalities by enriching audio features with visual information and translating them into the visual latent space. These features are then fed into the pre-trained image generator to produce images. To enhance image quality, we use sound source localization to select audio-visual pairs with strong cross-modal correlations. Our method achieves substantially better results on the VEGAS and VGGSound datasets compared to previous work and demonstrates control over the generation process through simple manipulations to the input waveform or latent space. Furthermore, we analyze the geometric properties of the learned embedding space and demonstrate that our learning approach effectively aligns audio-visual signals for cross-modal generation. Based on this analysis, we show that our method is agnostic to specific design choices, showing its generalizability by integrating various model architectures and different types of audio-visual data. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 515,152
2110.11223 | Detection of Driver Drowsiness by Calculating the Speed of Eye Blinking | Many road accidents are caused by drowsiness of the driver. While there are methods to detect closed eyes, it is a non-trivial task to detect the gradual process of a driver becoming drowsy. We consider a simple real-time detection system for drowsiness merely based on the eye blinking rate derived from the eye aspect ratio. For the eye detection we use HOG and a linear SVM. If the speed of the eye blinking drops below some empirically determined threshold, the system triggers an alarm, hence preventing the driver from falling into microsleep. In this paper, we extensively evaluate the minimal requirements for the proposed system. We find that this system works well if the face is directed to the camera, but it becomes less reliable once the head is tilted significantly. The results of our evaluations provide the foundation for further developments of our drowsiness detection system. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 262,399 |
cs/0504028 | On Extrinsic Information of Good Codes Operating Over Discrete Memoryless Channels | We show that the Extrinsic Information about the coded bits of any good (capacity achieving) code operating over a wide class of discrete memoryless channels (DMC) is zero when channel capacity is below the code rate and positive constant otherwise, that is, the Extrinsic Information Transfer (EXIT) chart is a step function of channel quality, for any capacity achieving code. It follows that, for a common class of iterative receivers where the error correcting decoder must operate at first iteration at rate above capacity (such as in turbo equalization, turbo channel estimation, parallel and serial concatenated coding and the like), classical good codes which achieve capacity over the DMC are not effective and should be replaced by different new ones. Another meaning of the results is that a good code operating at rate above channel capacity falls apart into its individual transmitted symbols in the sense that all the information about a coded transmitted symbol is contained in the corresponding received symbol and no information about it can be inferred from the other received symbols. The binary input additive white Gaussian noise channel is treated in part 1 of this report. Part 2 extends the results to the symmetric binary channel and to the binary erasure channel and provides an heuristic extension to wider class of channel models. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 538,649
2305.19339 | Less Likely Brainstorming: Using Language Models to Generate Alternative Hypotheses | A human decision-maker benefits the most from an AI assistant that corrects for their biases. For problems such as generating interpretation of a radiology report given findings, a system predicting only highly likely outcomes may be less useful, where such outcomes are already obvious to the user. To alleviate biases in human decision-making, it is worth considering a broad differential diagnosis, going beyond the most likely options. We introduce a new task, "less likely brainstorming," that asks a model to generate outputs that humans think are relevant but less likely to happen. We explore the task in two settings: a brain MRI interpretation generation setting and an everyday commonsense reasoning setting. We found that a baseline approach of training with less likely hypotheses as targets generates outputs that humans evaluate as either likely or irrelevant nearly half of the time; standard MLE training is not effective. To tackle this problem, we propose a controlled text generation method that uses a novel contrastive learning strategy to encourage models to differentiate between generating likely and less likely outputs according to humans. We compare our method with several state-of-the-art controlled text generation models via automatic and human evaluations and show that our models' capability of generating less likely outputs is improved. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 369,478
2407.15589 | Exploring the Effectiveness of Object-Centric Representations in Visual Question Answering: Comparative Insights with Foundation Models | Object-centric (OC) representations, which represent the state of a visual scene by modeling it as a composition of objects, have the potential to be used in various downstream tasks to achieve systematic compositional generalization and facilitate reasoning. However, these claims have not been thoroughly analyzed yet. Recently, foundation models have demonstrated unparalleled capabilities across diverse domains from language to computer vision, marking them as a potential cornerstone of future research for a multitude of computational tasks. In this paper, we conduct an extensive empirical study on representation learning for downstream Visual Question Answering (VQA), which requires an accurate compositional understanding of the scene. We thoroughly investigate the benefits and trade-offs of OC models and alternative approaches including large pre-trained foundation models on both synthetic and real-world data, and demonstrate a viable way to achieve the best of both worlds. The extensiveness of our study, encompassing over 600 downstream VQA models and 15 different types of upstream representations, also provides several additional insights that we believe will be of interest to the community at large. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 475,243
1711.10658 | Deep-Person: Learning Discriminative Deep Features for Person Re-Identification | Recently, many methods of person re-identification (Re-ID) rely on part-based feature representation to learn a discriminative pedestrian descriptor. However, the spatial context between these parts is ignored for the independent extractor to each separate part. In this paper, we propose to apply Long Short-Term Memory (LSTM) in an end-to-end way to model the pedestrian, seen as a sequence of body parts from head to foot. Integrating the contextual information strengthens the discriminative ability of local representation. We also leverage the complementary information between local and global features. Furthermore, we integrate both identification task and ranking task in one network, where a discriminative embedding and a similarity measurement are learned concurrently. This results in a novel three-branch framework named Deep-Person, which learns highly discriminative features for person Re-ID. Experimental results demonstrate that Deep-Person outperforms the state-of-the-art methods by a large margin on three challenging datasets including Market-1501, CUHK03, and DukeMTMC-reID. Specifically, combining with a re-ranking approach, we achieve a 90.84% mAP on Market-1501 under single query setting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,637
2106.12406 | Mitigating the Impact of Distributed Generations on Relay Coordination Using Fault Current Limiters | The use of distributed generation resources, in addition to considerable benefits, causes some problems in the power system. One of the most critical problems in the case of disruption is an increased short-circuit current level in grids, which requires changing the protection device settings in the downstream and upstream grid. By using fault current limiters (FCL), short-circuit currents in grids with distributed generation can be reduced to acceptable levels, so there is no need to change the protection relay settings of the downstream grid (including distributed generations). However, by locating the FCL in the tie-feeder, the downstream grid is not more effective than the upstream grid and thus its reliability indices also will be changed. Therefore, this paper shows that by locating the unidirectional fault current limiter (UFCL) in the tie-feeder, the necessity of changing the relay protection settings of upstream grids is prevented. In this paper, the proposed method is implemented, and its efficiency is reported in six scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 242,718