id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1408.0765 | Modulation Classification via Gibbs Sampling Based on a Latent Dirichlet Bayesian Network | A novel Bayesian modulation classification scheme is proposed for a single-antenna system over frequency-selective fading channels. The method is based on Gibbs sampling as applied to a latent Dirichlet Bayesian network (BN). The use of the proposed latent Dirichlet BN provides a systematic solution to the convergence problem encountered by the conventional Gibbs sampling approach for modulation classification. The method generalizes, and is shown to improve upon, the state of the art. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 35,109 |
2001.08805 | Inexpensive and Portable System for Dexterous High-Density Myoelectric Control of Multiarticulate Prostheses | Multiarticulate bionic arms are now capable of mimicking the endogenous movements of the human hand. 3D-printing has reduced the cost of prosthetic hands themselves, but there is currently no low-cost alternative to dexterous electromyographic (EMG) control systems. To address this need, we developed an inexpensive (~$675) and portable EMG control system by integrating low-cost microcontrollers with an EMG acquisition device. We validated signal acquisition by comparing the signal-to-noise ratio (SNR) of our system with that of a high-end research-grade system. We also demonstrate the ability to use the low-cost control system for proportional and independent control of various prosthetic hands in real-time. We found that the SNR of the low-cost control system was statistically no worse than 44% of the SNR of a research-grade control system. The RMSEs of predicted hand movements (from a modified Kalman filter) were typically a few percent better than, and not more than 6% worse than, RMSEs of a research-grade system for up to six degrees of freedom when only relatively few (six) EMG electrodes were used. However, RMSEs were generally higher than RMSEs of research-grade systems that utilize considerably more (32) EMG electrodes, guiding future work towards increasing electrode count. Successful instantiation of this low-cost control system constitutes an important step towards the commercialization and widespread availability of dexterous bionic hands. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 161,386 |
1809.10012 | Using Neural Networks to Generate Information Maps for Mobile Sensors | Target localization is a critical task for mobile sensors and has many applications. However, generating informative trajectories for these sensors is a challenging research problem. A common method uses information maps that estimate the value of taking measurements from any point in the sensor state space. These information maps are used to generate trajectories; for example, a trajectory might be designed so its distribution of measurements matches the distribution of the information map. Regardless of the trajectory generation method, generating information maps as new observations are made is critical. However, it can be challenging to compute these maps in real-time. We propose using convolutional neural networks to generate information maps from a target estimate and sensor model in real-time. Simulations show that maps are accurately rendered while offering orders of magnitude reduction in computation time. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 108,806 |
2104.02207 | Dissecting User-Perceived Latency of On-Device E2E Speech Recognition | As speech-enabled devices such as smartphones and smart speakers become increasingly ubiquitous, there is growing interest in building automatic speech recognition (ASR) systems that can run directly on-device; end-to-end (E2E) speech recognition models such as recurrent neural network transducers and their variants have recently emerged as prime candidates for this task. Apart from being accurate and compact, such systems need to decode speech with low user-perceived latency (UPL), producing words as soon as they are spoken. This work examines the impact of various techniques - model architectures, training criteria, decoding hyperparameters, and endpointer parameters - on UPL. Our analyses suggest that measures of model size (parameters, input chunk sizes), or measures of computation (e.g., FLOPS, RTF) that reflect the model's ability to process input frames are not always strongly correlated with observed UPL. Thus, conventional algorithmic latency measurements might be inadequate in accurately capturing latency observed when models are deployed on embedded devices. Instead, we find that factors affecting token emission latency, and endpointing behavior have a larger impact on UPL. We achieve the best trade-off between latency and word error rate when performing ASR jointly with endpointing, while utilizing the recently proposed alignment regularization mechanism. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 228,629 |
2306.05239 | Point-Voxel Absorbing Graph Representation Learning for Event Stream based Recognition | Sampled point and voxel methods are usually employed to downsample the dense events into sparse ones. After that, one popular way is to leverage a graph model which treats the sparse points/voxels as nodes and adopts graph neural networks (GNNs) to learn the representation of event data. Although good performance can be obtained, their results are still limited mainly due to two issues. (1) Existing event GNNs generally adopt the additional max (or mean) pooling layer to summarize all node embeddings into a single graph-level representation for the whole event data representation. However, this approach fails to capture the importance of graph nodes and also fails to be fully aware of the node representations. (2) Existing methods generally employ either a sparse point or voxel graph representation model which thus lacks consideration of the complementarity between these two types of representation models. To address these issues, we propose a novel dual point-voxel absorbing graph representation learning for event stream data representation. To be specific, given the input event stream, we first transform it into the sparse event cloud and voxel grids and build dual absorbing graph models for them respectively. Then, we design a novel absorbing graph convolutional network (AGCN) for our dual absorbing graph representation and learning. The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes and thus be fully aware of node representations in summarizing all node representations through the introduced absorbing nodes. Extensive experiments on multiple event-based classification benchmark datasets fully validated the effectiveness of our framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | 372,107 |
2501.13375 | Bridging The Multi-Modality Gaps of Audio, Visual and Linguistic for Speech Enhancement | Speech Enhancement (SE) aims to improve the quality of noisy speech. It has been shown that additional visual cues can further improve performance. Given that speech communication involves audio, visual, and linguistic modalities, it is natural to expect another performance boost by incorporating linguistic information. However, bridging the modality gaps to efficiently incorporate linguistic information, along with audio and visual modalities during knowledge transfer, is a challenging task. In this paper, we propose a novel multi-modality learning framework for SE. In the model framework, a state-of-the-art diffusion model backbone is utilized for Audio-Visual Speech Enhancement (AVSE) modeling where both audio and visual information are directly captured by microphones and video cameras. Based on this AVSE, the linguistic modality employs a PLM to transfer linguistic knowledge to the visual acoustic modality through a process termed Cross-Modal Knowledge Transfer (CMKT) during AVSE model training. After the model is trained, it is supposed that linguistic knowledge is encoded in the feature processing of the AVSE model by the CMKT, and the PLM will not be involved during the inference stage. We carry out SE experiments to evaluate the proposed model framework. Experimental results demonstrate that our proposed AVSE system significantly enhances speech quality and reduces generative artifacts, such as phonetic confusion, compared to the state-of-the-art. Moreover, our visualization results demonstrate that our Cross-Modal Knowledge Transfer method further improves the generated speech quality of our AVSE system. These findings not only suggest that diffusion model-based techniques hold promise for advancing the state-of-the-art in AVSE but also justify the effectiveness of incorporating linguistic information to improve the performance of diffusion-based AVSE systems. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 526,657 |
2007.11261 | Contact-Implicit Trajectory Optimization using an Analytically Solvable Contact Model for Locomotion on Variable Ground | This paper presents a novel contact-implicit trajectory optimization method using an analytically solvable contact model to enable planning of interactions with hard, soft, and slippery environments. Specifically, we propose a novel contact model that can be computed in closed-form, satisfies friction cone constraints and can be embedded into direct trajectory optimization frameworks without complementarity constraints. The closed-form solution decouples the computation of the contact forces from other actuation forces and this property is used to formulate a minimal direct optimization problem expressed with configuration variables only. Our simulation study demonstrates the advantages over the rigid contact model and a trajectory optimization approach based on complementarity constraints. The proposed model enables physics-based optimization for a wide range of interactions with hard, slippery, and soft grounds in a unified manner expressed by two parameters only. By computing trotting and jumping motions for a quadruped robot, the proposed optimization demonstrates the versatility for multi-contact motion planning on surfaces with different physical properties. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 188,511 |
2107.01238 | Solving Machine Learning Problems | Can a machine learn Machine Learning? This work trains a machine learning model to solve machine learning problems from a University undergraduate level course. We generate a new training set of questions and answers consisting of course exercises, homework, and quiz questions from MIT's 6.036 Introduction to Machine Learning course and train a machine learning model to answer these questions. Our system demonstrates an overall accuracy of 96% for open-response questions and 97% for multiple-choice questions, compared with MIT students' average of 93%, achieving grade A performance in the course, all in real-time. Questions cover all 12 topics taught in the course, excluding coding questions or questions with images. Topics include: (i) basic machine learning principles; (ii) perceptrons; (iii) feature extraction and selection; (iv) logistic regression; (v) regression; (vi) neural networks; (vii) advanced neural networks; (viii) convolutional neural networks; (ix) recurrent neural networks; (x) state machines and MDPs; (xi) reinforcement learning; and (xii) decision trees. Our system uses Transformer models within an encoder-decoder architecture with graph and tree representations. An important aspect of our approach is a data-augmentation scheme for generating new example problems. We also train a machine learning model to generate problem hints. Thus, our system automatically generates new questions across topics, answers both open-response questions and multiple-choice questions, classifies problems, and generates problem hints, pushing the envelope of AI for STEM education. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,413 |
2412.04180 | SKIM: Any-bit Quantization Pushing The Limits of Post-Training Quantization | Large Language Models (LLMs) exhibit impressive performance across various tasks, but deploying them for inference poses challenges. Their high resource demands often necessitate complex, costly multi-GPU pipelines, or the use of smaller, less capable models. While quantization offers a promising solution utilizing lower precision for model storage, existing methods frequently experience significant performance drops at lower precision levels. Additionally, they typically provide only a limited set of solutions at specific bit levels, many of which are extensively manually tuned. To address these challenges, we propose a new method called SKIM: Scaled K-means clustering wIth Mixed precision. Our approach introduces two novel techniques: 1. A greedy algorithm to solve approximately optimal bit allocation across weight channels, and 2. A trainable scaling vector for non-differentiable K-means clustering. These techniques substantially improve performance and can be adapted to any given bit. Notably, in terms of model perplexity, our method narrows the gap between 3-bit quantized LLaMA models and their full precision counterparts by 16.3% on average. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 514,295 |
2003.00872 | AlignSeg: Feature-Aligned Segmentation Networks | Aggregating features in terms of different convolutional blocks or contextual embeddings has been proven to be an effective way to strengthen feature representations for semantic segmentation. However, most of the current popular network architectures tend to ignore the misalignment issues during the feature aggregation process caused by 1) step-by-step downsampling operations, and 2) indiscriminate contextual information fusion. In this paper, we explore the principles in addressing such feature misalignment issues and inventively propose Feature-Aligned Segmentation Networks (AlignSeg). AlignSeg consists of two primary modules, i.e., the Aligned Feature Aggregation (AlignFA) module and the Aligned Context Modeling (AlignCM) module. First, AlignFA adopts a simple learnable interpolation strategy to learn transformation offsets of pixels, which can effectively relieve the feature misalignment issue caused by multiresolution feature aggregation. Second, with the contextual embeddings in hand, AlignCM enables each pixel to choose private custom contextual information in an adaptive manner, making the contextual embeddings aligned better to provide appropriate guidance. We validate the effectiveness of our AlignSeg network with extensive experiments on Cityscapes and ADE20K, achieving new state-of-the-art mIoU scores of 82.6% and 45.95%, respectively. Our source code will be made available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 166,471 |
2301.00427 | Conditional Diffusion Based on Discrete Graph Structures for Molecular Graph Generation | Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps. Our code is provided in https://github.com/GRAPH-0/CDGS. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,906 |
2401.16791 | Accelerated Cloud for Artificial Intelligence (ACAI) | Training an effective Machine learning (ML) model is an iterative process that requires effort in multiple dimensions. Vertically, a single pipeline typically includes an initial ETL (Extract, Transform, Load) of raw datasets, a model training stage, and an evaluation stage where the practitioners obtain statistics of the model performance. Horizontally, many such pipelines may be required to find the best model within a search space of model configurations. Many practitioners resort to maintaining logs manually and writing simple glue code to automate the workflow. However, carrying out this process on the cloud is not a trivial task in terms of resource provisioning, data management, and bookkeeping of job histories to make sure the results are reproducible. We propose an end-to-end cloud-based machine learning platform, Accelerated Cloud for AI (ACAI), to help improve the productivity of ML practitioners. ACAI achieves this goal by enabling cloud-based storage of indexed, labeled, and searchable data, as well as automatic resource provisioning, job scheduling, and experiment tracking. Specifically, ACAI provides practitioners (1) a data lake for storing versioned datasets and their corresponding metadata, and (2) an execution engine for executing ML jobs on the cloud with automatic resource provisioning (auto-provision), logging and provenance tracking. To evaluate ACAI, we test the efficacy of our auto-provisioner on the MNIST handwritten digit classification task, and we study the usability of our system using experiments and interviews. We show that our auto-provisioner produces a 1.7x speed-up and 39% cost reduction, and our system reduces experiment time for ML scientists by 20% on typical ML use cases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,986 |
2404.19605 | Data-Driven Invertible Neural Surrogates of Atmospheric Transmission | We present a framework for inferring an atmospheric transmission profile from a spectral scene. This framework leverages a lightweight, physics-based simulator that is automatically tuned - by virtue of autodifferentiation and differentiable programming - to construct a surrogate atmospheric profile to model the observed data. We demonstrate utility of the methodology by (i) performing atmospheric correction, (ii) recasting spectral data between various modalities (e.g. radiance and reflectance at the surface and at the sensor), and (iii) inferring atmospheric transmission profiles, such as absorbing bands and their relative magnitudes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 450,708 |
1703.08314 | Interacting Conceptual Spaces I: Grammatical Composition of Concepts | The categorical compositional approach to meaning has been successfully applied in natural language processing, outperforming other models in mainstream empirical language processing tasks. We show how this approach can be generalized to conceptual space models of cognition. In order to do this, first we introduce the category of convex relations as a new setting for categorical compositional semantics, emphasizing the convex structure important to conceptual space applications. We then show how to construct conceptual spaces for various types such as nouns, adjectives and verbs. Finally we show by means of examples how concepts can be systematically combined to establish the meanings of composite phrases from the meanings of their constituent parts. This provides the mathematical underpinnings of a new compositional approach to cognition. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 70,560 |
2405.11437 | The First Swahili Language Scene Text Detection and Recognition Dataset | Scene text recognition is essential in many applications, including automated translation, information retrieval, driving assistance, and enhancing accessibility for individuals with visual impairments. Much research has been done to improve the accuracy and performance of scene text detection and recognition models. However, most of this research has been conducted in the most common languages, English and Chinese. There is a significant gap in low-resource languages, especially the Swahili Language. Swahili is widely spoken in East African countries but is still an under-explored language in scene text recognition. No studies have been focused explicitly on Swahili natural scene text detection and recognition, and no dataset for Swahili language scene text detection and recognition is publicly available. We propose a comprehensive dataset of Swahili scene text images and evaluate the dataset on different scene text detection and recognition models. The dataset contains 976 images collected in different places and under various circumstances. Each image has its annotation at the word level. The proposed dataset can also serve as a benchmark dataset specific to the Swahili language for evaluating and comparing different approaches and fostering future research endeavors. The dataset is available on GitHub via this link: https://github.com/FadilaW/Swahili-STR-Dataset | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,139 |
2406.09200 | Orthogonality and isotropy of speaker and phonetic information in self-supervised speech representations | Self-supervised speech representations can hugely benefit downstream speech technologies, yet the properties that make them useful are still poorly understood. Two candidate properties related to the geometry of the representation space have been hypothesized to correlate well with downstream tasks: (1) the degree of orthogonality between the subspaces spanned by the speaker centroids and phone centroids, and (2) the isotropy of the space, i.e., the degree to which all dimensions are effectively utilized. To study them, we introduce a new measure, Cumulative Residual Variance (CRV), which can be used to assess both properties. Using linear classifiers for speaker and phone ID to probe the representations of six different self-supervised models and two untrained baselines, we ask whether either orthogonality or isotropy correlate with linear probing accuracy. We find that both measures correlate with phonetic probing accuracy, though our results on isotropy are more nuanced. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 463,819 |
2205.03234 | Real Time On Sensor Gait Phase Detection with 0.5KB Deep Learning Model | Gait phase detection with a convolutional neural network provides accurate classification but demands high computational cost, which inhibits real-time low-power on-sensor processing. This paper presents a segmentation-based gait phase detection with a width- and depth-downscaled U-Net-like model that needs only 0.5KB of model size and 67K operations per second, with 95.9% accuracy, to be easily fitted into a resource-limited on-sensor microcontroller. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 295,215 |
1511.07902 | Performance Limits of Stochastic Sub-Gradient Learning, Part I: Single Agent Case | In this work and the supporting Part II, we examine the performance of stochastic sub-gradient learning strategies under weaker conditions than usually considered in the literature. The new conditions are shown to be automatically satisfied by several important cases of interest including SVM, LASSO, and Total-Variation denoising formulations. In comparison, these problems do not satisfy the traditional assumptions used in prior analyses and, therefore, conclusions derived from these earlier treatments are not directly applicable to these problems. The results in this article establish that stochastic sub-gradient strategies can attain linear convergence rates, as opposed to sub-linear rates, to the steady-state regime. A realizable exponential-weighting procedure is employed to smooth the intermediate iterates and guarantee useful performance bounds in terms of convergence rate and excessive risk performance. Part I of this work focuses on single-agent scenarios, which are common in stand-alone learning applications, while Part II extends the analysis to networked learners. The theoretical conclusions are illustrated by several examples and simulations, including comparisons with the FISTA procedure. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 49,477 |
2001.10615 | Indexical Cities: Articulating Personal Models of Urban Preference with Geotagged Data | How to assess the potential of liking a city or a neighborhood before ever having been there? The concept of urban quality has until now pertained to global city ranking, where cities are evaluated under a grid of given parameters, or either to empirical and sociological approaches, often constrained by the amount of available information. Using state of the art machine learning techniques and thousands of geotagged satellite and perspective images from diverse urban cultures, this research characterizes personal preference in urban spaces and predicts a spectrum of unknown likeable places for a specific observer. Unlike most urban perception studies, our intention is not by any means to provide an objective measure of urban quality, but rather to portray personal views of the city or Cities of Indexes. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 161,860 |
1906.07748 | Joint Learning of Geometric and Probabilistic Constellation Shaping | The choice of constellations largely affects the performance of communication systems. When designing constellations, both the locations and probability of occurrence of the points can be optimized. These approaches are referred to as geometric and probabilistic shaping, respectively. Usually, the geometry of the constellation is fixed, e.g., quadrature amplitude modulation (QAM) is used. In such cases, the achievable information rate can still be improved by probabilistic shaping. In this work, we show how autoencoders can be leveraged to perform probabilistic shaping of constellations. We devise an information-theoretical description of autoencoders, which allows learning of capacity-achieving symbol distributions and constellations. Recently, machine learning techniques to perform geometric shaping were proposed. However, probabilistic shaping is more challenging as it requires the optimization of discrete distributions. Furthermore, the proposed method enables joint probabilistic and geometric shaping of constellations over any channel model. Simulation results show that the learned constellations achieve information rates very close to capacity on an additive white Gaussian noise (AWGN) channel and outperform existing approaches on both AWGN and fading channels. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 135,669 |
1103.0769 | Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation | Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they could be recovered by far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms to sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 9,470 |
1607.03665 | Energy- and Spectral-Efficiency Tradeoff in Full-Duplex Communications | This paper investigates the tradeoff between energy-efficiency (EE) and spectral-efficiency (SE) for full-duplex (FD) enabled cellular networks. We assume that small cell base stations are working in the FD mode while user devices still work in the conventional half-duplex (HD) mode. First, a necessary condition for a FD transceiver to achieve a better EE-SE tradeoff than a HD one is derived. Then, we analyze the EE-SE relation of a FD transceiver in the scenario of a single pair of users and obtain a closed-form expression. Next, we extend the result into the multiuser scenario and prove that EE is a quasi-concave function of SE in general and develop an optimal algorithm to achieve the maximum EE based on the Lagrange dual method. Our analysis is finally verified by extensive numerical results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 58,541 |
2007.05720 | ECML: An Ensemble Cascade Metric Learning Mechanism towards Face
Verification | Face verification can be regarded as a 2-class fine-grained visual recognition problem. Enhancing the feature's discriminative power is one of the key problems to improve its performance. Metric learning technology is often applied to address this need, while achieving a good tradeoff between underfitting and overfitting plays a vital role in metric learning. Hence, we propose a novel ensemble cascade metric learning (ECML) mechanism. In particular, hierarchical metric learning is executed in a cascade manner to alleviate underfitting. Meanwhile, at each learning level, the features are split into non-overlapping groups. Then, metric learning is executed among the feature groups in an ensemble manner to resist overfitting. Considering the feature distribution characteristics of faces, a robust Mahalanobis metric learning method (RMML) with a closed-form solution is additionally proposed. It can avoid the inverse-matrix computation failure issue faced by some well-known metric learning approaches (e.g., KISSME). Embedding RMML into the proposed ECML mechanism, our metric learning paradigm (EC-RMML) can run in a one-pass learning manner. Experimental results demonstrate that EC-RMML is superior to state-of-the-art metric learning methods for face verification. Moreover, the proposed ensemble cascade metric learning mechanism is also applicable to other metric learning approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 186,764
2110.04261 | Extragradient Method: $O(1/K)$ Last-Iterate Convergence for Monotone
Variational Inequalities and Connections With Cocoercivity | Extragradient method (EG) (Korpelevich, 1976) is one of the most popular methods for solving saddle point and variational inequalities problems (VIP). Despite its long history and significant attention in the optimization community, there remain important open questions about convergence of EG. In this paper, we resolve one of such questions and derive the first last-iterate $O(1/K)$ convergence rate for EG for monotone and Lipschitz VIP without any additional assumptions on the operator unlike the only known result of this type (Golowich et al., 2020) that relies on the Lipschitzness of the Jacobian of the operator. The rate is given in terms of reducing the squared norm of the operator. Moreover, we establish several results on the (non-)cocoercivity of the update operators of EG, Optimistic Gradient Method, and Hamiltonian Gradient Method, when the original operator is monotone and Lipschitz. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 259,820 |
2101.11222 | Automatic image annotation base on Naive Bayes and Decision Tree
classifiers using MPEG-7 | The rapid growth of digital images has made it essential to search for and retrieve high-resolution images easily and efficiently. Many current annotation algorithms face a major challenge: the variance between image representations, where the high level captures image semantics and the low level describes the features; this issue is known as the semantic gap. This work used the MPEG-7 standard to extract features from the images: the color features were extracted using the Scalable Color Descriptor (SCD) and Color Layout Descriptor (CLD), whereas the texture feature was extracted using the Edge Histogram Descriptor (EHD). Since the CLD produced a high-dimensional feature vector, it was reduced by Principal Component Analysis (PCA). The features extracted by these three descriptors were passed to the classifiers (Naive Bayes and Decision Tree) for training, which then annotated the query image. In this study the TU Darmstadt image bank was used. Test results and a comparative performance evaluation indicated better precision and execution time for Naive Bayes classification in comparison with Decision Tree classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 217,201
2202.09777 | An Analysis of Complex-Valued CNNs for RF Data-Driven Wireless Device
Classification | Recent deep neural network-based device classification studies show that complex-valued neural networks (CVNNs) yield higher classification accuracy than real-valued neural networks (RVNNs). Although this improvement is (intuitively) attributed to the complex nature of the input RF data (i.e., IQ symbols), no prior work has taken a closer look into analyzing such a trend in the context of wireless device identification. Our study provides a deeper understanding of this trend using real LoRa and WiFi RF datasets. We perform a deep dive into understanding the impact of (i) the input representation/type and (ii) the architectural layer of the neural network. For the input representation, we considered the IQ as well as the polar coordinates both partially and fully. For the architectural layer, we considered a series of ablation experiments that eliminate parts of the CVNN components. Our results show that CVNNs consistently outperform their RVNN counterparts in the various scenarios mentioned above, indicating that CVNNs are able to make better use of the joint information provided via the in-phase (I) and quadrature (Q) components of the signal. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 281,315
1802.00093 | Cross-domain CNN for Hyperspectral Image Classification | In this paper, we address the dataset scarcity issue in hyperspectral image classification. As only a few thousand pixels are available for training, it is difficult to effectively learn high-capacity Convolutional Neural Networks (CNNs). To cope with this problem, we propose a novel cross-domain CNN containing shared parameters that can co-learn across multiple hyperspectral datasets. The network also contains non-shared portions designed to handle the dataset-specific spectral characteristics and the associated classification tasks. Our approach is the first attempt to learn a CNN for multiple hyperspectral datasets, in an end-to-end fashion. Moreover, we have experimentally shown that the proposed network trained on three of the widely used datasets outperforms all the baseline networks which are trained on a single dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,342
2107.03909 | Weight Reparametrization for Budget-Aware Network Pruning | Pruning seeks to design lightweight architectures by removing redundant weights in overparameterized networks. Most of the existing techniques first remove structured sub-networks (filters, channels,...) and then fine-tune the resulting networks to maintain a high accuracy. However, removing a whole structure is a strong topological prior and recovering the accuracy, with fine-tuning, is highly cumbersome. In this paper, we introduce an "end-to-end" lightweight network design that achieves training and pruning simultaneously without fine-tuning. The design principle of our method relies on reparametrization that learns not only the weights but also the topological structure of the lightweight sub-network. This reparametrization acts as a prior (or regularizer) that defines pruning masks implicitly from the weights of the underlying network, without increasing the number of training parameters. Sparsity is induced with a budget loss that provides an accurate pruning. Extensive experiments conducted on the CIFAR10 and the TinyImageNet datasets, using standard architectures (namely Conv4, VGG19 and ResNet18), show compelling results without fine-tuning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 245,289 |
2207.11889 | Salient Object Detection for Point Clouds | This paper studies an unexplored task: point cloud salient object detection (SOD). Differing from SOD for images, we find the attention shift of point clouds may provoke saliency conflict, i.e., an object paradoxically belongs to both the salient and non-salient categories. To eschew this issue, we present a novel view-dependent perspective on salient objects, reasonably reflecting the most eye-catching objects in point cloud scenarios. Following this formulation, we introduce PCSOD, the first dataset proposed for point cloud SOD, consisting of 2,872 in-/out-door 3D views. The samples in our dataset are labeled with hierarchical annotations, e.g., super-/sub-class, bounding box, and segmentation map, which endows our dataset with strong generalizability and broad applicability for verifying various conjectures. To evidence the feasibility of our solution, we further contribute a baseline model and benchmark five representative models for a comprehensive comparison. The proposed model can effectively analyze irregular and unordered points for detecting salient objects. Thanks to its task-tailored designs, our method shows visible superiority over other baselines, producing more satisfactory results. Extensive experiments and discussions reveal the promising potential of this research field, paving the way for further study. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,815
2405.17921 | Towards Clinical AI Fairness: Filling Gaps in the Puzzle | The ethical integration of Artificial Intelligence (AI) in healthcare necessitates addressing fairness-a concept that is highly context-specific across medical fields. Extensive studies have been conducted to expand the technical components of AI fairness, while tremendous calls for AI fairness have been raised from healthcare. Despite this, a significant disconnect persists between technical advancements and their practical clinical applications, resulting in a lack of contextualized discussion of AI fairness in clinical settings. Through a detailed evidence gap analysis, our review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions. We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized. Additionally, our analysis highlights a substantial reliance on group fairness, aiming to ensure equality among demographic groups from a macro healthcare system perspective; in contrast, individual fairness, focusing on equity at a more granular level, is frequently overlooked. To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities. Beyond applying existing AI fairness methods in healthcare, we further emphasize the importance of involving healthcare professionals to refine AI fairness concepts and methods to ensure contextually relevant and ethically sound AI applications in healthcare. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 458,189 |
2501.17688 | ContourFormer: Real-Time Contour-Based End-to-End Instance Segmentation
Transformer | This paper presents Contourformer, a real-time contour-based instance segmentation algorithm. The method is fully based on the DETR paradigm and achieves end-to-end inference through iterative and progressive mechanisms to optimize contours. To improve efficiency and accuracy, we develop two novel techniques: sub-contour decoupling mechanisms and contour fine-grained distribution refinement. In the sub-contour decoupling mechanism, we propose a deformable attention-based module that adaptively selects sampling regions based on the current predicted contour, enabling more effective capturing of object boundary information. Additionally, we design a multi-stage optimization process to enhance segmentation precision by progressively refining sub-contours. The contour fine-grained distribution refinement technique aims to further improve the ability to express fine details of contours. These innovations enable Contourformer to achieve stable and precise segmentation for each instance while maintaining real-time performance. Extensive experiments demonstrate the superior performance of Contourformer on multiple benchmark datasets, including SBD, COCO, and KINS. We conduct comprehensive evaluations and comparisons with existing state-of-the-art methods, showing significant improvements in both accuracy and inference speed. This work provides a new solution for contour-based instance segmentation tasks and lays a foundation for future research, with the potential to become a strong baseline method in this field. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 528,419 |
2404.04452 | Vision transformers in domain adaptation and domain generalization: a
study of robustness | Deep learning models are often evaluated in scenarios where the data distribution is different from those used in the training and validation phases. The discrepancy presents a challenge for accurately predicting the performance of models once deployed on the target distribution. Domain adaptation and generalization are widely recognized as effective strategies for addressing such shifts, thereby ensuring reliable performance. The recent promising results in applying vision transformers in computer vision tasks, coupled with advancements in self-attention mechanisms, have demonstrated their significant potential for robustness and generalization in handling distribution shifts. Motivated by the increased interest from the research community, our paper investigates the deployment of vision transformers in domain adaptation and domain generalization scenarios. For domain adaptation methods, we categorize research into feature-level, instance-level, model-level adaptations, and hybrid approaches, along with other categorizations with respect to diverse strategies for enhancing domain adaptation. Similarly, for domain generalization, we categorize research into multi-domain learning, meta-learning, regularization techniques, and data augmentation strategies. We further classify diverse strategies in research, underscoring the various approaches researchers have taken to address distribution shifts by integrating vision transformers. The inclusion of comprehensive tables summarizing these categories is a distinct feature of our work, offering valuable insights for researchers. These findings highlight the versatility of vision transformers in managing distribution shifts, crucial for real-world applications, especially in critical safety and decision-making scenarios. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 444,648 |
2210.16643 | XNOR-FORMER: Learning Accurate Approximations in Long Speech
Transformers | Transformers are among the state of the art for many tasks in speech, vision, and natural language processing, among others. Self-attention, a crucial contributor to this performance, has quadratic computational complexity, which makes training on longer input sequences challenging. Prior work has produced state-of-the-art transformer variants with linear attention; however, current models sacrifice performance to achieve efficient implementations. In this work, we develop a novel linear transformer by examining the properties of the key-query product within self-attention. Our model outperforms state-of-the-art approaches on speech recognition and speech summarization, resulting in 1% absolute WER improvement on the Librispeech-100 speech recognition benchmark and a new INTERVIEW speech recognition benchmark, and 5 points on ROUGE for summarization with How2. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 327,407
2005.12439 | Personalized Fashion Recommendation from Personal Social Media Data: An
Item-to-Set Metric Learning Approach | With the growth of online shopping for fashion products, accurate fashion recommendation has become a critical problem. Meanwhile, social networks provide an open and new data source for personalized fashion analysis. In this work, we study the problem of personalized fashion recommendation from social media data, i.e. recommending new outfits to social media users that fit their fashion preferences. To this end, we present an item-to-set metric learning framework that learns to compute the similarity between a set of historical fashion items of a user to a new fashion item. To extract features from multi-modal street-view fashion items, we propose an embedding module that performs multi-modality feature extraction and cross-modality gated fusion. To validate the effectiveness of our approach, we collect a real-world social media dataset. Extensive experiments on the collected dataset show the superior performance of our proposed approach. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | true | 178,725 |
0903.4545 | Computer- and robot-assisted Medical Intervention | Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims, its different components and describes the place of robots in that context. The evolutions in terms of general design and control paradigms in the development of medical robots are presented and issues specific to that application domain are discussed. A view of existing systems, on-going developments and future trends is given. A case-study is detailed. Other types of robotic help in the medical environment (such as for assisting a handicapped person, for rehabilitation of a patient or for replacement of some damaged/suppressed limbs or organs) are out of the scope of this chapter. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 3,420 |
2308.16539 | On a Connection between Differential Games, Optimal Control, and
Energy-based Models for Multi-Agent Interactions | Game theory offers an interpretable mathematical framework for modeling multi-agent interactions. However, its applicability in real-world robotics applications is hindered by several challenges, such as unknown agents' preferences and goals. To address these challenges, we show a connection between differential games, optimal control, and energy-based models and demonstrate how existing approaches can be unified under our proposed Energy-based Potential Game formulation. Building upon this formulation, this work introduces a new end-to-end learning application that combines neural networks for game-parameter inference with a differentiable game-theoretic optimization layer, acting as an inductive bias. The experiments using simulated mobile robot pedestrian interactions and real-world automated driving data provide empirical evidence that the game-theoretic layer improves the predictive performance of various neural network backbones. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | true | false | false | true | 389,027 |
1910.05563 | On the expected behaviour of noise regularised deep neural networks as
Gaussian processes | Recent work has established the equivalence between deep neural networks and Gaussian processes (GPs), resulting in so-called neural network Gaussian processes (NNGPs). The behaviour of these models depends on the initialisation of the corresponding network. In this work, we consider the impact of noise regularisation (e.g. dropout) on NNGPs, and relate their behaviour to signal propagation theory in noise regularised deep neural networks. For ReLU activations, we find that the best performing NNGPs have kernel parameters that correspond to a recently proposed initialisation scheme for noise regularised ReLU networks. In addition, we show how the noise influences the covariance matrix of the NNGP, producing a stronger prior towards simple functions away from the training points. We verify our theoretical findings with experiments on MNIST and CIFAR-10 as well as on synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 149,102 |
1910.08108 | Enforcing Linearity in DNN succours Robustness and Adversarial Image
Generation | Recent studies on the adversarial vulnerability of neural networks have shown that models trained with the objective of minimizing an upper bound on the worst-case loss over all possible adversarial perturbations improve robustness against adversarial attacks. Besides exploiting the adversarial training framework, we show that enforcing a Deep Neural Network (DNN) to be linear in the transformed input and feature space improves robustness significantly. We also demonstrate that augmenting the objective function with a Local Lipschitz regularizer boosts the robustness of the model further. Our method outperforms the most sophisticated adversarial training methods and achieves state-of-the-art adversarial accuracy on the MNIST, CIFAR10 and SVHN datasets. In this paper, we also propose a novel adversarial image generation method by leveraging Inverse Representation Learning and the linearity aspect of an adversarially trained deep neural network classifier. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 149,776
0807.0042 | A Simple Converse Proof and a Unified Capacity Formula for Channels with
Input Constraints | Given the single-letter capacity formula and the converse proof of a channel without constraints, we provide a simple approach to extend the results for the same channel but with constraints. The resulting capacity formula is the minimum of a Lagrange dual function. It gives an unified formula in the sense that it works regardless whether the problem is convex. If the problem is non-convex, we show that the capacity can be larger than the formula obtained by the naive approach of imposing constraints on the maximization in the capacity formula of the case without the constraints. The extension on the converse proof is simply by adding a term involving the Lagrange multiplier and the constraints. The rest of the proof does not need to be changed. We name the proof method the Lagrangian Converse Proof. In contrast, traditional approaches need to construct a better input distribution for convex problems or need to introduce a time sharing variable for non-convex problems. We illustrate the Lagrangian Converse Proof for three channels, the classic discrete time memoryless channel, the channel with non-causal channel-state information at the transmitter, the channel with limited channel-state feedback. The extension to the rate distortion theory is also provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,019 |
2103.09942 | Machine Vision based Sample-Tube Localization for Mars Sample Return | A potential Mars Sample Return (MSR) architecture is being jointly studied by NASA and ESA. As currently envisioned, the MSR campaign consists of a series of 3 missions: sample cache, fetch and return to Earth. In this paper, we focus on the fetch part of the MSR, and more specifically the problem of autonomously detecting and localizing sample tubes deposited on the Martian surface. Towards this end, we study two machine-vision based approaches: First, a geometry-driven approach based on template matching that uses hard-coded filters and a 3D shape model of the tube; and second, a data-driven approach based on convolutional neural networks (CNNs) and learned features. Furthermore, we present a large benchmark dataset of sample-tube images, collected in representative outdoor environments and annotated with ground truth segmentation masks and locations. The dataset was acquired systematically across different terrain, illumination conditions and dust-coverage; and benchmarking was performed to study the feasibility of each approach, their relative strengths and weaknesses, and robustness in the presence of adverse environmental conditions. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 225,295 |
2406.07848 | Multi-agent Reinforcement Learning with Deep Networks for Diverse
Q-Vectors | Multi-agent reinforcement learning (MARL) has become a significant research topic due to its ability to facilitate learning in complex environments. In multi-agent tasks, the state-action value, commonly referred to as the Q-value, can vary among agents because of their individual rewards, resulting in a Q-vector. Determining an optimal policy is challenging, as it involves more than just maximizing a single Q-value. Various optimal policies, such as a Nash equilibrium, have been studied in this context. Algorithms like Nash Q-learning and Nash Actor-Critic have shown effectiveness in these scenarios. This paper extends this research by proposing a deep Q-networks (DQN) algorithm capable of learning various Q-vectors using Max, Nash, and Maximin strategies. The effectiveness of this approach is demonstrated in an environment where dual robotic arms collaborate to lift a pot. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 463,238 |
2410.08553 | Balancing Innovation and Privacy: Data Security Strategies in Natural
Language Processing Applications | This research addresses privacy protection in Natural Language Processing (NLP) by introducing a novel algorithm based on differential privacy, aimed at safeguarding user data in common applications such as chatbots, sentiment analysis, and machine translation. With the widespread application of NLP technology, the security and privacy protection of user data have become important issues that need to be solved urgently. This paper proposes a new privacy protection algorithm designed to effectively prevent the leakage of user sensitive information. By introducing a differential privacy mechanism, our model ensures the accuracy and reliability of data analysis results while adding random noise. This method not only reduces the risk caused by data leakage but also achieves effective processing of data while protecting user privacy. Compared to traditional privacy methods like data anonymization and homomorphic encryption, our approach offers significant advantages in terms of computational efficiency and scalability while maintaining high accuracy in data analysis. The proposed algorithm's efficacy is demonstrated through performance metrics such as accuracy (0.89), precision (0.85), and recall (0.88), outperforming other methods in balancing privacy and utility. As privacy protection regulations become increasingly stringent, enterprises and developers must take effective measures to deal with privacy risks. Our research provides an important reference for the application of privacy protection technology in the field of NLP, emphasizing the need to achieve a balance between technological innovation and user privacy. In the future, with the continuous advancement of technology, privacy protection will become a core element of data-driven applications and promote the healthy development of the entire industry. | false | false | false | false | true | false | false | false | true | false | false | false | true | false | false | false | false | false | 497,165
2011.06752 | Critic PI2: Master Continuous Planning via Policy Improvement with Path
Integrals and Deep Actor-Critic Reinforcement Learning | Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods from AlphaGo to MuZero have enjoyed huge success in discrete domains, such as chess and Go. Unfortunately, in real-world applications like robot control and the inverted pendulum, whose action space is normally continuous, those tree-based planning techniques struggle. To address these limitations, in this paper, we present a novel model-based reinforcement learning framework called Critic PI2, which combines the benefits of trajectory optimization, deep actor-critic learning, and model-based reinforcement learning. Our method is evaluated for inverted pendulum models with applicability to many continuous control systems. Extensive experiments demonstrate that Critic PI2 achieved a new state of the art in a range of challenging continuous domains. Furthermore, we show that planning with a critic significantly increases the sample efficiency and real-time performance. Our work opens a new direction toward learning the components of a model-based planning system and how to use them. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | true | false | false | 206,329
1802.00393 | Large Scale Crowdsourcing and Characterization of Twitter Abusive
Behavior | In recent years, offensive, abusive and hateful language, sexism, racism and other types of aggressive and cyberbullying behavior have been manifesting with increased frequency, and in many online social media platforms. In fact, past scientific work focused on studying these forms in popular media, such as Facebook and Twitter. Building on such work, we present an 8-month study of the various forms of abusive behavior on Twitter, in a holistic fashion. Departing from past work, we examine a wide variety of labeling schemes, which cover different forms of abusive behavior, at the same time. We propose an incremental and iterative methodology, that utilizes the power of crowdsourcing to annotate a large scale collection of tweets with a set of abuse-related labels. In fact, by applying our methodology including statistical analysis for label merging or elimination, we identify a reduced but robust set of labels. Finally, we offer a first overview and findings of our collected and annotated dataset of 100 thousand tweets, which we make publicly available for further scientific exploration. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 89,407 |
2002.06345 | Panoptic Feature Fusion Net: A Novel Instance Segmentation Paradigm for
Biomedical and Biological Images | Instance segmentation is an important task for biomedical and biological image analysis. Due to the complicated background components, the high variability of object appearances, numerous overlapping objects, and ambiguous object boundaries, this task still remains challenging. Recently, deep learning based methods have been widely employed to solve these problems and can be categorized into proposal-free and proposal-based methods. However, both proposal-free and proposal-based methods suffer from information loss, as they focus on either global-level semantic or local-level instance features. To tackle this issue, we present a Panoptic Feature Fusion Net (PFFNet) that unifies the semantic and instance features in this work. Specifically, our proposed PFFNet contains a residual attention feature fusion mechanism to incorporate the instance prediction with the semantic features, in order to facilitate the semantic contextual information learning in the instance branch. Then, a mask quality sub-branch is designed to align the confidence score of each object with the quality of the mask prediction. Furthermore, a consistency regularization mechanism is designed between the semantic segmentation tasks in the semantic and instance branches, for the robust learning of both tasks. Extensive experiments demonstrate the effectiveness of our proposed PFFNet, which outperforms several state-of-the-art methods on various biomedical and biological datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 164,164 |
2404.18228 | TextGram: Towards a better domain-adaptive pretraining | For green AI, it is crucial to measure and reduce the carbon footprint emitted during the training of large language models. In NLP, performing pre-training on Transformer models requires significant computational resources. This pre-training involves using a large amount of text data to gain prior knowledge for performing downstream tasks. Thus, it is important that we select the correct data in the form of domain-specific data from this vast corpus to achieve optimum results aligned with our domain-specific tasks. While training on large unsupervised data is expensive, it can be optimized by performing a data selection step before pre-training. Selecting important data reduces the space overhead and the substantial amount of time required to pre-train the model while maintaining constant accuracy. We investigate the existing selection strategies and propose our own domain-adaptive data selection method - TextGram - that effectively selects essential data from large corpora. We compare and evaluate the results of fine-tuned models for the text classification task with and without data selection. We show that the proposed strategy works better compared to other selection methods. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 450,180
2104.11554 | Sketch-based Normal Map Generation with Geometric Sampling | Normal map is an important and efficient way to represent complex 3D models. A designer may benefit from the auto-generation of high quality and accurate normal maps from freehand sketches in 3D content creation. This paper proposes a deep generative model for generating normal maps from users' sketches with geometric sampling. Our generative model is based on a Conditional Generative Adversarial Network with the curvature-sensitive points sampling of conditional masks. This sampling process can help eliminate the ambiguity of generation results as network input. In addition, we adopted a U-Net structure discriminator to help the generator be better trained. It is verified that the proposed framework can generate more accurate normal maps. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 231,946
2209.03615 | IMAP: Individual huMAn mobility Patterns visualizing platform | Understanding human mobility is essential for the development of smart cities and social behavior research. Human mobility models may be used in numerous applications, including pandemic control, urban planning, and traffic management. The existing models' accuracy in predicting users' mobility patterns is less than 25%. The low accuracy may be justified by the flexible nature of the human movement. Indeed, humans are not rigid in their daily movement. In addition, the rigid mobility models may result in missing the hidden regularities in users' records. Thus, we propose a novel perspective to study and analyze human mobility patterns and capture their flexibility. Typically, the mobility patterns are represented by a sequence of locations. We propose to define the mobility patterns by abstracting these locations into a set of places. Labeling these locations will allow us to detect close-to-reality hidden patterns. We present IMAP, an Individual huMAn mobility Patterns visualizing platform. Our platform enables users to visualize a graph of the places they visited based on their history records. In addition, our platform displays the most frequent mobility patterns computed using a modified PrefixSpan approach. | true | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 316,543 |
2306.09129 | Deep Learning for Energy Time-Series Analysis and Forecasting | Energy time-series analysis describes the process of analyzing past energy observations and possibly external factors so as to predict the future. Different tasks are involved in the general field of energy time-series analysis and forecasting, with electric load demand forecasting, personalized energy consumption forecasting, as well as renewable energy generation forecasting being among the most common ones. Following the exceptional performance of Deep Learning (DL) in a broad area of vision tasks, DL models have successfully been utilized in time-series forecasting tasks. This paper aims to provide insight into various DL methods geared towards improving the performance in energy time-series forecasting tasks, with special emphasis on the Greek Energy Market, and equip the reader with the necessary knowledge to apply these methods in practice. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 373,690
2311.00055 | Rethinking Pre-Training in Tabular Data: A Neighborhood Embedding
Perspective | Pre-training is prevalent in deep learning for vision and text data, leveraging knowledge from other datasets to enhance downstream tasks. However, for tabular data, the inherent heterogeneity in attribute and label spaces across datasets complicates the learning of shareable knowledge. We propose Tabular data Pre-Training via Meta-representation (TabPTM), aiming to pre-train a general tabular model over diverse datasets. The core idea is to embed data instances into a shared feature space, where each instance is represented by its distance to a fixed number of nearest neighbors and their labels. This ''meta-representation'' transforms heterogeneous tasks into homogeneous local prediction problems, enabling the model to infer labels (or scores for each label) based on neighborhood information. As a result, the pre-trained TabPTM can be applied directly to new datasets, regardless of their diverse attributes and labels, without further fine-tuning. Extensive experiments on 101 datasets confirm TabPTM's effectiveness in both classification and regression tasks, with and without fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,494 |
2006.07862 | Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits | We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. We compare our results with the state-of-the-art, highlighting a number of key improvements. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,980 |
2204.03354 | Predictive coding and stochastic resonance as fundamental principles of
auditory perception | How is information processed in the brain during perception? Mechanistic insight is achieved only when experiments are employed to test formal or computational models. In analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying auditory perception. With a special focus on tinnitus -- as the prime example of auditory phantom perception -- we review recent work at the intersection of artificial intelligence, psychology, and neuroscience. In particular, we discuss why everyone with tinnitus suffers from hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that the increase of sensory precision due to Bayesian inference could be caused by intrinsic neural noise and lead to a prediction error in the cerebral cortex. Hence, two fundamental processing principles - being ubiquitous in the brain - provide the most explanatory power for the emergence of tinnitus: predictive coding as a top-down, and stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles play a crucial role in healthy auditory perception. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 290,277 |
cs/0609097 | Traveling Salesperson Problems for a double integrator | In this paper we propose some novel path planning strategies for a double integrator with bounded velocity and bounded control inputs. First, we study the following version of the Traveling Salesperson Problem (TSP): given a set of points in $\real^d$, find the fastest tour over the point set for a double integrator. We first give asymptotic bounds on the time taken to complete such a tour in the worst-case. Then, we study a stochastic version of the TSP for a double integrator where the points are randomly sampled from a uniform distribution in a compact environment in $\real^2$ and $\real^3$. We propose novel algorithms that perform within a constant factor of the optimal strategy with high probability. Lastly, we study a dynamic TSP: given a stochastic process that generates targets, is there a policy which guarantees that the number of unvisited targets does not diverge over time? If such stable policies exist, what is the minimum wait for a target? We propose novel stabilizing receding-horizon algorithms whose performances are within a constant factor from the optimum with high probability, in $\real^2$ as well as $\real^3$. We also argue that these algorithms give identical performances for a particular nonholonomic vehicle, the Dubins vehicle. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 539,710
1707.06002 | Argotario: Computational Argumentation Meets Serious Games | An important skill in critical thinking and argumentation is the ability to spot and recognize fallacies. Fallacious arguments, omnipresent in argumentative discourse, can be deceptive, manipulative, or simply leading to `wrong moves' in a discussion. Despite their importance, argumentation scholars and NLP researchers with focus on argumentation quality have not yet investigated fallacies empirically. The nonexistence of resources dealing with fallacious argumentation calls for scalable approaches to data acquisition and annotation, for which the serious games methodology offers an appealing, yet unexplored, alternative. We present Argotario, a serious game that deals with fallacies in everyday argumentation. Argotario is a multilingual, open-source, platform-independent application with strong educational aspects, accessible at www.argotario.net. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 77,338 |
2408.12814 | From Few to More: Scribble-based Medical Image Segmentation via Masked
Context Modeling and Continuous Pseudo Labels | Scribble-based weakly supervised segmentation techniques offer comparable performance to fully supervised methods while significantly reducing annotation costs, making them an appealing alternative. Existing methods often rely on auxiliary tasks to enforce semantic consistency and use hard pseudo labels for supervision. However, these methods often overlook the unique requirements of models trained with sparse annotations. Since the model must predict pixel-wise segmentation maps with limited annotations, the ability to handle varying levels of annotation richness is critical. In this paper, we adopt the principle of `from few to more' and propose MaCo, a weakly supervised framework designed for medical image segmentation. MaCo employs masked context modeling (MCM) and continuous pseudo labels (CPL). MCM uses an attention-based masking strategy to disrupt the input image, compelling the model's predictions to remain consistent with those of the original image. CPL converts scribble annotations into continuous pixel-wise labels by applying an exponential decay function to distance maps, resulting in continuous maps that represent the confidence of each pixel belonging to a specific category, rather than using hard pseudo labels. We evaluate MaCo against other weakly supervised methods using three public datasets. The results indicate that MaCo outperforms competing methods across all datasets, setting a new record in weakly supervised medical image segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,898 |
1205.2601 | Most Relevant Explanation: Properties, Algorithms, and Evaluations | Most Relevant Explanation (MRE) is a method for finding multivariate explanations for given evidence in Bayesian networks [12]. This paper studies the theoretical properties of MRE and develops an algorithm for finding multiple top MRE solutions. Our study shows that MRE relies on an implicit soft relevance measure in automatically identifying the most relevant target variables and pruning less relevant variables from an explanation. The soft measure also enables MRE to capture the intuitive phenomenon of explaining away encoded in Bayesian networks. Furthermore, our study shows that the solution space of MRE has a special lattice structure which yields interesting dominance relations among the solutions. A K-MRE algorithm based on these dominance relations is developed for generating a set of top solutions that are more representative. Our empirical results show that MRE methods are promising approaches for explanation in Bayesian networks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 15,909 |
1409.4237 | On Analysis And Generation Of Biologically Important Boolean Functions | Boolean networks are used to model biological networks such as gene regulatory networks. Often Boolean networks show very chaotic behavior which is sensitive to any small perturbations. In order to reduce the chaotic behavior and to attain stability in the gene regulatory network, nested canalizing functions (NCFs) are best suited. NCFs and their variants have a wide range of applications in systems biology. Previously, much work was done on the application of canalizing functions, but there were fewer methods to check if an arbitrary Boolean function is canalizing or not. In this paper, this problem has been solved by using the Karnaugh map, and it has also been shown that when the canalizing functions of n variables are given, all the canalizing functions of n+1 variables can be generated by the method of concatenation. In this paper we have uniquely identified the number of NCFs having a particular Hamming distance (H.D.) generated by each variable x as the starting canalizing input. Partially nested canalizing functions of 4 variables have also been studied in this paper. Keywords: Karnaugh map, canalizing function, nested canalizing function, partially nested canalizing function, concatenation | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 36,051
2212.02931 | Leveraging Different Learning Styles for Improved Knowledge Distillation
in Biomedical Imaging | Learning style refers to a type of training mechanism adopted by an individual to gain new knowledge. As suggested by the VARK model, humans have different learning preferences, like Visual (V), Auditory (A), Read/Write (R), and Kinesthetic (K), for acquiring and effectively processing information. Our work endeavors to leverage this concept of knowledge diversification to improve the performance of model compression techniques like Knowledge Distillation (KD) and Mutual Learning (ML). Consequently, we use a single-teacher and two-student network in a unified framework that not only allows for the transfer of knowledge from teacher to students (KD) but also encourages collaborative learning between students (ML). Unlike the conventional approach, where the teacher shares the same knowledge in the form of predictions or feature representations with the student network, our proposed approach employs a more diversified strategy by training one student with predictions and the other with feature maps from the teacher. We further extend this knowledge diversification by facilitating the exchange of predictions and feature maps between the two student networks, enriching their learning experiences. We have conducted comprehensive experiments with three benchmark datasets for both classification and segmentation tasks using two different network architecture combinations. These experimental results demonstrate that knowledge diversification in a combined KD and ML framework outperforms conventional KD or ML techniques (with similar network configuration) that only use predictions with an average improvement of 2%. Furthermore, consistent improvement in performance across different tasks, with various network architectures, and over state-of-the-art techniques establishes the robustness and generalizability of the proposed model. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 334,937
2208.05577 | Reducing Retraining by Recycling Parameter-Efficient Prompts | Parameter-efficient methods are able to use a single frozen pre-trained large language model (LLM) to perform many tasks by learning task-specific soft prompts that modulate model behavior when concatenated to the input text. However, these learned prompts are tightly coupled to a given frozen model -- if the model is updated, corresponding new prompts need to be obtained. In this work, we propose and investigate several approaches to "Prompt Recycling" where a prompt trained on a source model is transformed to work with the new target model. Our methods do not rely on supervised pairs of prompts, task-specific data, or training updates with the target model, which would be just as costly as re-tuning prompts with the target model from scratch. We show that recycling between models is possible (our best settings are able to successfully recycle $88.9\%$ of prompts, producing a prompt that out-performs baselines), but significant performance headroom remains, requiring improved recycling techniques. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 312,434
2112.05559 | Collaborative Learning over Wireless Networks: An Introductory Overview | In this chapter, we will mainly focus on collaborative training across wireless devices. Training a ML model is equivalent to solving an optimization problem, and many distributed optimization algorithms have been developed over the last decades. These distributed ML algorithms provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local. This addresses, to some extent, the privacy concern. They also provide computational scalability as they allow exploiting computational resources distributed across many edge devices. However, in practice, this does not directly lead to a linear gain in the overall learning speed with the number of devices. This is partly due to the communication bottleneck limiting the overall computation speed. Additionally, wireless devices are highly heterogeneous in their computational capabilities, and both their computation speed and communication rate can be highly time-varying due to physical factors. Therefore, distributed learning algorithms, particularly those to be implemented at the wireless network edge, must be carefully designed taking into account the impact of the time-varying communication network as well as the heterogeneous and stochastic computation capabilities of devices. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 270,877
2111.14821 | End-to-End Referring Video Object Segmentation with Multimodal
Transformers | The referring video object segmentation task (RVOS) involves segmentation of a text-referred object instance in the frames of a given video. Due to the complex nature of this multimodal task, which combines text reasoning, video understanding, instance segmentation and tracking, existing approaches typically rely on sophisticated pipelines in order to tackle it. In this paper, we propose a simple Transformer-based approach to RVOS. Our framework, termed Multimodal Tracking Transformer (MTTR), models the RVOS task as a sequence prediction problem. Following recent advancements in computer vision and natural language processing, MTTR is based on the realization that video and text can be processed together effectively and elegantly by a single multimodal Transformer model. MTTR is end-to-end trainable, free of text-related inductive bias components and requires no additional mask-refinement post-processing steps. As such, it simplifies the RVOS pipeline considerably compared to existing methods. Evaluation on standard benchmarks reveals that MTTR significantly outperforms previous art across multiple metrics. In particular, MTTR shows impressive +5.7 and +5.0 mAP gains on the A2D-Sentences and JHMDB-Sentences datasets respectively, while processing 76 frames per second. In addition, we report strong results on the public validation set of Refer-YouTube-VOS, a more challenging RVOS dataset that has yet to receive the attention of researchers. The code to reproduce our experiments is available at https://github.com/mttr2021/MTTR | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 268,718 |
0810.3451 | The many faces of optimism - Extended version | The exploration-exploitation dilemma has been an intriguing and unsolved problem within the framework of reinforcement learning. "Optimism in the face of uncertainty" and model building play central roles in advanced exploration methods. Here, we integrate several concepts and obtain a fast and simple algorithm. We show that the proposed algorithm finds a near-optimal policy in polynomial time, and give experimental evidence that it is robust and efficient compared to its ascendants. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 2,526 |
2401.07606 | RedEx: Beyond Fixed Representation Methods via Convex Optimization | Optimizing neural networks is a difficult task which is still not well understood. On the other hand, fixed representation methods such as kernels and random features have provable optimization guarantees but inferior performance due to their inherent inability to learn the representations. In this paper, we aim at bridging this gap by presenting a novel architecture called RedEx (Reduced Expander Extractor) that is as expressive as neural networks and can also be trained in a layer-wise fashion via a convex program with semi-definite constraints and optimization guarantees. We also show that RedEx provably surpasses fixed representation methods, in the sense that it can efficiently learn a family of target functions which fixed representation methods cannot. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 421,607
2308.04259 | Generalized Forgetting Recursive Least Squares: Stability and Robustness
Guarantees | This work presents generalized forgetting recursive least squares (GF-RLS), a generalization of recursive least squares (RLS) that encompasses many extensions of RLS as special cases. First, sufficient conditions are presented for the 1) Lyapunov stability, 2) uniform Lyapunov stability, 3) global asymptotic stability, and 4) global uniform exponential stability of parameter estimation error in GF-RLS when estimating fixed parameters without noise. Second, robustness guarantees are derived for the estimation of time-varying parameters in the presence of measurement noise and regressor noise. These robustness guarantees are presented in terms of global uniform ultimate boundedness of the parameter estimation error. A specialization of this result gives a bound to the asymptotic bias of least squares estimators in the errors-in-variables problem. Lastly, a survey is presented to show how GF-RLS can be used to analyze various extensions of RLS from the literature. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 384,346 |
2002.01924 | Explicit Wiretap Channel Codes via Source Coding, Universal Hashing, and
Distribution Approximation, When the Channels' Statistics are Uncertain | We consider wiretap channels with uncertainty on the eavesdropper channel under (i) noisy blockwise type II, (ii) compound, or (iii) arbitrarily varying models. We present explicit wiretap codes that can handle these models in a unified manner and only rely on three primitives, namely source coding with side information, universal hashing, and distribution approximation. Our explicit wiretap codes achieve the best known single-letter achievable rates, previously obtained non-constructively, for the models considered. Our results are obtained for strong secrecy, do not require a pre-shared secret between the legitimate users, and do not require any symmetry properties on the channel. An extension of our results to compound main channels is also derived via new capacity-achieving polar coding schemes for compound settings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 162,776 |
2008.03720 | Disentangled Multidimensional Metric Learning for Music Similarity | Music similarity search is useful for a variety of creative tasks such as replacing one music recording with another recording with a similar "feel", a common task in video editing. For this task, it is typically necessary to define a similarity metric to compare one recording to another. Music similarity, however, is hard to define and depends on multiple simultaneous notions of similarity (i.e. genre, mood, instrument, tempo). While prior work ignores this issue, we embrace this idea and introduce the concept of multidimensional similarity and unify both global and specialized similarity metrics into a single, semantically disentangled multidimensional similarity metric. To do so, we adapt a variant of deep metric learning called conditional similarity networks to the audio domain and extend it using track-based information to control the specificity of our model. We evaluate our method and show that our single, multidimensional model outperforms both specialized similarity spaces and alternative baselines. We also run a user study and show that our approach is favored by human annotators as well. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 191,012
2206.00489 | Attack-Agnostic Adversarial Detection | The growing number of adversarial attacks in recent years gives attackers an advantage over defenders, as defenders must train detectors after knowing the types of attacks, and many models need to be maintained to ensure good performance in detecting any upcoming attacks. We propose a way to end the tug-of-war between attackers and defenders by treating adversarial attack detection as an anomaly detection problem so that the detector is agnostic to the attack. We quantify the statistical deviation caused by adversarial perturbations in two aspects. The Least Significant Component Feature (LSCF) quantifies the deviation of adversarial examples from the statistics of benign samples and Hessian Feature (HF) reflects how adversarial examples distort the landscape of the model's optima by measuring the local loss curvature. Empirical results show that our method can achieve an overall ROC AUC of 94.9%, 89.7%, and 94.6% on CIFAR10, CIFAR100, and SVHN, respectively, and has comparable performance to adversarial detectors trained with adversarial examples on most of the attacks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 300,141 |
2309.07550 | Naturalistic Robot Arm Trajectory Generation via Representation Learning | The integration of manipulator robots in household environments suggests a need for more predictable and human-like robot motion. This holds especially true for wheelchair-mounted assistive robots that can support the independence of people with paralysis. One method of generating naturalistic motion trajectories is via the imitation of human demonstrators. This paper explores a self-supervised imitation learning method using an autoregressive spatio-temporal graph neural network for an assistive drinking task. We address learning from diverse human motion trajectory data that were captured via wearable IMU sensors on a human arm as the action-free task demonstrations. Observed arm motion data from several participants is used to generate natural and functional drinking motion trajectories for a UR5e robot arm. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 391,825 |
2308.12264 | Enhancing Energy-Awareness in Deep Learning through Fine-Grained Energy
Measurement | With the increasing usage, scale, and complexity of Deep Learning (DL) models, their rapidly growing energy consumption has become a critical concern. Promoting green development and energy awareness at different granularities is the need of the hour to limit carbon emissions of DL systems. However, the lack of standard and repeatable tools to accurately measure and optimize energy consumption at a fine granularity (e.g., at method level) hinders progress in this area. This paper introduces FECoM (Fine-grained Energy Consumption Meter), a framework for fine-grained DL energy consumption measurement. FECoM enables researchers and developers to profile DL APIs from an energy perspective. FECoM addresses the challenges of measuring energy consumption at a fine-grained level by using static instrumentation and considering various factors, including computational load and temperature stability. We assess FECoM's capability to measure fine-grained energy consumption for one of the most popular open-source DL frameworks, namely TensorFlow. Using FECoM, we also investigate the impact of parameter size and execution time on energy consumption, enriching our understanding of TensorFlow APIs' energy profiles. Furthermore, we elaborate on the considerations, issues, and challenges that one needs to consider while designing and implementing a fine-grained energy consumption measurement tool. This work will facilitate further advances in DL energy measurement and the development of energy-aware practices for DL systems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 387,476
2212.09429 | On the Complexity of Representation Learning in Contextual Linear
Bandits | In contextual linear bandits, the reward function is assumed to be a linear combination of an unknown reward vector and a given embedding of context-arm pairs. In practice, the embedding is often learned at the same time as the reward vector, thus leading to an online representation learning problem. Existing approaches to representation learning in contextual bandits are either very generic (e.g., model-selection techniques or algorithms for learning with arbitrary function classes) or specialized to particular structures (e.g., nested features or representations with certain spectral properties). As a result, the understanding of the cost of representation learning in contextual linear bandits is still limited. In this paper, we take a systematic approach to the problem and provide a comprehensive study through an instance-dependent perspective. We show that representation learning is fundamentally more complex than linear bandits (i.e., learning with a given representation). In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set, while we show cases where it can be arbitrarily harder. We complement this result with an extensive discussion of how it relates to existing literature and we illustrate positive instances where representation learning is as complex as learning with a fixed representation and where sub-logarithmic regret is achievable. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 337,109
2206.04405 | Conformal Off-Policy Prediction in Contextual Bandits | Most off-policy evaluation methods for contextual bandits have focused on the expected outcome of a policy, which is estimated via methods that at best provide only asymptotic guarantees. However, in many applications, the expectation may not be the best measure of performance as it does not capture the variability of the outcome. In addition, particularly in safety-critical settings, stronger guarantees than asymptotic correctness may be required. To address these limitations, we consider a novel application of conformal prediction to contextual bandits. Given data collected under a behavioral policy, we propose \emph{conformal off-policy prediction} (COPP), which can output reliable predictive intervals for the outcome under a new target policy. We provide theoretical finite-sample guarantees without making any additional assumptions beyond the standard contextual bandit setup, and empirically demonstrate the utility of COPP compared with existing methods on synthetic and real-world data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,614 |
1708.08142 | Study of Set-Membership Kernel Adaptive Algorithms and Applications | Adaptive algorithms based on kernel structures have been a topic of significant research over the past few years. Their main advantage is that they form a family of universal approximators, offering an elegant solution to problems with nonlinearities. Nevertheless, these methods deal with kernel expansions, creating a growing structure, also known as a dictionary, whose size depends on the number of new inputs. In this paper we derive the set-membership kernel-based normalized least-mean square (SM-NKLMS) algorithm, which is capable of limiting the size of the dictionary created in stationary environments. As an extension, we also derive the set-membership kernelized affine projection (SM-KAP) algorithm. Finally, several experiments are presented to compare the proposed SM-NKLMS and SM-KAP algorithms with existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 79,599
1805.10105 | Effects of Social Bots in the Iran-Debate on Twitter | 2018 started with massive protests in Iran, bringing back impressions of the so-called "Arab Spring" and its revolutionary impact on the Maghreb states, Syria, and Egypt. Many reports and scientific examinations considered online social networks (OSNs) such as Twitter or Facebook to play a critical role in the opinion making of the people behind those protests. Besides that, there is also evidence of directed manipulation of opinion with the help of social bots and fake accounts. It is therefore natural to ask whether there is an attempt to manipulate the opinion-making process related to the Iranian protests in OSNs by employing social bots, and how such manipulation affects the discourse as a whole. Based on a sample of ca. 900,000 tweets relating to the topic "Iran", we show that there are Twitter profiles that have to be considered social bot accounts. Using text mining methods, we show that these social bots are responsible for negative sentiment in the debate. We thereby illustrate a detectable effect of social bots on political discussions on Twitter. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 98,580
1612.09534 | Channel Measurements and Models for High-Speed Train Wireless
Communication Systems in Tunnel Scenarios: A Survey | The rapid developments of high-speed trains (HSTs) introduce new challenges to HST wireless communication systems. Realistic HST channel models play a critical role in designing and evaluating HST communication systems. Due to the length limitation, bounding of tunnel itself, and waveguide effect, channel characteristics in tunnel scenarios are very different from those in other HST scenarios. Therefore, accurate tunnel channel models considering both large-scale and small-scale fading characteristics are essential for HST communication systems. Moreover, certain characteristics of tunnel channels have not been investigated sufficiently. This article provides a comprehensive review of the measurement campaigns in tunnels and presents some tunnel channel models using various modeling methods. Finally, future directions in HST tunnel channel measurements and modeling are discussed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 66,202 |
1712.06365 | 'Indifference' methods for managing agent rewards | `Indifference' refers to a class of methods used to control reward based agents. Indifference techniques aim to achieve one or more of three distinct goals: rewards dependent on certain events (without the agent being motivated to manipulate the probability of those events), effective disbelief (where agents behave as if particular events could never happen), and seamless transition from one reward function to another (with the agent acting as if this change is unanticipated). This paper presents several methods for achieving these goals in the POMDP setting, establishing their uses, strengths, and requirements. These methods of control work even when the implications of the agent's reward are otherwise not fully understood. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 86,879 |
2303.10311 | On the rise of fear speech in online social media | Social media platforms have recently been heavily moderated to prevent the spread of online hate speech, which is usually rife with toxic words and directed toward an individual or a community. Owing to such heavy moderation, newer and more subtle techniques are being deployed. One of the most striking among these is fear speech. Fear speech, as the name suggests, attempts to incite fear about a target community. Although subtle, it can be highly effective, often pushing communities toward physical conflict. Therefore, understanding its prevalence in social media is of paramount importance. This article presents a large-scale study of the prevalence of fear speech, based on 400K fear speech and over 700K hate speech posts collected from Gab.com. Remarkably, users posting a large number of fear speech posts accrue more followers and occupy more central positions in social networks than users posting a large number of hate speech posts. They can also reach benign users more effectively than hate speech users through replies, reposts, and mentions. This connects to the fact that, unlike hate speech, fear speech has almost zero toxic content, making it look plausible. Moreover, while fear speech topics mostly portray a community as a perpetrator using a (fake) chain of argumentation, hate speech topics hurl direct multitarget insults, thus pointing to why general users could be more gullible to fear speech. Our findings transcend even to other platforms (Twitter and Facebook) and thus necessitate sophisticated moderation policies and mass awareness to combat fear speech. | false | false | false | true | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 352,389
1412.3697 | Hybrid recommendation methods in complex networks | We propose two new recommendation methods, based on the appropriate normalization of existing similarity measures and on a convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three relevant data sets and compare their performance with several recommendation systems recently proposed in the literature. We show that the proposed similarity measures attain an improvement in performance of up to 20\% with respect to existing non-parametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, and we find that one of the two recommendation algorithms introduced here can systematically outperform the others on noisy data sets. | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 38,311
2408.08892 | Leveraging Large Language Models for Enhanced Process Model
Comprehension | In Business Process Management (BPM), effectively comprehending process models is crucial yet poses significant challenges, particularly as organizations scale and processes become more complex. This paper introduces a novel framework utilizing the advanced capabilities of Large Language Models (LLMs) to enhance the interpretability of complex process models. We present different methods for abstracting business process models into a format accessible to LLMs, and we implement advanced prompting strategies specifically designed to optimize LLM performance within our framework. Additionally, we present a tool, AIPA, that implements our proposed framework and allows for conversational process querying. We evaluate our framework and tool by i) an automatic evaluation comparing different LLMs, model abstractions, and prompting strategies and ii) a user study designed to assess AIPA's effectiveness comprehensively. Results demonstrate our framework's ability to improve the accessibility and interpretability of process models, pioneering new pathways for integrating AI technologies into the BPM field. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 481,193 |
1411.4044 | Benchmarking DataStax Enterprise/Cassandra with HiBench | This report evaluates the new analytical capabilities of DataStax Enterprise (DSE) [1] through the use of standard Hadoop workloads. In particular, we run experiments with CPU and I/O bound micro-benchmarks as well as OLAP-style analytical query workloads. The performed tests should show that DSE is capable of successfully executing Hadoop applications without the need to adapt them for the underlying Cassandra distributed storage system [2]. Due to the Cassandra File System (CFS) [3], which supports the Hadoop Distributed File System API, Hadoop stack applications should seamlessly run in DSE. The report is structured as follows: Section 2 provides a brief description of the technologies involved in our study. An overview of our used hardware and software components of the experimental environment is given in Section 3. Our benchmark methodology is defined in Section 4. The performed experiments together with the evaluation of the results are presented in Section 5. Finally, Section 6 concludes with lessons learned. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 37,570 |
1808.06148 | Generalized Bregman and Jensen divergences which include some
f-divergences | In this paper, we introduce new classes of divergences by extending the definitions of the Bregman divergence and the skew Jensen divergence. These new divergence classes (g-Bregman divergence and skew g-Jensen divergence) satisfy some properties similar to the Bregman or skew Jensen divergence. We show these g-divergences include divergences which belong to a class of f-divergence (the Hellinger distance, the chi-square divergence and the alpha-divergence in addition to the Kullback-Leibler divergence). Moreover, we derive an inequality between the g-Bregman divergence and the skew g-Jensen divergence and show this inequality is a generalization of Lin's inequality. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 105,474 |
2408.09908 | $p$SVM: Soft-margin SVMs with $p$-norm Hinge Loss | Support Vector Machines (SVMs) based on hinge loss have been extensively discussed and applied to various binary classification tasks. These SVMs achieve a balance between margin maximization and the minimization of slack due to outliers. Although many efforts have been dedicated to enhancing the performance of SVMs with hinge loss, studies on $p$SVMs, soft-margin SVMs with $p$-norm hinge loss, remain relatively scarce. In this paper, we explore the properties, performance, and training algorithms of $p$SVMs. We first derive the generalization bound of $p$SVMs, then formulate the dual optimization problem, comparing it with the traditional approach. Furthermore, we discuss a generalized version of the Sequential Minimal Optimization (SMO) algorithm, $p$SMO, to train our $p$SVM model. Comparative experiments on various datasets, including binary and multi-class classification tasks, demonstrate the effectiveness and advantages of our $p$SVM model and the $p$SMO method. Code is available at https://github.com/CoderBak/pSVM. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 481,641 |
0805.4112 | On the entropy and log-concavity of compound Poisson measures | Motivated, in part, by the desire to develop an information-theoretic foundation for compound Poisson approximation limit theorems (analogous to the corresponding developments for the central limit theorem and for simple Poisson approximation), this work examines sufficient conditions under which the compound Poisson distribution has maximal entropy within a natural class of probability measures on the nonnegative integers. We show that the natural analog of the Poisson maximum entropy property remains valid if the measures under consideration are log-concave, but that it fails in general. A parallel maximum entropy result is established for the family of compound binomial measures. The proofs are largely based on ideas related to the semigroup approach introduced in recent work by Johnson for the Poisson family. Sufficient conditions are given for compound distributions to be log-concave, and specific examples are presented illustrating all the above results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,832 |
2211.05985 | Using Persuasive Writing Strategies to Explain and Detect Health
Misinformation | Nowadays, the spread of misinformation is a prominent problem in society. Our research focuses on aiding the automatic identification of misinformation by analyzing the persuasive strategies employed in textual documents. We introduce a novel annotation scheme encompassing common persuasive writing tactics to achieve our objective. Additionally, we provide a dataset on health misinformation, thoroughly annotated by experts utilizing our proposed scheme. Our contribution includes proposing a new task of annotating pieces of text with their persuasive writing strategy types. We evaluate fine-tuning and prompt-engineering techniques with pre-trained language models of the BERT family and the generative large language models of the GPT family using persuasive strategies as an additional source of information. We evaluate the effects of employing persuasive strategies as intermediate labels in the context of misinformation detection. Our results show that those strategies enhance accuracy and improve the explainability of misinformation detection models. The persuasive strategies can serve as valuable insights and explanations, enabling other models or even humans to make more informed decisions regarding the trustworthiness of the information. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 329,738 |
1806.08279 | Don't only Feel Read: Using Scene text to understand advertisements | We propose a framework for the automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach takes inspiration from the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation and can thus aid in classification tasks. To this end, we develop a framework using off-the-shelf components and demonstrate the effectiveness of textual cues in semantic classification tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 101,131
2412.10009 | Class flipping for uplift modeling and Heterogeneous Treatment Effect
estimation on imbalanced RCT data | Uplift modeling and Heterogeneous Treatment Effect (HTE) estimation aim at predicting the causal effect of an action, such as a medical treatment or a marketing campaign on a specific individual. In this paper, we focus on data from Randomized Controlled Experiments which guarantee causal interpretation of the outcomes. Class and treatment imbalance are important problems in uplift modeling/HTE, but classical undersampling or oversampling based approaches are hard to apply in this case since they distort the predicted effect. Calibration methods have been proposed in the past, however, they do not guarantee correct predictions. In this work, we propose an approach alternative to undersampling, based on flipping the class value of selected records. We show that the proposed approach does not distort the predicted effect and does not require calibration. The method is especially useful for models based on class variable transformation (modified outcome models). We address those models separately, designing a transformation scheme which guarantees correct predictions and addresses also the problem of treatment imbalance which is especially important for those models. Experiments fully confirm our theoretical results. Additionally, we demonstrate that our method is a viable alternative also for standard classification problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,754 |
2003.04630 | Lagrangian Neural Networks | Accurate models of the world are built upon notions of its underlying symmetries. In physics, these symmetries correspond to conservation laws, such as for energy and momentum. Yet even though neural network models see increasing use in the physical sciences, they struggle to learn these symmetries. In this paper, we propose Lagrangian Neural Networks (LNNs), which can parameterize arbitrary Lagrangians using neural networks. In contrast to models that learn Hamiltonians, LNNs do not require canonical coordinates, and thus perform well in situations where canonical momenta are unknown or difficult to compute. Unlike previous approaches, our method does not restrict the functional form of learned energies and will produce energy-conserving models for a variety of tasks. We test our approach on a double pendulum and a relativistic particle, demonstrating energy conservation where a baseline approach incurs dissipation and modeling relativity without canonical coordinates where a Hamiltonian approach fails. Finally, we show how this model can be applied to graphs and continuous systems using a Lagrangian Graph Network, and demonstrate it on the 1D wave equation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,605 |
1712.06139 | TensorFlow-Serving: Flexible, High-Performance ML Serving | We describe TensorFlow-Serving, a system to serve machine learning models inside Google which is also available in the cloud and via open-source. It is extremely flexible in terms of the types of ML platforms it supports, and ways to integrate with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS^2. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 86,839 |
1604.06648 | Automatic verbal aggression detection for Russian and American
imageboards | The problem of aggression in Internet communities is rampant. Anonymous forums, usually called imageboards, are notorious for their aggressive and deviant behaviour even in comparison with other Internet communities. This study investigates ways of automatically detecting verbal expressions of aggression on the most popular American (4chan.org) and Russian (2ch.hk) imageboards. A set of 1,802,789 messages was used for this study. The machine learning algorithm word2vec was applied to detect the state of aggression. A decent result is obtained for English (88%); the results for Russian are yet to be improved. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 54,973
1606.07729 | On Lossless Feedback Delay Networks | Lossless Feedback Delay Networks (FDNs) are commonly used as a design prototype for artificial reverberation algorithms. The lossless property is dependent on the feedback matrix, which connects the output of a set of delays to their inputs, and the lengths of the delays. Both, unitary and triangular feedback matrices are known to constitute lossless FDNs, however, the most general class of lossless feedback matrices has not been identified. In this contribution, it is shown that the FDN is lossless for any set of delays, if all irreducible components of the feedback matrix are diagonally similar to a unitary matrix. The necessity of the generalized class of feedback matrices is demonstrated by examples of FDN designs proposed in literature. | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 57,771 |
2108.06009 | SAR image matching algorithm based on multi-class features | Synthetic aperture radar (SAR) can operate around the clock and in all weather conditions, and thus has high application value. We propose a new SAR image matching algorithm based on multi-class features, mainly using two different types of features, straight lines and regions, to enhance the robustness of the matching algorithm. Building on prior knowledge of the images and combining LSD (Line Segment Detector) line detection with a template matching algorithm, we analyze the attribute correlation between line and region features in SAR images and select line and region features to match the images, improving the matching accuracy between SAR images and visible-light images and reducing the probability of matching errors. The experimental results verify that this algorithm obtains high-precision matching results, achieves precise target positioning, and is robust to changes in perspective and lighting. The results are accurate and false positives are controllable. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 250,476
2309.02534 | Experience and Prediction: A Metric of Hardness for a Novel Litmus Test | In the last decade, the Winograd Schema Challenge (WSC) has become a central aspect of the research community as a novel litmus test. Consequently, the WSC has spurred research interest because it can be seen as a means to understand human behavior. In this regard, the development of new techniques has made possible the usage of Winograd schemas in various fields, such as the design of novel forms of CAPTCHAs. Work from the literature that established a baseline for human adult performance on the WSC has shown that not all schemas are the same, meaning that they could potentially be categorized according to their perceived hardness for humans. In this regard, this \textit{hardness-metric} could be used in future challenges or in the WSC CAPTCHA service to differentiate between Winograd schemas. Recent work of ours has shown that this could be achieved via the design of an automated system that is able to output the hardness-indexes of Winograd schemas, albeit with limitations regarding the number of schemas it could be applied to. This paper adds to previous research by presenting a new system that is based on Machine Learning (ML) and able to output the hardness of any Winograd schema faster and more accurately than any previously used method. Our developed system, which implements two different approaches, namely random forest and deep learning (LSTM-based), is ready to be used as an extension of any other system that aims to differentiate between Winograd schemas according to their perceived hardness for humans. At the same time, along with our developed system, we extend previous work by presenting the results of a large-scale experiment that shows how human performance varies across Winograd schemas. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 390,069
2411.14347 | DINO-X: A Unified Vision Model for Open-World Object Detection and
Understanding | In this paper, we introduce DINO-X, which is a unified object-centric vision model developed by IDEA Research with the best open-world object detection performance to date. DINO-X employs the same Transformer-based encoder-decoder architecture as Grounding DINO 1.5 to pursue an object-level representation for open-world object understanding. To make long-tailed object detection easy, DINO-X extends its input options to support text prompt, visual prompt, and customized prompt. With such flexible prompt options, we develop a universal object prompt to support prompt-free open-world detection, making it possible to detect anything in an image without requiring users to provide any prompt. To enhance the model's core grounding capability, we have constructed a large-scale dataset with over 100 million high-quality grounding samples, referred to as Grounding-100M, for advancing the model's open-vocabulary detection performance. Pre-training on such a large-scale grounding dataset leads to a foundational object-level representation, which enables DINO-X to integrate multiple perception heads to simultaneously support multiple object perception and understanding tasks, including detection, segmentation, pose estimation, object captioning, object-based QA, etc. Experimental results demonstrate the superior performance of DINO-X. Specifically, the DINO-X Pro model achieves 56.0 AP, 59.8 AP, and 52.4 AP on the COCO, LVIS-minival, and LVIS-val zero-shot object detection benchmarks, respectively. Notably, it scores 63.3 AP and 56.5 AP on the rare classes of LVIS-minival and LVIS-val benchmarks, improving the previous SOTA performance by 5.8 AP and 5.0 AP. Such a result underscores its significantly improved capacity for recognizing long-tailed objects. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,119 |
1401.3872 | Second-Order Consistencies | In this paper, we propose a comprehensive study of second-order consistencies (i.e., consistencies identifying inconsistent pairs of values) for constraint satisfaction. We build a full picture of the relationships existing between four basic second-order consistencies, namely path consistency (PC), 3-consistency (3C), dual consistency (DC) and 2-singleton arc consistency (2SAC), as well as their conservative and strong variants. Interestingly, dual consistency is an original property that can be established by using the outcome of the enforcement of generalized arc consistency (GAC), which makes it rather easy to obtain since constraint solvers typically maintain GAC during search. On binary constraint networks, DC is equivalent to PC, but its restriction to existing constraints, called conservative dual consistency (CDC), is strictly stronger than traditional conservative consistencies derived from path consistency, namely partial path consistency (PPC) and conservative path consistency (CPC). After introducing a general algorithm to enforce strong (C)DC, we present the results of an experimentation over a wide range of benchmarks that demonstrate the interest of (conservative) dual consistency. In particular, we show that enforcing (C)DC before search clearly improves the performance of MAC (the algorithm that maintains GAC during search) on several binary and non-binary structured problems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,986 |
1110.6650 | Summarization and Matching of Density-Based Clusters in Streaming
Environments | Density-based cluster mining is known to serve a broad range of applications ranging from stock trade analysis to moving object monitoring. Although methods for efficient extraction of density-based clusters have been studied in the literature, the problem of summarizing and matching of such clusters with arbitrary shapes and complex cluster structures remains unsolved. Therefore, the goal of our work is to extend the state-of-art of density-based cluster mining in streams from cluster extraction only to now also support analysis and management of the extracted clusters. Our work solves three major technical challenges. First, we propose a novel multi-resolution cluster summarization method, called Skeletal Grid Summarization (SGS), which captures the key features of density-based clusters, covering both their external shape and internal cluster structures. Second, in order to summarize the extracted clusters in real-time, we present an integrated computation strategy C-SGS, which piggybacks the generation of cluster summarizations within the online clustering process. Lastly, we design a mechanism to efficiently execute cluster matching queries, which identify similar clusters for given cluster of analyst's interest from clusters extracted earlier in the stream history. Our experimental study using real streaming data shows the clear superiority of our proposed methods in both efficiency and effectiveness for cluster summarization and cluster matching queries to other potential alternatives. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 12,823 |
1912.11894 | Analysis of Reference and Citation Copying in Evolving Bibliographic Networks | Extensive literature demonstrates how the copying of references (links) can lead to the emergence of various structural properties (e.g., power-law degree distribution and bipartite cores) in bibliographic and other similar directed networks. However, it is also well known that the copying process is incapable of mimicking the number of directed triangles in such networks; neither does it have the power to explain the obsolescence of older papers. In this paper, we propose RefOrCite, a new model that allows for copying of both the references from (i.e., out-neighbors of) as well as the citations to (i.e., in-neighbors of) an existing node. In contrast, the standard copying model (CP) only copies references. While retaining its spirit, RefOrCite differs from the Forest Fire (FF) model in ways that makes RefOrCite amenable to mean-field analysis for degree distribution, triangle count, and densification. Empirically, RefOrCite gives the best overall agreement with observed degree distribution, triangle count, diameter, h-index, and the growth of citations to newer papers. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 158,693 |
2206.12839 | Repository-Level Prompt Generation for Large Language Models of Code | With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex used in GitHub Copilot), techniques for introducing domain-specific knowledge in the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique doesn't require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. We release our code, data, and trained checkpoints at: \url{https://github.com/shrivastavadisha/repo_level_prompt_generation}. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 304,751 |
2302.06834 | Improved Regret Bounds for Linear Adversarial MDPs via Linear Optimization | Learning Markov decision processes (MDP) in an adversarial environment has been a challenging problem. The problem becomes even more challenging with function approximation, since the underlying structure of the loss function and transition kernel are especially hard to estimate in a varying environment. In fact, the state-of-the-art results for linear adversarial MDP achieve a regret of $\tilde{O}(K^{6/7})$ ($K$ denotes the number of episodes), which admits a large room for improvement. In this paper, we investigate the problem with a new view, which reduces linear MDP into linear optimization by subtly setting the feature maps of the bandit arms of linear optimization. This new technique, under an exploratory assumption, yields an improved bound of $\tilde{O}(K^{4/5})$ for linear adversarial MDP without access to a transition simulator. The new view could be of independent interest for solving other MDP problems that possess a linear structure. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,546 |
1803.04375 | A Feature-Rich Vietnamese Named-Entity Recognition Model | In this paper, we present a feature-based named-entity recognition (NER) model that achieves the state-of-the-art accuracy for Vietnamese language. We combine word, word-shape features, PoS, chunk, Brown-cluster-based features, and word-embedding-based features in the Conditional Random Fields (CRF) model. We also explore the effects of word segmentation, PoS tagging, and chunking results of many popular Vietnamese NLP toolkits on the accuracy of the proposed feature-based NER model. Up to now, our work is the first work that systematically performs an extrinsic evaluation of basic Vietnamese NLP toolkits on the downstream NER task. Experimental results show that while automatically-generated word segmentation is useful, PoS and chunking information generated by Vietnamese NLP tools does not show benefits for the proposed feature-based NER model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 92,447 |
1904.07964 | 3D Shape Synthesis for Conceptual Design and Optimization Using Variational Autoencoders | We propose a data-driven 3D shape design method that can learn a generative model from a corpus of existing designs, and use this model to produce a wide range of new designs. The approach learns an encoding of the samples in the training corpus using an unsupervised variational autoencoder-decoder architecture, without the need for an explicit parametric representation of the original designs. To facilitate the generation of smooth final surfaces, we develop a 3D shape representation based on a distance transformation of the original 3D data, rather than using the commonly utilized binary voxel representation. Once established, the generator maps the latent space representations to the high-dimensional distance transformation fields, which are then automatically surfaced to produce 3D representations amenable to physics simulations or other objective function evaluation modules. We demonstrate our approach for the computational design of gliders that are optimized to attain prescribed performance scores. Our results show that when combined with genetic optimization, the proposed approach can generate a rich set of candidate concept designs that achieve prescribed functional goals, even when the original dataset has only a few or no solutions that achieve these goals. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 127,922 |
1811.12569 | Are All Training Examples Created Equal? An Empirical Study | Modern computer vision algorithms often rely on very large training datasets. However, it is conceivable that a carefully selected subsample of the dataset is sufficient for training. In this paper, we propose a gradient-based importance measure that we use to empirically analyze relative importance of training images in four datasets of varying complexity. We find that in some cases, a small subsample is indeed sufficient for training. For other datasets, however, the relative differences in importance are negligible. These results have important implications for active learning on deep networks. Additionally, our analysis method can be used as a general tool to better understand diversity of training examples in datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 115,042 |