id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2011.03348 | Drone Positioning for Visible Light Communication with Drone-Mounted LED and Camera | The world is often stricken by catastrophic disasters. On-demand drone-mounted visible light communication (VLC) networks are suitable for monitoring disaster-stricken areas for leveraging disaster-response operations. The concept of an image sensor-based VLC has also attracted attention in the recent past for establishing stable links using unstably moving drones. However, existing works did not sufficiently consider the one-to-many image sensor-based VLC system. Thus, this paper proposes the concept of a one-to-many image sensor-based VLC between a camera and multiple drone-mounted LED lights with a drone-positioning algorithm to avoid interference among VLC links. Multiple drones are deployed on-demand in a disaster-stricken area to monitor the ground and continuously send image data to a camera with image sensor-based VLC links. The proposed idea is demonstrated with the proof-of-concept (PoC) implemented with drones that are equipped with LED panels and a 4K camera. As a result, we confirmed the feasibility of the proposed system. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 205,220 |
2501.12322 | A General Achievable Scheme for Linear Computation Broadcast Channel | This paper presents a new achievable scheme for the Linear Computation Broadcast Channel (LCBC), which is based on a generalized subspace decomposition derived from representable polymatroid space. This decomposition enables the server to serve user demands with an approach of effective multicast and interference elimination. We extend existing results by introducing a linear programming framework to optimize multicast opportunities across an arbitrary number of users. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 526,258 |
2002.04289 | To Share or Not To Share: A Comprehensive Appraisal of Weight-Sharing | Weight-sharing (WS) has recently emerged as a paradigm to accelerate the automated search for efficient neural architectures, a process dubbed Neural Architecture Search (NAS). Although very appealing, this framework is not without drawbacks and several works have started to question its capabilities on small hand-crafted benchmarks. In this paper, we take advantage of the NAS-Bench dataset to challenge the efficiency of WS on a representative search space. By comparing a SOTA WS approach to a plain random search we show that, despite decent correlations between evaluations using weight-sharing and standalone ones, WS is only rarely significantly helpful to NAS. In particular, we highlight the impact of the search space itself on the benefits of WS. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 163,568 |
2412.17344 | Reinforcement Learning with a Focus on Adjusting Policies to Reach Targets | The objective of a reinforcement learning agent is to discover better actions through exploration. However, typical exploration techniques aim to maximize rewards, often incurring high costs in both exploration and learning processes. We propose a novel deep reinforcement learning method, which prioritizes achieving an aspiration level over maximizing expected return. This method flexibly adjusts the degree of exploration based on the proportion of target achievement. Through experiments on a motion control task and a navigation task, this method achieved returns equal to or greater than other standard methods. The results of the analysis showed two things: our method flexibly adjusts the exploration scope, and it has the potential to enable the agent to adapt to non-stationary environments. These findings indicate that this method may be effective at improving exploration efficiency in practical applications of reinforcement learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 519,940 |
1705.10702 | Cautious Model Predictive Control using Gaussian Process Regression | Gaussian process (GP) regression has been widely used in supervised machine learning due to its flexibility and inherent ability to describe uncertainty in function estimation. In the context of control, it is seeing increasing use for modeling of nonlinear dynamical systems from data, as it allows the direct assessment of residual model uncertainty. We present a model predictive control (MPC) approach that integrates a nominal system with an additive nonlinear part of the dynamics modeled as a GP. Approximation techniques for propagating the state distribution are reviewed and we describe a principled way of formulating the chance constrained MPC problem, which takes into account residual uncertainties provided by the GP model to enable cautious control. Using additional approximations for efficient computation, we finally demonstrate the approach in a simulation example, as well as in a hardware implementation for autonomous racing of remote controlled race cars, highlighting improvements with regard to both performance and safety over a nominal controller. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 74,452 |
2206.08124 | Using adversarial images to improve outcomes of federated learning for non-IID data | One of the important problems in federated learning is how to deal with unbalanced data. This contribution introduces a novel technique designed to deal with label skewed non-IID data, using adversarial inputs, created by the I-FGSM method. Adversarial inputs guide the training process and allow the Weighted Federated Averaging to give more importance to clients with 'selected' local label distributions. Experimental results, gathered from image classification tasks, for MNIST and CIFAR-10 datasets, are reported and analyzed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,013 |
2407.18906 | A Scalable Quantum Non-local Neural Network for Image Classification | Non-local operations play a crucial role in computer vision enabling the capture of long-range dependencies through weighted sums of features across the input, surpassing the constraints of traditional convolution operations that focus solely on local neighborhoods. Non-local operations typically require computing pairwise relationships between all elements in a set, leading to quadratic complexity in terms of time and memory. Due to the high computational and memory demands, scaling non-local neural networks to large-scale problems can be challenging. This article introduces a hybrid quantum-classical scalable non-local neural network, referred to as Quantum Non-Local Neural Network (QNL-Net), to enhance pattern recognition. The proposed QNL-Net relies on inherent quantum parallelism to allow the simultaneous processing of a large number of input features enabling more efficient computations in quantum-enhanced feature space and involving pairwise relationships through quantum entanglement. We benchmark our proposed QNL-Net against other quantum counterparts on binary classification with the MNIST and CIFAR-10 datasets. The simulation findings showcase that our QNL-Net achieves cutting-edge accuracy levels in binary image classification among quantum classifiers while utilizing fewer qubits. | false | false | false | false | true | false | true | false | false | true | false | true | false | false | false | false | false | false | 476,559 |
1603.02501 | Mixture Proportion Estimation via Kernel Embedding of Distributions | Mixture proportion estimation (MPE) is the problem of estimating the weight of a component distribution in a mixture, given samples from the mixture and component. This problem constitutes a key part in many "weakly supervised learning" problems like learning with positive and unlabelled samples, learning with label noise, anomaly detection and crowdsourcing. While there have been several methods proposed to solve this problem, to the best of our knowledge no efficient algorithm with a proven convergence rate towards the true proportion exists for this problem. We fill this gap by constructing a provably correct algorithm for MPE, and derive convergence rates under certain assumptions on the distribution. Our method is based on embedding distributions onto an RKHS, and implementing it only requires solving a simple convex quadratic programming problem a few times. We run our algorithm on several standard classification datasets, and demonstrate that it performs comparably to or better than other algorithms on most datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 53,018 |
1505.06646 | A Survey on Retrieval of Mathematical Knowledge | We present a short survey of the literature on indexing and retrieval of mathematical knowledge, with pointers to 72 papers and tentative taxonomies of both retrieval problems and recurring techniques. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 43,458 |
2409.17331 | ChatCam: Empowering Camera Control through Conversational AI | Cinematographers adeptly capture the essence of the world, crafting compelling visual narratives through intricate camera movements. Witnessing the strides made by large language models in perceiving and interacting with the 3D world, this study explores their capability to control cameras with human language guidance. We introduce ChatCam, a system that navigates camera movements through conversations with users, mimicking a professional cinematographer's workflow. To achieve this, we propose CineGPT, a GPT-based autoregressive model for text-conditioned camera trajectory generation. We also develop an Anchor Determinator to ensure precise camera trajectory placement. ChatCam understands user requests and employs our proposed tools to generate trajectories, which can be used to render high-quality video footage on radiance field representations. Our experiments, including comparisons to state-of-the-art approaches and user studies, demonstrate our approach's ability to interpret and execute complex instructions for camera operation, showing promising applications in real-world production settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 491,743 |
2409.09828 | Latent Diffusion Models for Controllable RNA Sequence Generation | This work presents RNAdiffusion, a latent diffusion model for generating and optimizing discrete RNA sequences of variable lengths. RNA is a key intermediary between DNA and protein, exhibiting high sequence diversity and complex three-dimensional structures to support a wide range of functions. We utilize pretrained BERT-type models to encode raw RNA sequences into token-level, biologically meaningful representations. A Query Transformer is employed to compress such representations into a set of fixed-length latent vectors, with an autoregressive decoder trained to reconstruct RNA sequences from these latent variables. We then develop a continuous diffusion model within this latent space. To enable optimization, we integrate the gradients of reward models--surrogates for RNA functional properties--into the backward diffusion process, thereby generating RNAs with high reward scores. Empirical results confirm that RNAdiffusion generates non-coding RNAs that align with natural distributions across various biological metrics. Further, we fine-tune the diffusion model on mRNA 5' untranslated regions (5'-UTRs) and optimize sequences for high translation efficiencies. Our guided diffusion model effectively generates diverse 5'-UTRs with high Mean Ribosome Loading (MRL) and Translation Efficiency (TE), outperforming baselines in balancing rewards and structural stability trade-off. Our findings hold potential for advancing RNA sequence-function research and therapeutic RNA design. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 488,493 |
1311.4782 | Universal Generator for Complementary Pairs of Sequences Based on Boolean Functions | We present a general algorithm for generating arbitrary standard complementary pairs of sequences (including binary, polyphase, M-PSK and QAM) of length 2^N using Boolean functions. The algorithm follows our earlier paraunitary algorithm, but does not require matrix multiplications. The algorithm can be easily and efficiently implemented in hardware. As a special case, it reduces to the non-recursive (direct) algorithm for generating binary sequences given by Golay, to the algorithm for generating M-PSK sequences given by Davis and Jedwab (and later improved by Paterson) and to all published algorithms for generating QAM sequences. However the algorithm does not solve the problem of sequence uniqueness (except for the trivial M-PSK case), which must be treated separately for each QAM constellation. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 28,522 |
2303.05246 | Efficient Certified Training and Robustness Verification of Neural ODEs | Neural Ordinary Differential Equations (NODEs) are a novel neural architecture, built around initial value problems with learned dynamics which are solved during inference. Thought to be inherently more robust against adversarial perturbations, they were recently shown to be vulnerable to strong adversarial attacks, highlighting the need for formal guarantees. However, despite significant progress in robustness verification for standard feed-forward architectures, the verification of high dimensional NODEs remains an open problem. In this work, we address this challenge and propose GAINS, an analysis framework for NODEs combining three key ideas: (i) a novel class of ODE solvers, based on variable but discrete time steps, (ii) an efficient graph representation of solver trajectories, and (iii) a novel abstraction algorithm operating on this graph representation. Together, these advances enable the efficient analysis and certified training of high-dimensional NODEs, by reducing the runtime from an intractable $O(\exp(d)+\exp(T))$ to ${O}(d+T^2 \log^2T)$ in the dimensionality $d$ and integration time $T$. In an extensive evaluation on computer vision (MNIST and FMNIST) and time-series forecasting (PHYSIO-NET) problems, we demonstrate the effectiveness of both our certified training and verification methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 350,398 |
2008.04851 | TextRay: Contour-based Geometric Modeling for Arbitrary-shaped Scene Text Detection | Arbitrary-shaped text detection is a challenging task due to the complex geometric layouts of texts such as large aspect ratios, various scales, random rotations and curve shapes. Most state-of-the-art methods solve this problem from bottom-up perspectives, seeking to model a text instance of complex geometric layouts with simple local units (e.g., local boxes or pixels) and generate detections with heuristic post-processings. In this work, we propose an arbitrary-shaped text detection method, namely TextRay, which conducts top-down contour-based geometric modeling and geometric parameter learning within a single-shot anchor-free framework. The geometric modeling is carried out under polar system with a bidirectional mapping scheme between shape space and parameter space, encoding complex geometric layouts into unified representations. For effective learning of the representations, we design a central-weighted training strategy and a content loss which builds propagation paths between geometric encodings and visual content. TextRay outputs simple polygon detections at one pass with only one NMS post-processing. Experiments on several benchmark datasets demonstrate the effectiveness of the proposed approach. The code is available at https://github.com/LianaWang/TextRay. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 191,336 |
2402.18805 | VEC-SBM: Optimal Community Detection with Vectorial Edges Covariates | Social networks are often associated with rich side information, such as texts and images. While numerous methods have been developed to identify communities from pairwise interactions, they usually ignore such side information. In this work, we study an extension of the Stochastic Block Model (SBM), a widely used statistical framework for community detection, that integrates vectorial edges covariates: the Vectorial Edges Covariates Stochastic Block Model (VEC-SBM). We propose a novel algorithm based on iterative refinement techniques and show that it optimally recovers the latent communities under the VEC-SBM. Furthermore, we rigorously assess the added value of leveraging edge's side information in the community detection process. We complement our theoretical results with numerical experiments on synthetic and semi-synthetic data. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 433,567 |
2012.06733 | Human-in-the-Loop Imitation Learning using Remote Teleoperation | Imitation Learning is a promising paradigm for learning complex robot manipulation skills by reproducing behavior from human demonstrations. However, manipulation tasks often contain bottleneck regions that require a sequence of precise actions to make meaningful progress, such as a robot inserting a pod into a coffee machine to make coffee. Trained policies can fail in these regions because small deviations in actions can lead the policy into states not covered by the demonstrations. Intervention-based policy learning is an alternative that can address this issue -- it allows human operators to monitor trained policies and take over control when they encounter failures. In this paper, we build a data collection system tailored to 6-DoF manipulation settings, that enables remote human operators to monitor and intervene on trained policies. We develop a simple and effective algorithm to train the policy iteratively on new data collected by the system that encourages the policy to learn how to traverse bottlenecks through the interventions. We demonstrate that agents trained on data collected by our intervention-based system and algorithm outperform agents trained on an equivalent number of samples collected by non-interventional demonstrators, and further show that our method outperforms multiple state-of-the-art baselines for learning from the human interventions on a challenging robot threading task and a coffee making task. Additional results and videos at https://sites.google.com/stanford.edu/iwr . | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 211,201 |
2303.11169 | Self-supervised Geometric Features Discovery via Interpretable Attention for Vehicle Re-Identification and Beyond (Complete Version) | To learn distinguishable patterns, most recent works in vehicle re-identification (ReID) struggled to redevelop official benchmarks to provide various supervisions, which requires prohibitive human labor. In this paper, we seek to achieve a similar goal but do not involve more human effort. To this end, we introduce a novel framework, which successfully encodes both geometric local features and global representations to distinguish vehicle instances, optimized only by the supervision from official ID labels. Specifically, given our insight that objects in ReID share similar geometric characteristics, we propose to borrow self-supervised representation learning to facilitate geometric features discovery. To condense these features, we introduce an interpretable attention module, with the core of local maxima aggregation instead of fully automatic learning, whose mechanism is completely understandable and whose response map is physically reasonable. To the best of our knowledge, we are the first to perform self-supervised learning to discover geometric features. We conduct comprehensive experiments on the three most popular datasets for vehicle ReID, i.e., VeRi-776, CityFlow-ReID, and VehicleID. We report our state-of-the-art (SOTA) performances and promising visualization results. We also show the excellent scalability of our approach on other ReID related tasks, i.e., person ReID and multi-target multi-camera (MTMC) vehicle tracking. The code is available at https://github.com/ming1993li/Self-supervised-Geometric. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,733 |
2208.12646 | Automatic detection of faults in race walking from a smartphone camera: a comparison of an Olympic medalist and university athletes | Automatic fault detection is a major challenge in many sports. In race walking, referees visually judge faults according to the rules. Hence, ensuring objectivity and fairness while judging is important. To address this issue, some studies have attempted to use sensors and machine learning to automatically detect faults. However, there are problems associated with sensor attachments and equipment such as a high-speed camera, which conflict with the visual judgement of referees, and the interpretability of the fault detection models. In this study, we proposed a fault detection system for non-contact measurement. We used pose estimation and machine learning models trained based on the judgements of multiple qualified referees to realize fair fault judgement. We verified them using smartphone videos of normal race walking and walking with intentional faults in several athletes including the medalist of the Tokyo Olympics. The validation results show that the proposed system detected faults with an average accuracy of over 90%. We also revealed that the machine learning model detects faults according to the rules of race walking. In addition, the intentional faulty walking movement of the medalist was different from that of university walkers. This finding informs realization of a more general fault detection model. The code and data are available at https://github.com/SZucchini/racewalk-aijudge. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 314,811 |
2401.02610 | DHGCN: Dynamic Hop Graph Convolution Network for Self-Supervised Point Cloud Learning | Recent works attempt to extend Graph Convolution Networks (GCNs) to point clouds for classification and segmentation tasks. These works tend to sample and group points to create smaller point sets locally and mainly focus on extracting local features through GCNs, while ignoring the relationship between point sets. In this paper, we propose the Dynamic Hop Graph Convolution Network (DHGCN) for explicitly learning the contextual relationships between the voxelized point parts, which are treated as graph nodes. Motivated by the intuition that the contextual information between point parts lies in the pairwise adjacent relationship, which can be depicted by the hop distance of the graph quantitatively, we devise a novel self-supervised part-level hop distance reconstruction task and design a novel loss function accordingly to facilitate training. In addition, we propose the Hop Graph Attention (HGA), which takes the learned hop distance as input for producing attention weights to allow edge features to contribute distinctively in aggregation. Eventually, the proposed DHGCN is a plug-and-play module that is compatible with point-based backbone networks. Comprehensive experiments on different backbones and tasks demonstrate that our self-supervised method achieves state-of-the-art performance. Our source code is available at: https://github.com/Jinec98/DHGCN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 419,778 |
2412.08061 | Go-Oracle: Automated Test Oracle for Go Concurrency Bugs | The Go programming language has gained significant traction for developing software, especially in various infrastructure systems. Nonetheless, concurrency bugs have become a prevalent issue within Go, presenting a unique challenge due to the language's dual concurrency mechanisms-communicating sequential processes and shared memory. Detecting concurrency bugs and accurately classifying program executions as pass or fail presents an immense challenge, even for domain experts. We conducted a survey with expert developers at Bytedance that confirmed this challenge. Our work seeks to address the test oracle problem for Go programs, to automatically classify test executions as pass or fail. This problem has not been investigated in the literature for Go programs owing to its distinctive programming model. Our approach involves collecting both passing and failing execution traces from various subject Go programs. We capture a comprehensive array of execution events using the native Go execution tracer. Subsequently, we preprocess and encode these traces before training a transformer-based neural network to effectively classify the traces as either passing or failing. The evaluation of our approach encompasses 8 subject programs sourced from the GoBench repository. These subject programs are routinely used as benchmarks in an industry setting. Encouragingly, our test oracle, Go-Oracle, demonstrates high accuracies even when operating with a limited dataset, showcasing the efficacy and potential of our methodology. Developers at Bytedance strongly agreed that they would use the Go-Oracle tool over the current practice of manual inspections to classify tests for Go programs as pass or fail. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 515,920 |
2408.09230 | Siamese Multiple Attention Temporal Convolution Networks for Human Mobility Signature Identification | The Human Mobility Signature Identification (HuMID) problem stands as a fundamental task within the realm of driving style representation, dedicated to discerning latent driving behaviors and preferences from diverse driver trajectories for driver identification. Its solutions hold significant implications across various domains (e.g., ride-hailing, insurance), wherein their application serves to safeguard users and mitigate potential fraudulent activities. Present HuMID solutions often exhibit limitations in adaptability when confronted with lengthy trajectories, consequently incurring substantial computational overhead. Furthermore, their inability to effectively extract crucial local information further impedes their performance. To address this problem, we propose a Siamese Multiple Attention Temporal Convolutional Network (Siamese MA-TCN) to capitalize on the strengths of both TCN architecture and multi-head self-attention, enabling the proficient extraction of both local and long-term dependencies. Additionally, we devise a novel attention mechanism tailored for the efficient aggregation of multi-scale representations derived from our model. Experimental evaluations conducted on two real-world taxi trajectory datasets reveal that our proposed model effectively extracts both local key information and long-term dependencies. These findings highlight the model's outstanding generalization capabilities, demonstrating its robustness and adaptability across datasets of varying sizes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 481,341 |
2108.06313 | Accelerating Approximate Aggregation Queries with Expensive Predicates | Researchers and industry analysts are increasingly interested in computing aggregation queries over large, unstructured datasets with selective predicates that are computed using expensive deep neural networks (DNNs). As these DNNs are expensive and because many applications can tolerate approximate answers, analysts are interested in accelerating these queries via approximations. Unfortunately, standard approximate query processing techniques to accelerate such queries are not applicable because they assume the result of the predicates are available ahead of time. Furthermore, recent work using cheap approximations (i.e., proxies) do not support aggregation queries with predicates. To accelerate aggregation queries with expensive predicates, we develop and analyze a query processing algorithm that leverages proxies (ABae). ABae must account for the key challenge that it may sample records that do not satisfy the predicate. To address this challenge, we first use the proxy to group records into strata so that records satisfying the predicate are ideally grouped into few strata. Given these strata, ABae uses pilot sampling and plugin estimates to sample according to the optimal allocation. We show that ABae converges at an optimal rate in a novel analysis of stratified sampling with draws that may not satisfy the predicate. We further show that ABae outperforms baselines on six real-world datasets, reducing labeling costs by up to 2.3x. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 250,567 |
2108.03177 | Shift-invariant waveform learning on epileptic ECoG | Seizure detection algorithms must discriminate abnormal neuronal activity associated with a seizure from normal neural activity in a variety of conditions. Our approach is to seek spatiotemporal waveforms with distinct morphology in electrocorticographic (ECoG) recordings of epileptic patients that are indicative of a subsequent seizure (preictal) versus non-seizure segments (interictal). To find these waveforms we apply a shift-invariant k-means algorithm to segments of spatially filtered signals to learn codebooks of prototypical waveforms. The frequency of the cluster labels from the codebooks is then used to train a binary classifier that predicts the class (preictal or interictal) of a test ECoG segment. We use the Matthews correlation coefficient to evaluate the performance of the classifier and the quality of the codebooks. We found that our method finds recurrent non-sinusoidal waveforms that could be used to build interpretable features for seizure prediction and that are also physiologically meaningful. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 249,582 |
2007.11201 | IITK at the FinSim Task: Hypernym Detection in Financial Domain via Context-Free and Contextualized Word Embeddings | In this paper, we present our approaches for the FinSim 2020 shared task on "Learning Semantic Representations for the Financial Domain". The goal of this task is to classify financial terms into the most relevant hypernym (or top-level) concept in an external ontology. We leverage both context-dependent and context-independent word embeddings in our analysis. Our systems deploy Word2vec embeddings trained from scratch on the corpus (Financial Prospectus in English) along with pre-trained BERT embeddings. We divide the test dataset into two subsets based on a domain rule. For one subset, we use unsupervised distance measures to classify the term. For the second subset, we use simple supervised classifiers like Naive Bayes, on top of the embeddings, to arrive at a final prediction. Finally, we combine both the results. Our system ranks 1st based on both the metrics, i.e., mean rank and accuracy. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 188,494 |
1508.02774 | Benchmarking of LSTM Networks | LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperforms least square training, (4) peephole units are not useful, (5) the standard non-linearities (tanh and sigmoid) perform best, (6) bidirectional training combined with CTC performs better than other methods. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 45,937 |
1408.1135 | It is hard to see a needle in a haystack: Modeling contrast masking effect in a numerical observer | Within the framework of a virtual clinical trial for breast imaging, we aim to develop numerical observers that follow the same detection performance trends as those of a typical human observer. In our prior work, we showed that by including spatiotemporal contrast sensitivity function (stCSF) of human visual system (HVS) in a multi-slice channelized Hotelling observer (msCHO), we can correctly predict trends of a typical human observer performance with the viewing parameters of browsing speed, viewing distance and contrast. In this work we further improve our numerical observer by modeling contrast masking. After stCSF, contrast masking is the second most prominent property of HVS and it refers to the fact that the presence of one signal affects the visibility threshold for another signal. Our results indicate that the improved numerical observer better predicts changes in detection performance with background complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 35,134 |
1811.09245 | Train Sparsely, Generate Densely: Memory-efficient Unsupervised Training of High-resolution Temporal GAN | Training of Generative Adversarial Network (GAN) on a video dataset is a challenge because of the sheer size of the dataset and the complexity of each observation. In general, the computational cost of training GAN scales exponentially with the resolution. In this study, we present a novel memory efficient method of unsupervised learning of high-resolution video dataset whose computational cost scales only linearly with the resolution. We achieve this by designing the generator model as a stack of small sub-generators and training the model in a specific way. We train each sub-generator with its own specific discriminator. At the time of the training, we introduce between each pair of consecutive sub-generators an auxiliary subsampling layer that reduces the frame-rate by a certain ratio. This procedure can allow each sub-generator to learn the distribution of the video at different levels of resolution. We also need only a few GPUs to train a highly complex generator that far outperforms the predecessor in terms of inception scores. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,213 |
1112.1496 | Re-initialization Free Level Set Evolution via Reaction Diffusion | This paper presents a novel reaction-diffusion (RD) method for implicit active contours, which is completely free of the costly re-initialization procedure in level set evolution (LSE). A diffusion term is introduced into LSE, resulting in a RD-LSE equation, to which a piecewise constant solution can be derived. In order to have a stable numerical solution of the RD based LSE, we propose a two-step splitting method (TSSM) to iteratively solve the RD-LSE equation: first iterating the LSE equation, and then solving the diffusion equation. The second step regularizes the level set function obtained in the first step to ensure stability, and thus the complex and costly re-initialization procedure is completely eliminated from LSE. By successfully applying diffusion to LSE, the RD-LSE model is stable by means of the simple finite difference method, which is very easy to implement. The proposed RD method can be generalized to solve the LSE for both variational level set method and PDE-based level set method. The RD-LSE method shows very good performance on boundary anti-leakage, and it can be readily extended to high dimensional level set method. The extensive and promising experimental results on synthetic and real images validate the effectiveness of the proposed RD-LSE approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 13,347 |
2405.06965 | A De-singularity Subgradient Approach for the Extended Weber Location Problem | The extended Weber location problem is a classical optimization problem that has inspired some new works in several machine learning scenarios recently. However, most existing algorithms may get stuck due to the singularity at the data points when the power of the cost function $1\leqslant q<2$, such as the widely-used iterative Weiszfeld approach. In this paper, we establish a de-singularity subgradient approach for this problem. We also provide a complete proof of convergence which has fixed some incomplete statements of the proofs for some previous Weiszfeld algorithms. Moreover, we deduce a new theoretical result of superlinear convergence for the iteration sequence in a special case where the minimum point is a singular point. We conduct extensive experiments in a real-world machine learning scenario to show that the proposed approach solves the singularity problem, produces the same results as in the non-singularity cases, and shows a reasonable rate of linear convergence. The results also indicate that the $q$-th power case ($1<q<2$) is more advantageous than the $1$-st power case and the $2$-nd power case in some situations. Hence the de-singularity subgradient approach is beneficial to advancing both theory and practice for the extended Weber location problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 453,521 |
2301.11526 | Direct Parameterization of Lipschitz-Bounded Deep Networks | This paper introduces a new parameterization of deep neural networks (both fully-connected and convolutional) with guaranteed $\ell^2$ Lipschitz bounds, i.e. limited sensitivity to input perturbations. The Lipschitz guarantees are equivalent to the tightest-known bounds based on certification via a semidefinite program (SDP). We provide a ``direct'' parameterization, i.e., a smooth mapping from $\mathbb R^N$ onto the set of weights satisfying the SDP-based bound. Moreover, our parameterization is complete, i.e. a neural network satisfies the SDP bound if and only if it can be represented via our parameterization. This enables training using standard gradient methods, without any inner approximation or computationally intensive tasks (e.g. projections or barrier terms) for the SDP constraint. The new parameterization can equivalently be thought of as either a new layer type (the \textit{sandwich layer}), or a novel parameterization of standard feedforward networks with parameter sharing between neighbouring layers. A comprehensive set of experiments on image classification shows that sandwich layers outperform previous approaches on both empirical and certified robust accuracy. Code is available at \url{https://github.com/acfr/LBDN}. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 342,180 |
2310.12620 | Predict the Future from the Past? On the Temporal Data Distribution Shift in Financial Sentiment Classifications | Temporal data distribution shift is prevalent in the financial text. How can a financial sentiment analysis system be trained in a volatile market environment that can accurately infer sentiment and be robust to temporal data distribution shifts? In this paper, we conduct an empirical study on the financial sentiment analysis system under temporal data distribution shifts using a real-world financial social media dataset that spans three years. We find that the fine-tuned models suffer from general performance degradation in the presence of temporal distribution shifts. Furthermore, motivated by the unique temporal nature of the financial text, we propose a novel method that combines out-of-distribution detection with time series modeling for temporal financial sentiment analysis. Experimental results show that the proposed method enhances the model's capability to adapt to evolving temporal shifts in a volatile financial market. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 401,092 |
1412.4444 | Asymptotics and Non-asymptotics for Universal Fixed-to-Variable Source Coding | Universal fixed-to-variable lossless source coding for memoryless sources is studied in the finite blocklength and higher-order asymptotics regimes. Optimal third-order coding rates are derived for general fixed-to-variable codes and for prefix codes. It is shown that the non-prefix Type Size code, in which codeword lengths are chosen in ascending order of type class size, achieves the optimal third-order rate and outperforms classical Two-Stage codes. Converse results are proved making use of a result on the distribution of the empirical entropy and Laplace's approximation. Finally, the fixed-to-variable coding problem without a prefix constraint is shown to be essentially the same as the universal guessing problem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,396 |
2305.10080 | Automatic Traffic Scenario Conversion from OpenSCENARIO to CommonRoad | Scenarios are a crucial element for developing, testing, and verifying autonomous driving systems. However, open-source scenarios are often formulated using different terminologies. This limits their usage across different applications as many scenario representation formats are not directly compatible with each other. To address this problem, we present the first open-source converter from the OpenSCENARIO format to the CommonRoad format, which are two of the most popular scenario formats used in autonomous driving. Our converter employs a simulation tool to execute the dynamic elements defined by OpenSCENARIO. The converter is available at commonroad.in.tum.de and we demonstrate its usefulness by converting publicly available scenarios in the OpenSCENARIO format and evaluating them using CommonRoad tools. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 364,902 |
2006.09545 | Go with the Flow: Adaptive Control for Neural ODEs | Despite their elegant formulation and lightweight memory cost, neural ordinary differential equations (NODEs) suffer from known representational limitations. In particular, the single flow learned by NODEs cannot express all homeomorphisms from a given data space to itself, and their static weight parameterization restricts the type of functions they can learn compared to discrete architectures with layer-dependent weights. Here, we describe a new module called neurally controlled ODE (N-CODE) designed to improve the expressivity of NODEs. The parameters of N-CODE modules are dynamic variables governed by a trainable map from initial or current activation state, resulting in forms of open-loop and closed-loop control, respectively. A single module is sufficient for learning a distribution on non-autonomous flows that adaptively drive neural representations. We provide theoretical and empirical evidence that N-CODE circumvents limitations of previous NODEs models and show how increased model expressivity manifests in several supervised and unsupervised learning problems. These favorable empirical results indicate the potential of using data- and activity-dependent plasticity in neural networks across numerous domains. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 182,579 |
2207.04796 | TArC: Tunisian Arabish Corpus First complete release | In this paper we present the final result of a project on Tunisian Arabic encoded in Arabizi, the Latin-based writing system for digital conversations. The project led to the creation of two integrated and independent resources: a corpus and an NLP tool created to annotate the former with various levels of linguistic information: word classification, transliteration, tokenization, POS-tagging, lemmatization. We discuss our choices in terms of computational and linguistic methodology and the strategies adopted to improve our results. We report on the experiments performed in order to outline our research path. Finally, we explain why we believe in the potential of these resources for both computational and linguistic research. Keywords: Tunisian Arabizi, Annotated Corpus, Neural Network Architecture | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 307,315 |
2007.01298 | Image Classification by Reinforcement Learning with Two-State Q-Learning | In this paper, a simple and efficient Hybrid Classifier is presented which is based on deep learning and reinforcement learning. Here, Q-Learning has been used with two states and 'two or three' actions. Other techniques found in the literature use feature maps extracted from Convolutional Neural Networks and use these in the Q-states along with past history. This leads to technical difficulties in these approaches because the number of states is high due to large dimensions of the feature map. Because the proposed technique uses only two Q-states it is straightforward and consequently has a much smaller number of optimization parameters, and thus also has a simple reward function. Also, the proposed technique uses novel actions for processing images as compared to other techniques found in the literature. The performance of the proposed technique is compared with other recent algorithms like ResNet50, InceptionV3, etc. on popular databases including ImageNet, Cats and Dogs Dataset, and Caltech-101 Dataset. The proposed approach outperforms the other techniques on all the datasets used. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 185,388 |
2008.13336 | Shape Defense Against Adversarial Attacks | Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture. This is perhaps the main reason why CNNs are vulnerable to adversarial examples. Here, we explore how shape bias can be incorporated into CNNs to improve their robustness. Two algorithms are proposed, based on the observation that edges are invariant to moderate imperceptible perturbations. In the first one, a classifier is adversarially trained on images with the edge map as an additional channel. At inference time, the edge map is recomputed and concatenated to the image. In the second algorithm, a conditional GAN is trained to translate the edge maps, from clean and/or perturbed images, into clean images. Inference is done over the generated image corresponding to the input's edge map. Extensive experiments over 10 datasets demonstrate the effectiveness of the proposed algorithms against FGSM and $\ell_\infty$ PGD-40 attacks. Further, we show that a) edge information can also benefit other adversarial training methods, and b) CNNs trained on edge-augmented inputs are more robust against natural image corruptions such as motion blur, impulse noise and JPEG compression, than CNNs trained solely on RGB images. From a broader perspective, our study suggests that CNNs do not adequately account for image structures that are crucial for robustness. Code is available at:~\url{https://github.com/aliborji/Shapedefense.git}. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 193,820 |
1012.0197 | Low-Rank Matrix Approximation with Weights or Missing Data is NP-hard | Weighted low-rank approximation (WLRA), a dimensionality reduction technique for data analysis, has been successfully used in several applications, such as in collaborative filtering to design recommender systems or in computer vision to recover structure from motion. In this paper, we study the computational complexity of WLRA and prove that it is NP-hard to find an approximate solution, even when a rank-one approximation is sought. Our proofs are based on a reduction from the maximum-edge biclique problem, and apply to strictly positive weights as well as binary weights (the latter corresponding to low-rank matrix approximation with missing data). | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 8,383 |
2010.01112 | FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization | We study the offline meta-reinforcement learning (OMRL) problem, a paradigm which enables reinforcement learning (RL) algorithms to quickly adapt to unseen tasks without any interactions with the environments, making RL truly practical in many real-world applications. This problem is still not fully understood, for which two major challenges need to be addressed. First, offline RL usually suffers from bootstrapping errors of out-of-distribution state-actions which leads to divergence of value functions. Second, meta-RL requires efficient and robust task inference learned jointly with control policy. In this work, we enforce behavior regularization on learned policy as a general approach to offline RL, combined with a deterministic context encoder for efficient task inference. We propose a novel negative-power distance metric on bounded context embedding space, whose gradient propagation is detached from the Bellman backup. We provide analysis and insight showing that some simple design choices can yield substantial improvements over recent approaches involving meta-RL and distance metric learning. To the best of our knowledge, our method is the first model-free and end-to-end OMRL algorithm, which is computationally efficient and demonstrated to outperform prior algorithms on several meta-RL benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 198,524 |
2406.08673 | HelpSteer2: Open-source dataset for training top-performing reward models | High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for reward modeling. Methods that distil preference data from proprietary LLMs such as GPT-4 have restrictions on commercial usage imposed by model providers. To improve upon both generated responses and attribute labeling quality, we release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0). Using a powerful internal base model trained on HelpSteer2, we are able to achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming currently listed open and proprietary models, as of June 12th, 2024. Notably, HelpSteer2 consists of only ten thousand response pairs, an order of magnitude fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly efficient for training reward models. Our extensive experiments demonstrate that reward models trained with HelpSteer2 are effective in aligning LLMs. In particular, we propose SteerLM 2.0, a model alignment approach that can effectively make use of the rich multi-attribute score predicted by our reward models. HelpSteer2 is available at https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at https://github.com/NVIDIA/NeMo-Aligner | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 463,578 |
1811.03679 | Practical Bayesian Learning of Neural Networks via Adaptive Optimisation Methods | We introduce a novel framework for the estimation of the posterior distribution over the weights of a neural network, based on a new probabilistic interpretation of adaptive optimisation algorithms such as AdaGrad and Adam. We demonstrate the effectiveness of our Bayesian Adam method, Badam, by experimentally showing that the learnt uncertainties correctly relate to the weights' predictive capabilities by weight pruning. We also demonstrate the quality of the derived uncertainty measures by comparing the performance of Badam to standard methods in a Thompson sampling setting for multi-armed bandits, where good uncertainty measures are required for an agent to balance exploration and exploitation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 112,896 |
2211.02254 | How Does Adaptive Optimization Impact Local Neural Network Geometry? | Adaptive optimization methods are well known to achieve superior convergence relative to vanilla gradient methods. The traditional viewpoint in optimization, particularly in convex optimization, explains this improved performance by arguing that, unlike vanilla gradient schemes, adaptive algorithms mimic the behavior of a second-order method by adapting to the global geometry of the loss function. We argue that in the context of neural network optimization, this traditional viewpoint is insufficient. Instead, we advocate for a local trajectory analysis. For iterate trajectories produced by running a generic optimization algorithm OPT, we introduce $R^{\text{OPT}}_{\text{med}}$, a statistic that is analogous to the condition number of the loss Hessian evaluated at the iterates. Through extensive experiments, we show that adaptive methods such as Adam bias the trajectories towards regions where $R^{\text{Adam}}_{\text{med}}$ is small, where one might expect faster convergence. By contrast, vanilla gradient methods like SGD bias the trajectories towards regions where $R^{\text{SGD}}_{\text{med}}$ is comparatively large. We complement these empirical observations with a theoretical result that provably demonstrates this phenomenon in the simplified setting of a two-layer linear network. We view our findings as evidence for the need of a new explanation of the success of adaptive methods, one that is different than the conventional wisdom. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 328,525 |
2307.14246 | A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI) | Within the field of Requirements Engineering (RE), the increasing significance of Explainable Artificial Intelligence (XAI) in aligning AI-supported systems with user needs, societal expectations, and regulatory standards has garnered recognition. In general, explainability has emerged as an important non-functional requirement that impacts system quality. However, the supposed trade-off between explainability and performance challenges the presumed positive influence of explainability. If meeting the requirement of explainability entails a reduction in system performance, then careful consideration must be given to which of these quality aspects takes precedence and how to compromise between them. In this paper, we critically examine the alleged trade-off. We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk. By providing a foundation for future research and best practices, this work aims to advance the field of RE for AI. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 381,858 |
2204.11131 | Data Debugging with Shapley Importance over End-to-End Machine Learning Pipelines | Developing modern machine learning (ML) applications is data-centric, of which one fundamental challenge is to understand the influence of data quality to ML training -- "Which training examples are 'guilty' in making the trained ML model predictions inaccurate or unfair?" Modeling data influence for ML training has attracted intensive interest over the last decade, and one popular framework is to compute the Shapley value of each training example with respect to utilities such as validation accuracy and fairness of the trained ML model. Unfortunately, despite recent intensive interest and research, existing methods only consider a single ML model "in isolation" and do not consider an end-to-end ML pipeline that consists of data transformations, feature extractors, and ML training. We present DataScope (ease.ml/datascope), the first system that efficiently computes Shapley values of training examples over an end-to-end ML pipeline, and illustrate its applications in data debugging for ML training. To this end, we first develop a novel algorithmic framework that computes Shapley value over a specific family of ML pipelines that we call canonical pipelines: a positive relational algebra query followed by a K-nearest-neighbor (KNN) classifier. We show that, for many subfamilies of canonical pipelines, computing Shapley value is in PTIME, contrasting the exponential complexity of computing Shapley value in general. We then put this to practice -- given an sklearn pipeline, we approximate it with a canonical pipeline to use as a proxy. We conduct extensive experiments illustrating different use cases and utilities. Our results show that DataScope is up to four orders of magnitude faster over state-of-the-art Monte Carlo-based methods, while being comparably, and often even more, effective in data debugging. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 293,039 |
1911.05369 | Fair Adversarial Gradient Tree Boosting | Fair classification has become an important topic in machine learning research. While most bias mitigation strategies focus on neural networks, we noticed a lack of work on fair classifiers based on decision trees even though they have proven very efficient. In an up-to-date comparison of state-of-the-art classification algorithms in tabular data, tree boosting outperforms deep learning. For this reason, we have developed a novel approach of adversarial gradient tree boosting. The objective of the algorithm is to predict the output $Y$ with gradient tree boosting while minimizing the ability of an adversarial neural network to predict the sensitive attribute $S$. The approach incorporates at each iteration the gradient of the neural network directly in the gradient tree boosting. We empirically assess our approach on 4 popular data sets and compare against state-of-the-art algorithms. The results show that our algorithm achieves a higher accuracy while obtaining the same level of fairness, as measured using a set of different common fairness definitions. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 153,240 |
2105.11781 | A unified framework based on graph consensus term for multi-view learning | In recent years, multi-view learning technologies for various applications have attracted a surge of interest. Due to more compatible and complementary information from multiple views, existing multi-view methods could achieve more promising performance than conventional single-view methods in most situations. However, there is still insufficient research on a unified framework in existing multi-view works. Meanwhile, how to efficiently integrate multi-view information is still full of challenges. In this paper, we propose a novel multi-view learning framework, which aims to leverage most existing graph embedding works into a unified formula via introducing the graph consensus term. In particular, our method explores the graph structure in each view independently to preserve the diversity property of graph embedding methods. Meanwhile, we choose heterogeneous graphs to construct the graph consensus term to explore the correlations among multiple views jointly. To this end, the diversity and complementary information among different views could be simultaneously considered. Furthermore, the proposed framework is utilized to implement the multi-view extension of Locality Linear Embedding, named Multi-view Locality Linear Embedding (MvLLE), which could be efficiently solved by applying the alternating optimization strategy. Empirical validations conducted on six benchmark datasets show the effectiveness of our proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,820 |
2301.07294 | Enhancing Self-Training Methods | Semi-supervised learning approaches train on small sets of labeled data along with large sets of unlabeled data. Self-training is a semi-supervised teacher-student approach that often suffers from the problem of "confirmation bias" that occurs when the student model repeatedly overfits to incorrect pseudo-labels given by the teacher model for the unlabeled data. This bias impedes improvements in pseudo-label accuracy across self-training iterations, leading to unwanted saturation in model performance after just a few iterations. In this work, we describe multiple enhancements to improve the self-training pipeline to mitigate the effect of confirmation bias. We evaluate our enhancements over multiple datasets showing performance gains over existing self-training design choices. Finally, we also study the extendability of our enhanced approach to Open Set unlabeled data (containing classes not seen in labeled data). | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,877 |
1802.06246 | Backlash Identification in Two-Mass Systems by Delayed Relay Feedback | Backlash, also known as mechanical play, is a piecewise differentiable nonlinearity which exists in several actuated systems, comprising, e.g., rack-and-pinion drives, shaft couplings, toothed gears, and other machine elements. Generally, the backlash is nested between the moving parts of a complex dynamic system, which handicaps its proper detection and identification. A classical example is the two-mass system which can approximate numerous mechanisms connected by a shaft (or link) with relatively high stiffness and backlash in series. Information about the presence and extent of the backlash is seldom exactly known and is rather conditional upon factors such as wear, fatigue and incipient failures in the components. This paper proposes a novel backlash identification method using one-side sensing of a two-mass system. The method is based on the delayed relay operator in feedback that allows stable and controllable limit cycles to be induced and operated within the (unknown) backlash gap. The system model, with structural transformations required for the one-side backlash measurements, is given, along with the analysis of the delayed relay in velocity feedback. Experimental evaluations are shown for a two-inertia motor bench that has coupling with backlash gap of about one degree. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 90,623 |
1812.10967 | TROVE Feature Detection for Online Pose Recovery by Binocular Cameras | This paper proposes a new and efficient method to estimate 6-DoF ego-states: attitudes and positions in real time. The proposed method extracts information about ego-states by observing a feature called "TROVE" (Three Rays and One VErtex). TROVE features are projected from structures that are ubiquitous on man-made constructions and objects. The proposed method does not search for conventional corner-type features nor use Perspective-n-Point (PnP) methods, and it achieves a real-time estimation of attitudes and positions up to 60 Hz. The accuracy of attitude estimates can reach 0.3 degrees and that of position estimates can reach 2 cm in an indoor environment. The result shows a promising approach for unmanned robots to localize in an environment that is rich in man-made structures. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 117,475
2112.11483 | BACON: Deep-Learning Powered AI for Poetry Generation with Author Linguistic Style Transfer | This paper describes BACON, a basic prototype of an automatic poetry generator with author linguistic style transfer. It combines concepts and techniques from finite state machinery, probabilistic models, artificial neural networks and deep learning, to write original poetry with rich aesthetic-qualities in the style of any given author. Extrinsic evaluation of the output generated by BACON shows that participants were unable to tell the difference between human and AI-generated poems in any statistically significant way. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 272,719
1310.5426 | MLI: An API for Distributed Machine Learning | MLI is an Application Programming Interface designed to address the challenges of building Machine Learning algorithms in a distributed setting based on data-centric computing. Its primary goal is to simplify the development of high-performance, scalable, distributed algorithms. Our initial results show that, relative to existing systems, this interface can be used to build distributed implementations of a wide variety of common Machine Learning algorithms with minimal complexity and highly competitive performance and scalability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 27,899
1202.0024 | Predicting epidemic outbreak from individual features of the spreaders | Knowing which individuals can be more efficient in spreading a pathogen throughout a determinate environment is a fundamental question in disease control. Indeed, over the last years the spread of epidemic diseases and its relationship with the topology of the involved system have been a recurrent topic in complex network theory, taking into account both network models and real-world data. In this paper we explore possible correlations between the heterogeneous spread of an epidemic disease governed by the susceptible-infected-recovered (SIR) model, and several attributes of the originating vertices, considering Erd\"os-R\'enyi (ER), Barab\'asi-Albert (BA) and random geometric graphs (RGG), as well as a real case of study, the US Air Transportation Network that comprises the US 500 busiest airports along with inter-connections. Initially, the heterogeneity of the spreading is achieved considering the RGG networks, in which we analytically derive an expression for the distribution of the spreading rates among the established contacts, by assuming that such rates decay exponentially with the distance that separates the individuals. Such distribution is also considered for the ER and BA models, where we observe topological effects on the correlations. In the case of the airport network, the spreading rates are empirically defined, assumed to be directly proportional to the seat availability. Among both the theoretical and the real networks considered, we observe a high correlation between the total epidemic prevalence and the degree, as well as the strength and the accessibility of the epidemic sources. For attributes such as the betweenness centrality and the $k$-shell index, however, the correlation depends on the topology considered. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 14,037
2205.02892 | Ontology Reuse: the Real Test of Ontological Design | Reusing ontologies in practice is still very challenging, especially when multiple ontologies are (jointly) involved. Moreover, despite recent advances, the realization of systematic ontology quality assurance remains a difficult problem. In this work, the quality of thirty biomedical ontologies, and the Computer Science Ontology are investigated, from the perspective of a practical use case. Special scrutiny is given to cross-ontology references, which are vital for combining ontologies. Diverse methods to detect potential issues are proposed, including natural language processing and network analysis. Moreover, several suggestions for improving ontologies and their quality assurance processes are presented. It is argued that while the advancing automatic tools for ontology quality assurance are crucial for ontology improvement, they will not solve the problem entirely. It is ontology reuse that is the ultimate method for continuously verifying and improving ontology quality, as well as for guiding its future development. Specifically, multiple issues can be found and fixed primarily through practical and diverse ontology reuse scenarios. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 295,095 |
2208.00525 | Learning to generate Reliable Broadcast Algorithms | Modern distributed systems are supported by fault-tolerant algorithms, like Reliable Broadcast and Consensus, that assure the correct operation of the system even when some of the nodes of the system fail. However, the development of distributed algorithms is a manual and complex process, resulting in scientific papers that usually present a single algorithm or variations of existing ones. To automate the process of developing such algorithms, this work presents an intelligent agent that uses Reinforcement Learning to generate correct and efficient fault-tolerant distributed algorithms. We show that our approach is able to generate correct fault-tolerant Reliable Broadcast algorithms with the same performance of others available in the literature, in only 12,000 learning episodes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 310,887 |
2010.08844 | Finding Physical Adversarial Examples for Autonomous Driving with Fast and Differentiable Image Compositing | There is considerable evidence that deep neural networks are vulnerable to adversarial perturbations applied directly to their digital inputs. However, it remains an open question whether this translates to vulnerabilities in real systems. For example, an attack on self-driving cars would in practice entail modifying the driving environment, which then impacts the video inputs to the car's controller, thereby indirectly leading to incorrect driving decisions. Such attacks require accounting for system dynamics and tracking viewpoint changes. We propose a scalable approach for finding adversarial modifications of a simulated autonomous driving environment using a differentiable approximation for the mapping from environmental modifications (rectangles on the road) to the corresponding video inputs to the controller neural network. Given the parameters of the rectangles, our proposed differentiable mapping composites them onto pre-recorded video streams of the original environment, accounting for geometric and color variations. Moreover, we propose a multiple trajectory sampling approach that enables our attacks to be robust to a car's self-correcting behavior. When combined with a neural network-based controller, our approach allows the design of adversarial modifications through end-to-end gradient-based optimization. Using the Carla autonomous driving simulator, we show that our approach is significantly more scalable and far more effective at identifying autonomous vehicle vulnerabilities in simulation experiments than a state-of-the-art approach based on Bayesian Optimization. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 201,322
2312.09454 | Uncertainty Quantification in Machine Learning for Biosignal Applications -- A Review | Uncertainty Quantification (UQ) has gained traction in an attempt to fix the black-box nature of Deep Learning. Specifically (medical) biosignals such as electroencephalography (EEG), electrocardiography (ECG), electroocculography (EOG) and electromyography (EMG) could benefit from good UQ, since these suffer from a poor signal to noise ratio, and good human interpretability is pivotal for medical applications and Brain Computer Interfaces. In this paper, we review the state of the art at the intersection of Uncertainty Quantification and Biosignal with Machine Learning. We present various methods, shortcomings, uncertainty measures and theoretical frameworks that currently exist in this application domain. Overall it can be concluded that promising UQ methods are available, but that research is needed on how people and systems may interact with an uncertainty model in a (clinical) environment. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,729
2008.09165 | Linear Optimal Transport Embedding: Provable Wasserstein classification for certain rigid transformations and perturbations | Discriminating between distributions is an important problem in a number of scientific fields. This motivated the introduction of Linear Optimal Transportation (LOT), which embeds the space of distributions into an $L^2$-space. The transform is defined by computing the optimal transport of each distribution to a fixed reference distribution, and has a number of benefits when it comes to speed of computation and to determining classification boundaries. In this paper, we characterize a number of settings in which LOT embeds families of distributions into a space in which they are linearly separable. This is true in arbitrary dimension, and for families of distributions generated through perturbations of shifts and scalings of a fixed distribution. We also prove conditions under which the $L^2$ distance of the LOT embedding between two distributions in arbitrary dimension is nearly isometric to Wasserstein-2 distance between those distributions. This is of significant computational benefit, as one must only compute $N$ optimal transport maps to define the $N^2$ pairwise distances between $N$ distributions. We demonstrate the benefits of LOT on a number of distribution classification problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 192,630
2312.10136 | Gradient-based Parameter Selection for Efficient Fine-Tuning | With the growing size of pre-trained models, full fine-tuning and storing all the parameters for various downstream tasks is costly and infeasible. In this paper, we propose a new parameter-efficient fine-tuning method, Gradient-based Parameter Selection (GPS), demonstrating that only tuning a few selected parameters from the pre-trained model while keeping the remainder of the model frozen can generate similar or better performance compared with the full model fine-tuning method. Different from the existing popular and state-of-the-art parameter-efficient fine-tuning approaches, our method does not introduce any additional parameters and computational costs during both the training and inference stages. Another advantage is the model-agnostic and non-destructive property, which eliminates the need for any other design specific to a particular model. Compared with the full fine-tuning, GPS achieves 3.33% (91.78% vs. 88.45%, FGVC) and 9.61% (73.1% vs. 65.57%, VTAB) improvement of the accuracy with tuning only 0.36% parameters of the pre-trained model on average over 24 image classification tasks; it also demonstrates a significant improvement of 17% and 16.8% in mDice and mIoU, respectively, on medical image segmentation task. Moreover, GPS achieves state-of-the-art performance compared with existing PEFT methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 416,036 |
1904.05569 | A high quality and phonetic balanced speech corpus for Vietnamese | This paper presents a high quality Vietnamese speech corpus that can be used for analyzing Vietnamese speech characteristics as well as building speech synthesis models. The corpus consists of 5400 clean-speech utterances spoken by 12 speakers including 6 males and 6 females. The corpus is designed with phonetic balance in mind so that it can be used for speech synthesis, especially for speech adaptation approaches. Specifically, all speakers utter a common dataset containing 250 phonetically balanced sentences. To increase the variety of speech context, each speaker also utters another 200 non-shared, phonetically balanced sentences. The speakers are selected to cover a wide range of ages and come from different regions of the North of Vietnam. The audios are recorded in a soundproof studio room and sampled at 48 kHz, 16-bit PCM, mono channel. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 127,353
2406.15658 | TorchSpatial: A Location Encoding Framework and Benchmark for Spatial Representation Learning | Spatial representation learning (SRL) aims at learning general-purpose neural network representations from various types of spatial data (e.g., points, polylines, polygons, networks, images, etc.) in their native formats. Learning good spatial representations is a fundamental problem for various downstream applications such as species distribution modeling, weather forecasting, trajectory generation, geographic question answering, etc. Even though SRL has become the foundation of almost all geospatial artificial intelligence (GeoAI) research, we have not yet seen significant efforts to develop an extensive deep learning framework and benchmark to support SRL model development and evaluation. To fill this gap, we propose TorchSpatial, a learning framework and benchmark for location (point) encoding, which is one of the most fundamental data types of spatial representation learning. TorchSpatial contains three key components: 1) a unified location encoding framework that consolidates 15 commonly recognized location encoders, ensuring scalability and reproducibility of the implementations; 2) the LocBench benchmark tasks encompassing 7 geo-aware image classification and 10 geo-aware image regression datasets; 3) a comprehensive suite of evaluation metrics to quantify geo-aware model's overall performance as well as their geographic bias, with a novel Geo-Bias Score metric. Finally, we provide a detailed analysis and insights into the model performance and geographic bias of different location encoders. We believe TorchSpatial will foster future advancement of spatial representation learning and spatial fairness in GeoAI research. The TorchSpatial model framework and LocBench benchmark are available at https://github.com/seai-lab/TorchSpatial, and the Geo-Bias Score evaluation framework is available at https://github.com/seai-lab/PyGBS. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 466,811
2303.07987 | Practically Solving LPN in High Noise Regimes Faster Using Neural Networks | We conduct a systematic study of solving the learning parity with noise problem (LPN) using neural networks. Our main contribution is designing families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings where the numbers of LPN samples are abundant, very limited, and in between. In each setting we provide neural network models that solve LPN as fast as possible. For some settings we are also able to provide theories that explain the rationale of the design of our models. Comparing with the previous experiments of Esser, Kubler, and May (CRYPTO 2017), for dimension $n = 26$, noise rate $\tau = 0.498$, the ''Guess-then-Gaussian-elimination'' algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into the hybrid algorithms for solving middle or large dimension LPN instances. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 351,457
2112.14445 | Differentially-Private Clustering of Easy Instances | Clustering is a fundamental problem in data analysis. In differentially private clustering, the goal is to identify $k$ cluster centers without disclosing information on individual data points. Despite significant research progress, the problem had so far resisted practical solutions. In this work we aim at providing simple implementable differentially private clustering algorithms that provide utility when the data is "easy," e.g., when there exists a significant separation between the clusters. We propose a framework that allows us to apply non-private clustering algorithms to the easy instances and privately combine the results. We are able to get improved sample complexity bounds in some cases of Gaussian mixtures and $k$-means. We complement our theoretical analysis with an empirical evaluation on synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 273,536 |
2109.12622 | Using Soft Labels to Model Uncertainty in Medical Image Segmentation | Medical image segmentation is inherently uncertain. For a given image, there may be multiple plausible segmentation hypotheses, and physicians will often disagree on lesion and organ boundaries. To be suited to real-world application, automatic segmentation systems must be able to capture this uncertainty and variability. Thus far, this has been addressed by building deep learning models that, through dropout, multiple heads, or variational inference, can produce a set - infinite, in some cases - of plausible segmentation hypotheses for any given image. However, in clinical practice, it may not be practical to browse all hypotheses. Furthermore, recent work shows that segmentation variability plateaus after a certain number of independent annotations, suggesting that a large enough group of physicians may be able to represent the whole space of possible segmentations. Inspired by this, we propose a simple method to obtain soft labels from the annotations of multiple physicians and train models that, for each image, produce a single well-calibrated output that can be thresholded at multiple confidence levels, according to each application's precision-recall requirements. We evaluated our method on the MICCAI 2021 QUBIQ challenge, showing that it performs well across multiple medical image segmentation tasks, produces well-calibrated predictions, and, on average, performs better at matching physicians' predictions than other physicians. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 257,359 |
2207.05991 | Brick Tic-Tac-Toe: Exploring the Generalizability of AlphaZero to Novel Test Environments | Traditional reinforcement learning (RL) environments typically are the same for both the training and testing phases. Hence, current RL methods are largely not generalizable to a test environment which is conceptually similar but different from what the method has been trained on, which we term the novel test environment. As an effort to push RL research towards algorithms which can generalize to novel test environments, we introduce the Brick Tic-Tac-Toe (BTTT) test bed, where the brick position in the test environment is different from that in the training environment. Using a round-robin tournament on the BTTT environment, we show that traditional RL state-search approaches such as Monte Carlo Tree Search (MCTS) and Minimax are more generalizable to novel test environments than AlphaZero is. This is surprising because AlphaZero has been shown to achieve superhuman performance in environments such as Go, Chess and Shogi, which may lead one to think that it performs well in novel test environments. Our results show that BTTT, though simple, is rich enough to explore the generalizability of AlphaZero. We find that merely increasing MCTS lookahead iterations was insufficient for AlphaZero to generalize to some novel test environments. Rather, increasing the variety of training environments helps to progressively improve generalizability across all possible starting brick configurations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 307,735
2103.16792 | Learning Camera Localization via Dense Scene Matching | Camera localization aims to estimate 6 DoF camera poses from RGB images. Traditional methods detect and match interest points between a query image and a pre-built 3D model. Recent learning-based approaches encode scene structures into a specific convolutional neural network (CNN) and thus are able to predict dense coordinates from RGB images. However, most of them require re-training or re-adaption for a new scene and have difficulties in handling large-scale scenes due to limited network capacity. We present a new method for scene agnostic camera localization using dense scene matching (DSM), where a cost volume is constructed between a query image and a scene. The cost volume and the corresponding coordinates are processed by a CNN to predict dense coordinates. Camera poses can then be solved by PnP algorithms. In addition, our method can be extended to temporal domain, which leads to extra performance boost during testing time. Our scene-agnostic approach achieves comparable accuracy as the existing scene-specific approaches, such as KFNet, on the 7scenes and Cambridge benchmark. This approach also remarkably outperforms state-of-the-art scene-agnostic dense coordinate regression network SANet. The Code is available at https://github.com/Tangshitao/Dense-Scene-Matching. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 227,706 |
1709.08054 | Design, Modeling and Dynamic Compensation PID Control of a Fully-Actuated Aerial Manipulation System | This paper addresses the design, modeling and dynamic-compensation PID (dc-PID) control of a novel type of fully-actuated aerial manipulation (AM) system. Firstly, the design of the novel mechanical structure of the AM is presented. Secondly, the kinematics and dynamics of the AM are modeled using Craig parameters and recursive Newton-Euler equations, respectively, which yields a more accurate dynamic relationship between the aerial platform and the manipulator. Then, dynamic-compensation PID control is proposed to solve the problem of fully-actuated control of the AM. Finally, uniform coupled matrix equations between driving forces/moments and rotor speeds are derived, which can theoretically support the design and analysis of parameters and decoupling. Practical problems including noise and perturbation, parameter uncertainty, and power limitation are taken into account in simulations, and the results show that the presented AM system can be fully-actuated controlled with advanced control performance, which cannot be achieved theoretically in traditional AM. Compared to backstepping control, dc-PID has better control accuracy and disturbance-rejection capability in two simulations of aerial operation tasks with joint motion. An experiment with dc-PID proves the availability and effectiveness of the proposed method. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 81,398
1910.07755 | Reducing the Computational Complexity of Pseudoinverse for the Incremental Broad Learning System on Added Inputs | In this brief, we improve the Broad Learning System (BLS) [7] by reducing the computational complexity of the incremental learning for added inputs. We utilize the inverse of a sum of matrices in [8] to improve a step in the pseudoinverse of a row-partitioned matrix. Accordingly we propose two fast algorithms for the cases of q > k and q < k, respectively, where q and k denote the number of additional training samples and the total number of nodes, respectively. Specifically, when q > k, the proposed algorithm computes only a k * k matrix inverse, instead of a q * q matrix inverse in the existing algorithm. Accordingly it can reduce the complexity dramatically. Our simulations, which follow those for Table V in [7], show that the proposed algorithm and the existing algorithm achieve the same testing accuracy, while the speedups in BLS training time of the proposed algorithm over the existing algorithm are 1.24 - 1.30. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 149,696
2409.00046 | Rethinking Molecular Design: Integrating Latent Variable and Auto-Regressive Models for Goal Directed Generation | De novo molecule design has become a highly active research area, advanced significantly through the use of state-of-the-art generative models. Despite these advances, several fundamental questions remain unanswered as the field increasingly focuses on more complex generative models and sophisticated molecular representations as an answer to the challenges of drug design. In this paper, we return to the simplest representation of molecules, and investigate overlooked limitations of classical generative approaches, particularly Variational Autoencoders (VAEs) and auto-regressive models. We propose a hybrid model in the form of a novel regularizer that leverages the strengths of both to improve validity, conditional generation, and style transfer of molecular sequences. Additionally, we provide an in-depth discussion of overlooked assumptions of these models' behaviour. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 484,738
1803.08976 | Speech2Vec: A Sequence-to-Sequence Framework for Learning Word Embeddings from Speech | In this paper, we propose a novel deep neural network architecture, Speech2Vec, for learning fixed-length vector representations of audio segments excised from a speech corpus, where the vectors contain semantic information pertaining to the underlying spoken words, and are close to other vectors in the embedding space if their corresponding underlying spoken words are semantically similar. The proposed model can be viewed as a speech version of Word2Vec. Its design is based on a RNN Encoder-Decoder framework, and borrows the methodology of skipgrams or continuous bag-of-words for training. Learning word embeddings directly from speech enables Speech2Vec to make use of the semantic information carried by speech that does not exist in plain text. The learned word embeddings are evaluated and analyzed on 13 widely used word similarity benchmarks, and outperform word embeddings learned by Word2Vec from the transcriptions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 93,377
2206.14719 | Trial2Vec: Zero-Shot Clinical Trial Document Similarity Search using Self-Supervision | Clinical trials are essential for drug development but are extremely expensive and time-consuming to conduct. It is beneficial to study similar historical trials when designing a clinical trial. However, lengthy trial documents and lack of labeled data make trial similarity search difficult. We propose a zero-shot clinical trial retrieval method, Trial2Vec, which learns through self-supervision without annotating similar clinical trials. Specifically, the meta-structure of trial documents (e.g., title, eligibility criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge base https://www.nlm.nih.gov/research/umls/index.html) are leveraged to automatically generate contrastive samples. Besides, Trial2Vec encodes trial documents considering meta-structure thus producing compact embeddings aggregating multi-aspect information from the whole document. We show that our method yields medically interpretable embeddings by visualization and it gets a 15% average improvement over the best baselines on precision/recall for trial retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we prove the pre-trained embeddings benefit the downstream trial outcome prediction task over 240k trials. Software is available at https://github.com/RyanWangZf/Trial2Vec. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 305,378
0902.1284 | Multi-Label Prediction via Compressed Sensing | We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing for exploiting this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 3,129 |
2009.11551 | Residual Feature Distillation Network for Lightweight Image
Super-Resolution | Recent advances in single image super-resolution (SISR) explored the power of convolutional neural network (CNN) to achieve a better performance. Despite the great success of CNN-based methods, it is not easy to apply these methods to edge devices due to the requirement of heavy computation. To solve this problem, various fast and lightweight CNN models have been proposed. The information distillation network is one of the state-of-the-art methods, which adopts the channel splitting operation to extract distilled features. However, it is not clear enough how this operation helps in the design of efficient SISR models. In this paper, we propose the feature distillation connection (FDC) that is functionally equivalent to the channel splitting operation while being more lightweight and flexible. Thanks to FDC, we can rethink the information multi-distillation network (IMDN) and propose a lightweight and accurate SISR model called residual feature distillation network (RFDN). RFDN uses multiple feature distillation connections to learn more discriminative feature representations. We also propose a shallow residual block (SRB) as the main building block of RFDN so that the network can benefit most from residual learning while still being lightweight enough. Extensive experimental results show that the proposed RFDN achieves a better trade-off between performance and model complexity than state-of-the-art methods. Moreover, we propose an enhanced RFDN (E-RFDN) and won the first place in the AIM 2020 efficient super-resolution challenge. Code will be available at https://github.com/njulj/RFDN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 197,202
2008.07545 | Whitening and second order optimization both make information in the
dataset unusable during training, and can reduce or prevent generalization | Machine learning is predicated on the concept of generalization: a model achieving low error on a sufficiently large training set should also perform well on novel samples from the same distribution. We show that both data whitening and second order optimization can harm or entirely prevent generalization. In general, model training harnesses information contained in the sample-sample second moment matrix of a dataset. For a general class of models, namely models with a fully connected first layer, we prove that the information contained in this matrix is the only information which can be used to generalize. Models trained using whitened data, or with certain second order optimization schemes, have less access to this information, resulting in reduced or nonexistent generalization ability. We experimentally verify these predictions for several architectures, and further demonstrate that generalization continues to be harmed even when theoretical requirements are relaxed. However, we also show experimentally that regularized second order optimization can provide a practical tradeoff, where training is accelerated but less information is lost, and generalization can in some circumstances even improve. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 192,138 |
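The role of second-moment information is easy to illustrate: ZCA whitening maps the covariance of the data to the identity, discarding exactly the second-order structure the paper argues models rely on (shown here for the feature-feature covariance; the paper's analysis is stated for the sample-sample second moment matrix). A minimal sketch, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated 3-D data: isotropic noise pushed through a mixing matrix
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.0, 0.5]])
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)

# ZCA whitening: multiply by cov^{-1/2}
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Xw = Xc @ W

cov_w = Xw.T @ Xw / len(Xw)   # identity: the second-moment structure is gone
```

After whitening, any learner that can only exploit the data's second moments sees an isotropic cloud, which is the mechanism behind the reduced generalization the abstract describes.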
1502.06161 | Using NLP to measure democracy | This paper uses natural language processing to create the first machine-coded democracy index, which I call Automated Democracy Scores (ADS). The ADS are based on 42 million news articles from 6,043 different sources and cover all independent countries in the 1993-2012 period. Unlike the democracy indices we have today, the ADS are replicable and have standard errors small enough to actually distinguish between cases. The ADS are produced with supervised learning. Three approaches are tried: a) a combination of Latent Semantic Analysis and tree-based regression methods; b) a combination of Latent Dirichlet Allocation and tree-based regression methods; and c) the Wordscores algorithm. The Wordscores algorithm outperforms the alternatives, so it is the one on which the ADS are based. There is a web application where anyone can change the training set and see how the results change: democracy-scores.org | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 40,462
1012.3651 | Cascades on a class of clustered random networks | We present an analytical approach to determining the expected cascade size in a broad range of dynamical models on the class of random networks with arbitrary degree distribution and nonzero clustering introduced in [M.E.J. Newman, Phys. Rev. Lett. 103, 058701 (2009)]. A condition for the existence of global cascades is derived as well as a general criterion which determines whether increasing the level of clustering will increase, or decrease, the expected cascade size. Applications, examples of which are provided, include site percolation, bond percolation, and Watts' threshold model; in all cases analytical results give excellent agreement with numerical simulations. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 8,562 |
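Watts' threshold model, one of the applications named in this abstract, can be simulated directly. The sketch below is an assumption-laden illustration, not the paper's analytical method: it runs the model on an unclustered Erdos-Renyi graph (the paper analyzes clustered networks), and `watts_cascade`, its parameters, and the single-seed rule are choices made here for illustration.

```python
import random

def watts_cascade(n, avg_degree, threshold, seed=0):
    """Watts' threshold model on an Erdos-Renyi graph: a node activates
    once the active fraction of its neighbours reaches `threshold`;
    the cascade is seeded from one random node."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    active = [False] * n
    active[rng.randrange(n)] = True
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for i in range(n):
            if not active[i] and nbrs[i]:
                frac = sum(active[j] for j in nbrs[i]) / len(nbrs[i])
                if frac >= threshold:
                    active[i] = True
                    changed = True
    return sum(active) / n              # expected cascade size (fraction)

big = watts_cascade(500, 4.0, threshold=0.15, seed=0)   # low threshold: global cascade likely
small = watts_cascade(500, 4.0, threshold=0.9, seed=0)  # high threshold: cascade stays local
```

Because activation is monotone in the threshold, the low-threshold run can never be smaller than the high-threshold run on the same graph and seed.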
2304.12208 | Can ChatGPT be used to generate scientific hypotheses? | We investigate whether large language models can perform the creative hypothesis generation that human researchers regularly do. While the error rate is high, generative AI seems to be able to effectively structure vast amounts of scientific knowledge and provide interesting and testable hypotheses. The future scientific enterprise may include synergistic efforts with a swarm of "hypothesis machines", challenged by automated experimentation and adversarial peer reviews. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 360,126 |
2501.19391 | Perceptive Mixed-Integer Footstep Control for Underactuated Bipedal
Walking on Rough Terrain | Traversing rough terrain requires dynamic bipeds to stabilize themselves through foot placement without stepping in unsafe areas. Planning these footsteps online is challenging given non-convexity of the safe terrain, and imperfect perception and state estimation. This paper addresses these challenges with a full-stack perception and control system for achieving underactuated walking on discontinuous terrain. First, we develop model-predictive footstep control (MPFC), a single mixed-integer quadratic program which assumes a convex polygon terrain decomposition to optimize over discrete foothold choice, footstep position, ankle torque, template dynamics, and footstep timing at over 100 Hz. We then propose a novel approach for generating convex polygon terrain decompositions online. Our perception stack decouples safe-terrain classification from fitting planar polygons, generating a temporally consistent terrain segmentation in real time using a single CPU thread. We demonstrate the performance of our perception and control stack through outdoor experiments with the underactuated biped Cassie, achieving state of the art perceptive bipedal walking on discontinuous terrain. Supplemental Video: https://youtu.be/eCOD1bMi638 | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 529,144 |
0905.4541 | Turbo Packet Combining Strategies for the MIMO-ISI ARQ Channel | This paper addresses the issue of efficient turbo packet combining techniques for coded transmission with a Chase-type automatic repeat request (ARQ) protocol operating over a multiple-input--multiple-output (MIMO) channel with intersymbol interference (ISI). First of all, we investigate the outage probability and the outage-based power loss of the MIMO-ISI ARQ channel when optimal maximum a posteriori (MAP) turbo packet combining is used at the receiver. We show that the ARQ delay (i.e., the maximum number of ARQ rounds) does not completely translate into a diversity gain. We then introduce two efficient turbo packet combining algorithms that are inspired by minimum mean square error (MMSE)-based turbo equalization techniques. Both schemes can be viewed as low-complexity versions of the optimal MAP turbo combiner. The first scheme is called signal-level turbo combining and performs packet combining and multiple transmission ISI cancellation jointly at the signal-level. The second scheme, called symbol-level turbo combining, allows ARQ rounds to be separately turbo equalized, while combining is performed at the filter output. We conduct a complexity analysis where we demonstrate that both algorithms have almost the same computational cost as the conventional log-likelihood ratio (LLR)-level combiner. Simulation results show that both proposed techniques outperform LLR-level combining, while for some representative MIMO configurations, signal-level combining has better ISI cancellation capability and achievable diversity order than that of symbol-level combining. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,782 |
2410.07689 | When the Small-Loss Trick is Not Enough: Multi-Label Image
Classification with Noisy Labels Applied to CCTV Sewer Inspections | The maintenance of sewerage networks, with their millions of kilometers of pipe, heavily relies on efficient Closed-Circuit Television (CCTV) inspections. Many promising approaches based on multi-label image classification have leveraged databases of historical inspection reports to automate these inspections. However, the significant presence of label noise in these databases, although known, has not been addressed. While extensive research has explored the issue of label noise in single-label classification (SLC), little attention has been paid to label noise in multi-label classification (MLC). To address this, we first adapted three sample selection SLC methods (Co-teaching, CoSELFIE, and DISC) that have proven robust to label noise. Our findings revealed that sample selection based solely on the small-loss trick can handle complex label noise, but it is sub-optimal. Adapting hybrid sample selection methods to noisy MLC appeared to be a more promising approach. In light of this, we developed a novel method named MHSS (Multi-label Hybrid Sample Selection) based on CoSELFIE. Through an in-depth comparative study, we demonstrated the superior performance of our approach in dealing with both synthetic complex noise and real noise, thus contributing to the ongoing efforts towards effective automation of CCTV sewer pipe inspections. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 496,750
1902.03633 | Diverse Exploration via Conjugate Policies for Policy Gradient Methods | We address the challenge of effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing the effectiveness of DE at achieving exploration, improving policy performance, and the advantage of DE over exploration by random policy perturbations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 121,161 |
2309.01075 | Muti-Stage Hierarchical Food Classification | Food image classification serves as a fundamental and critical step in image-based dietary assessment, facilitating nutrient intake analysis from captured food images. However, existing work in food classification predominantly focuses on predicting 'food types', which do not contain direct nutritional composition information. This limitation arises from the inherent discrepancies in nutrition databases, which are tasked with associating each 'food item' with its respective information. Therefore, in this work we aim to classify food items to align with the nutrition database. To this end, we first introduce the VFN-nutrient dataset by annotating each food image in VFN with a food item that includes nutritional composition information. Such annotation of food items, being more discriminative than food types, creates a hierarchical structure within the dataset. However, since the food item annotations are solely based on nutritional composition information, they do not always show visual relations with each other, which poses significant challenges when applying deep learning-based techniques for classification. To address this issue, we then propose a multi-stage hierarchical framework for food item classification by iteratively clustering and merging food items during the training process, which allows the deep model to extract image features that are discriminative across labels. Our method is evaluated on the VFN-nutrient dataset and achieves promising results compared with existing work in terms of both food type and food item classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 389,531
1401.7289 | Spatially-Coupled MacKay-Neal Codes with No Bit Nodes of Degree Two
Achieve the Capacity of BEC | Obata et al. proved that spatially-coupled (SC) MacKay-Neal (MN) codes achieve the capacity of BEC. However, the SC-MN codes have many variable nodes of degree two and have higher error floors. In this paper, we prove that SC-MN codes with no variable nodes of degree two achieve the capacity of BEC. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,442
2212.11745 | Satellite-derived solar radiation for intra-hour and intra-day
applications: Biases and uncertainties by season and altitude | Accurate estimates of the surface solar radiation (SSR) are a prerequisite for intra-day forecasts of solar resources and photovoltaic power generation. Intra-day SSR forecasts are of interest to power traders and to operators of solar plants and power grids who seek to optimize their revenues and maintain the grid stability by matching power supply and demand. Our study analyzes systematic biases and the uncertainty of SSR estimates derived from Meteosat with the SARAH-2 and HelioMont algorithms at intra-hour and intra-day time scales. The satellite SSR estimates are analyzed based on 136 ground stations across altitudes from 200 m to 3570 m in Switzerland in 2018. We find major biases and uncertainties in the instantaneous, hourly and daily-mean SSR. In peak daytime periods, the instantaneous satellite SSR deviates from the ground-measured SSR by a mean absolute deviation (MAD) of 110.4 and 99.6 W/m2 for SARAH-2 and HelioMont, respectively. For the daytime SSR, the instantaneous, hourly and daily-mean MADs amount to 91.7, 81.1, 50.8 and 82.5, 66.7, 42.9 W/m2 for SARAH-2 and HelioMont, respectively. Further, the SARAH-2 instantaneous SSR drastically underestimates the solar resources at altitudes above 1000 m in the winter half year. A possible explanation in line with the seasonality of the bias is that snow cover may be misinterpreted as clouds at higher altitudes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 337,873
2309.03992 | ConDA: Contrastive Domain Adaptation for AI-generated Text Detection | Large language models (LLMs) are increasingly being used for generating text in a variety of use cases, including journalistic news articles. Given the potential malicious nature in which these LLMs can be used to generate disinformation at scale, it is important to build effective detectors for such AI-generated text. Given the surge in development of new LLMs, acquiring labeled training data for supervised detectors is a bottleneck. However, there might be plenty of unlabeled text data available, without information on which generator it came from. In this work we tackle this data problem, in detecting AI-generated news text, and frame the problem as an unsupervised domain adaptation task. Here the domains are the different text generators, i.e. LLMs, and we assume we have access to only the labeled source data and unlabeled target data. We develop a Contrastive Domain Adaptation framework, called ConDA, that blends standard domain adaptation techniques with the representation power of contrastive learning to learn domain invariant representations that are effective for the final unsupervised detection task. Our experiments demonstrate the effectiveness of our framework, resulting in average performance gains of 31.7% from the best performing baselines, and within 0.8% margin of a fully supervised detector. All our code and data is available at https://github.com/AmritaBh/ConDA-gen-text-detection. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 390,579 |
2001.09296 | Max-Min Fair Wireless-Powered Cell-Free Massive MIMO for Uncorrelated
Rician Fading Channels | This paper considers cell-free massive multiple-input multiple-output systems where the multiple-antenna access points (APs) assist the single-antenna user equipments (UEs) by wireless power transfer. The UEs utilize the energy harvested in the downlink to transmit uplink pilot and information signals to the APs. We consider practical Rician fading with the line-of-sight components of the channels being phase-shifted in each coherence block. The uplink spectral efficiency (SE) is derived for this model and the max-min fairness problem is considered where the optimization variables are the AP and UE power control coefficients together with the large-scale fading decoding vectors. The objective is to maximize the minimum SE of the users under APs' and UEs' transmission power constraints. An alternating optimization algorithm is proposed for the solution of the highly-coupled non-convex problem. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 161,528 |
2111.01322 | Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP | Meta-learning considers the problem of learning an efficient learning process that can leverage its past experience to accurately solve new tasks. However, the efficacy of meta-learning crucially depends on the distribution of tasks available for training, and this is often assumed to be known a priori or constructed from limited supervised datasets. In this work, we aim to provide task distributions for meta-learning by considering self-supervised tasks automatically proposed from unlabeled text, to enable large-scale meta-learning in NLP. We design multiple distributions of self-supervised tasks by considering important aspects of task diversity, difficulty, type, domain, and curriculum, and investigate how they affect meta-learning performance. Our analysis shows that all these factors meaningfully alter the task distribution, some inducing significant improvements in downstream few-shot accuracy of the meta-learned models. Empirically, results on 20 downstream tasks show significant improvements in few-shot learning -- adding up to +4.2% absolute accuracy (on average) to the previous unsupervised meta-learning method, and performing comparably to supervised methods on the FewRel 2.0 benchmark. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 264,519
2406.11780 | Split, Unlearn, Merge: Leveraging Data Attributes for More Effective
Unlearning in LLMs | Large language models (LLMs) have shown to pose social and ethical risks such as generating toxic language or facilitating malicious use of hazardous knowledge. Machine unlearning is a promising approach to improve LLM safety by directly removing harmful behaviors and knowledge. In this paper, we propose "SPlit, UNlearn, MerGE" (SPUNGE), a framework that can be used with any unlearning method to amplify its effectiveness. SPUNGE leverages data attributes during unlearning by splitting unlearning data into subsets based on specific attribute values, unlearning each subset separately, and merging the unlearned models. We empirically demonstrate that SPUNGE significantly improves the performance of two recent unlearning methods on state-of-the-art LLMs while maintaining their general capabilities on standard academic benchmarks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 465,053 |
1804.00325 | Aggregated Momentum: Stability Through Passive Damping | Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions. Its performance depends crucially on a damping coefficient $\beta$. Large $\beta$ values can potentially deliver much larger speedups, but are prone to oscillations and instability; hence one typically resorts to small values such as 0.5 or 0.9. We propose Aggregated Momentum (AggMo), a variant of momentum which combines multiple velocity vectors with different $\beta$ parameters. AggMo is trivial to implement, but significantly dampens oscillations, enabling it to remain stable even for aggressive $\beta$ values such as 0.999. We reinterpret Nesterov's accelerated gradient descent as a special case of AggMo and analyze rates of convergence for quadratic objectives. Empirically, we find that AggMo is a suitable drop-in replacement for other momentum methods, and frequently delivers faster convergence. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 93,996 |
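AggMo's update, as this abstract describes it, is simple to state: keep one velocity per damping coefficient and step along their average. A minimal sketch on an ill-conditioned quadratic; the objective, learning rate, and beta values here are illustrative choices, not taken from the paper:

```python
import numpy as np

def aggmo_step(x, grad, velocities, betas, lr):
    """One AggMo update: maintain one velocity per damping coefficient
    beta and move along the average of all velocities."""
    for i, beta in enumerate(betas):
        velocities[i] = beta * velocities[i] - grad
    return x + lr * np.mean(velocities, axis=0), velocities

# ill-conditioned quadratic f(x) = 0.5 * sum(d * x^2), gradient d * x
d = np.array([1.0, 10.0, 100.0])
betas = [0.0, 0.9, 0.99]          # includes an aggressive beta
x = np.ones(3)
vel = [np.zeros(3) for _ in betas]
for _ in range(2000):
    x, vel = aggmo_step(x, d * x, vel, betas, lr=1e-4)
loss = 0.5 * np.sum(d * x ** 2)   # starts at 55.5
```

Averaging the velocities lets the aggressive beta = 0.99 component accelerate low-curvature directions while the smaller betas damp its oscillations, which is the stability mechanism the abstract claims.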
2108.05971 | Ergonomically Intelligent Physical Human-Robot Interaction: Postural
Estimation, Assessment, and Optimization | Ergonomics and human comfort are essential concerns in physical human-robot interaction. Common practical methods in the area either fail in estimating the correct posture due to occlusion or suffer from inaccurate ergonomics models in performing postural optimization. We propose a novel alternative framework for posture estimation, assessment, and optimization for ergonomically intelligent physical human-robot interaction. We show that we can estimate human posture solely from the trajectory of the interacting robot with a median deviation of 5 deg from motion capture. We propose DULA, a differentiable ergonomics assessment tool with 99.73% accuracy compared to RULA. We use DULA in postural optimization for physical human-robot interaction tasks such as co-manipulation and teleoperation. We evaluate our framework through human and simulation experiments. | true | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 250,468
2406.10633 | fNeRF: High Quality Radiance Fields from Practical Cameras | In recent years, the development of Neural Radiance Fields has enabled a previously unseen level of photo-realistic 3D reconstruction of scenes and objects from multi-view camera data. However, previous methods use an oversimplified pinhole camera model resulting in defocus blur being `baked' into the reconstructed radiance field. We propose a modification to the ray casting that leverages the optics of lenses to enhance scene reconstruction in the presence of defocus blur. This allows us to improve the quality of radiance field reconstructions from the measurements of a practical camera with finite aperture. We show that the proposed model matches the defocus blur behavior of practical cameras more closely than pinhole models and other approximations of defocus blur models, particularly in the presence of partial occlusions. This allows us to achieve sharper reconstructions, improving the PSNR on validation of all-in-focus images, on both synthetic and real datasets, by up to 3 dB. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 464,490 |
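The thin-lens ray casting that replaces the pinhole model can be sketched as follows: every ray for a pixel passes through that pixel's point on the focal plane, while ray origins are jittered across the finite aperture. This is a generic thin-lens sampler assumed as the basic ingredient; the paper's exact lens model may differ, and the camera frame and parameter names below are choices made for illustration.

```python
import numpy as np

def thin_lens_rays(pixel_dir, focus_dist, aperture_radius, n_samples, seed=0):
    """Sample rays for one pixel under a thin-lens model: all rays pass
    through the pixel's in-focus point; origins are jittered on the
    aperture disk (camera at the origin, looking along +z)."""
    rng = np.random.default_rng(seed)
    focus_point = focus_dist * pixel_dir          # point on the focal plane
    # uniform samples on the aperture disk
    r = aperture_radius * np.sqrt(rng.uniform(size=n_samples))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_samples)
    origins = np.stack([r * np.cos(theta), r * np.sin(theta),
                        np.zeros(n_samples)], axis=1)
    dirs = focus_point - origins
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs

pixel_dir = np.array([0.1, 0.0, 1.0]) / np.linalg.norm([0.1, 0.0, 1.0])
origins, dirs = thin_lens_rays(pixel_dir, focus_dist=2.0,
                               aperture_radius=0.05, n_samples=64)
```

Setting the aperture radius to zero recovers the pinhole model exactly, which is how such a sampler degrades gracefully to the baseline.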
1902.10059 | MRS-VPR: a multi-resolution sampling based global visual place
recognition method | Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered to be one of the most successful approaches to achieving long-term localization under varying environmental conditions and changing viewpoints. It depends on a brute-force, time-consuming sequential matching method. We propose MRS-VPR, a multi-resolution, sampling-based place recognition method, which can significantly improve the matching efficiency and accuracy in sequential matching. The novelty of this method lies in the coarse-to-fine searching pipeline and a particle filter-based global sampling scheme, that can balance the matching efficiency and accuracy in the long-term navigation task. Moreover, our model works much better than SeqSLAM when the testing sequence has a much smaller scale than the reference sequence. Our experiments demonstrate that the proposed method is efficient in locating short temporary trajectories within long-term reference ones without losing accuracy compared to SeqSLAM. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 122,576 |
1212.5524 | Reinforcement learning for port-Hamiltonian systems | Passivity-based control (PBC) for port-Hamiltonian systems provides an intuitive way of achieving stabilization by rendering a system passive with respect to a desired storage function. However, in most instances the control law is obtained without any performance considerations and it has to be calculated by solving a complex partial differential equation (PDE). In order to address these issues we introduce a reinforcement learning approach into the energy-balancing passivity-based control (EB-PBC) method, which is a form of PBC in which the closed-loop energy is equal to the difference between the stored and supplied energies. We propose a technique to parameterize EB-PBC that preserves the systems's PDE matching conditions, does not require the specification of a global desired Hamiltonian, includes performance criteria, and is robust to extra non-linearities such as control input saturation. The parameters of the control law are found using actor-critic reinforcement learning, enabling learning near-optimal control policies satisfying a desired closed-loop energy landscape. The advantages are that near-optimal controllers can be generated using standard energy shaping techniques and that the solutions learned can be interpreted in terms of energy shaping and damping injection, which makes it possible to numerically assess stability using passivity theory. From the reinforcement learning perspective, our proposal allows for the class of port-Hamiltonian systems to be incorporated in the actor-critic framework, speeding up the learning thanks to the resulting parameterization of the policy. The method has been successfully applied to the pendulum swing-up problem in simulations and real-life experiments. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 20,560 |
1906.00569 | Distribution oblivious, risk-aware algorithms for multi-armed bandits
with unbounded rewards | Classical multi-armed bandit problems use the expected value of an arm as a metric to evaluate its goodness. However, the expected value is a risk-neutral metric. In many applications like finance, one is interested in balancing the expected return of an arm (or portfolio) with the risk associated with that return. In this paper, we consider the problem of selecting the arm that optimizes a linear combination of the expected reward and the associated Conditional Value at Risk (CVaR) in a fixed budget best-arm identification framework. We allow the reward distributions to be unbounded or even heavy-tailed. For this problem, our goal is to devise algorithms that are entirely distribution oblivious, i.e., the algorithm is not aware of any information on the reward distributions, including bounds on the moments/tails, or the suboptimality gaps across arms. In this paper, we provide a class of such algorithms with provable upper bounds on the probability of incorrect identification. In the process, we develop a novel estimator for the CVaR of unbounded (including heavy-tailed) random variables and prove a concentration inequality for the same, which could be of independent interest. We also compare the error bounds for our distribution oblivious algorithms with those corresponding to standard non-oblivious algorithms. Finally, numerical experiments reveal that our algorithms perform competitively when compared with non-oblivious algorithms, suggesting that distribution obliviousness can be realised in practice without incurring a significant loss of performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,439 |
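The empirical CVaR at the heart of such objectives is simple to state: average the worst (1 − alpha) fraction of observed losses. The sketch below shows only this standard empirical version, not the paper's novel concentration-backed estimator for unbounded rewards:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR_alpha: mean of the worst (1 - alpha) fraction of losses."""
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    # number of tail samples; the epsilon guards against float round-up in ceil
    k = max(1, int(np.ceil((1.0 - alpha) * n - 1e-9)))
    return losses[-k:].mean()

samples = np.arange(1.0, 101.0)           # losses 1..100
cvar95 = empirical_cvar(samples, 0.95)    # mean of the 5 largest values: 98.0
```

At alpha = 0 this reduces to the plain mean, so a linear combination of mean and CVaR interpolates between risk-neutral and risk-averse arm selection.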
2201.01929 | Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement | Recent advances in unsupervised domain adaptation (UDA) techniques have witnessed great success in cross-domain computer vision tasks, enhancing the generalization ability of data-driven deep learning architectures by bridging the domain distribution gaps. For the UDA-based cross-domain object detection methods, the majority of them alleviate the domain bias by inducing the domain-invariant feature generation via adversarial learning strategy. However, their domain discriminators have limited classification ability due to the unstable adversarial training process. Therefore, the extracted features induced by them cannot be perfectly domain-invariant and still contain domain-private factors, bringing obstacles to further alleviate the cross-domain discrepancy. To tackle this issue, we design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning. Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module, respectively. By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 274,397 |
2003.11458 | Commentaries on "Learning Sensorimotor Control with Neuromorphic
Sensors: Toward Hyperdimensional Active Perception" [Science Robotics Vol. 4
Issue 30 (2019) 1-10] | This correspondence comments on the findings reported in a recent Science Robotics article by Mitrokhin et al. [1]. The main goal of this commentary is to expand on some of the issues touched on in that article. Our experience is that hyperdimensional computing is very different from other approaches to computation and that it can take considerable exposure to its concepts before attaining practically useful understanding. Therefore, in order to provide an overview of the area to the first time reader of [1], the commentary includes a brief historic overview as well as connects the findings of the article to a larger body of literature existing in the area. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 169,606
2306.07796 | Finite Gaussian Neurons: Defending against adversarial attacks by making neural networks say "I don't know" | Since 2014, artificial neural networks have been known to be vulnerable to adversarial attacks, which can fool the network into producing wrong or nonsensical outputs by making humanly imperceptible alterations to inputs. While defenses against adversarial attacks have been proposed, they usually involve retraining a new neural network from scratch, a costly task. In this work, I introduce the Finite Gaussian Neuron (FGN), a novel neuron architecture for artificial neural networks. My work aims to: - easily convert existing models to Finite Gaussian Neuron architecture, - while preserving the existing model's behavior on real data, - and offering resistance against adversarial attacks. I show that converted and retrained Finite Gaussian Neural Networks (FGNN) always have lower confidence (i.e., are not overconfident) in their predictions over randomized and Fast Gradient Sign Method adversarial images when compared to classical neural networks, while maintaining high accuracy and confidence over real MNIST images. To further validate the capacity of Finite Gaussian Neurons to protect from adversarial attacks, I compare the behavior of FGNs to that of Bayesian Neural Networks against both randomized and adversarial images, and show how the behavior of the two architectures differs. Finally I show some limitations of the FGN models by testing them on the more complex SPEECHCOMMANDS task, against the stronger Carlini-Wagner and Projected Gradient Descent adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 373,151 |
1610.01101 | A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning | In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size $n$, our approach requires $O(n^{2/3}/\varepsilon)$ gradient evaluations to reach $\varepsilon$-accuracy and, when a certain error bound holds, the complexity improves to $O(\kappa n^{2/3}\log(1/\varepsilon))$. These rates are $n^{1/3}$ times better than those achieved by typical, full gradient methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 61,923 |
2211.08268 | A Comparative Study of Machine Learning and Deep Learning Techniques for Prediction of Co2 Emission in Cars | The most recent concern of all people on Earth is the increase in the concentration of greenhouse gas in the atmosphere. The concentration of these gases has risen rapidly over the last century and if the trend continues it can cause many adverse climatic changes. There have been ways implemented to curb this by the government by limiting processes that emit a higher amount of CO2, one such greenhouse gas. However, there is mounting evidence that the CO2 numbers supplied by the government do not accurately reflect the performance of automobiles on the road. Our proposal of using artificial intelligence techniques to improve a previously rudimentary process takes a radical tack, but it fits the bill given the situation. To determine which algorithms and models produce the greatest outcomes, we compared them all and explored a novel method of ensembling them. Further, this can be used to foretell the rise in global temperature and to ground crucial policy decisions like the adoption of electric vehicles. To estimate emissions from vehicles, we used machine learning, deep learning, and ensemble learning on a massive dataset. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 330,548 |
1510.02975 | Optimal Piecewise Linear Function Approximation for GPU-based Applications | Many computer vision and human-computer interaction applications developed in recent years need evaluating complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of functions often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It provides an improvement upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions, we have measured in detail its quality and efficiency on several functions, and, in particular, the Gaussian function because it is extensively used in many areas of computer vision and cybernetics, and it is expensive to evaluate. | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | true | 47,789 |
2406.07407 | Private Geometric Median | In this paper, we study differentially private (DP) algorithms for computing the geometric median (GM) of a dataset: Given $n$ points, $x_1,\dots,x_n$ in $\mathbb{R}^d$, the goal is to find a point $\theta$ that minimizes the sum of the Euclidean distances to these points, i.e., $\sum_{i=1}^{n} \|\theta - x_i\|_2$. Off-the-shelf methods, such as DP-GD, require strong a priori knowledge locating the data within a ball of radius $R$, and the excess risk of the algorithm depends linearly on $R$. In this paper, we ask: can we design an efficient and private algorithm with an excess error guarantee that scales with the (unknown) radius containing the majority of the datapoints? Our main contribution is a pair of polynomial-time DP algorithms for the task of private GM with an excess error guarantee that scales with the effective diameter of the datapoints. Additionally, we propose an inefficient algorithm based on the inverse smooth sensitivity mechanism, which satisfies the more restrictive notion of pure DP. We complement our results with a lower bound and demonstrate the optimality of our polynomial-time algorithms in terms of sample complexity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 463,033 |