Dataset schema (one record per paper):
  id                  string, 9–16 chars
  title               string, 4–278 chars
  abstract            string, 3–4.08k chars
  cs.HC  cs.CE  cs.SD  cs.SI  cs.AI  cs.IR  cs.LG  cs.RO  cs.CL
  cs.IT  cs.SY  cs.CV  cs.CR  cs.CY  cs.MA  cs.NE  cs.DB  Other
                      bool, 2 classes each (one multi-label category flag per column)
  __index_level_0__   int64, 0–541k
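A minimal sketch of how a record's boolean columns could be decoded back into a list of category names, assuming the column order given in the schema above; the function and variable names are illustrative, not part of the dataset.

```python
# Decode a record's 18 boolean label columns into arXiv category names.
# Column order follows the schema above; helper names are illustrative.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(record: dict) -> list:
    """Return the category names whose boolean flag is set in this record."""
    return [name for name in LABEL_COLUMNS if record.get(name)]

# Example record shaped like the first row below (the VVE paper, labeled cs.RO).
record = {c: False for c in LABEL_COLUMNS}
record.update({"id": "2212.11469", "cs.RO": True})
print(decode_labels(record))  # → ['cs.RO']
```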
2212.11469
Vehicle in Virtual Environment (VVE) Method
Autonomous vehicle (AV) algorithms need to be tested extensively to ensure that the vehicle and its passengers will be safe after deployment. Testing these algorithms in the real world creates another safety-critical issue, and real-world testing is also subject to logistical limitations, such as transporting or driving the vehicle to a particular location. For this reason, hardware-in-the-loop (HIL) simulations, as well as virtual environments such as CARLA and LG SVL, are widely used. This paper discusses a method that combines the real vehicle with the virtual world, called vehicle in virtual environment (VVE). The method projects the vehicle's location and heading into a virtual world for the desired testing, and transfers information from sensors in the virtual world back to the vehicle. As a result, while the vehicle moves in the real world, it simultaneously moves in the virtual world and obtains situational awareness via multiple virtual sensors. This allows testing with the real vehicle in a safe environment while providing additional benefits in vehicle-dynamics fidelity, logistics, and passenger-experience testing. The paper also demonstrates an example case study in which path following and the virtual sensors are used to test a radar-based stopping algorithm.
Labels: cs.RO | __index_level_0__: 337,802
1706.01594
Dynamical patterns in individual trajectories toward extremism
Society faces a fundamental global problem of understanding which individuals are currently developing strong support for some extremist entity such as ISIS (Islamic State) -- even if they never end up doing anything in the real world. The importance of online connectivity in developing intent has been confirmed by recent case-studies of already convicted terrorists. Here we identify dynamical patterns in the online trajectories that individuals take toward developing a high level of extremist support -- specifically, for ISIS. Strong memory effects emerge among individuals whose transition is fastest, and hence may become 'out of the blue' threats in the real world. A generalization of diagrammatic expansion theory helps quantify these characteristics, including the impact of changes in geographical location, and can facilitate prediction of future risks. By quantifying the trajectories that individuals follow on their journey toward expressing high levels of pro-ISIS support -- irrespective of whether they then carry out a real-world attack or not -- our findings can help move safety debates beyond reliance on static watch-list identifiers such as ethnic background or immigration status, and/or post-fact interviews with already-convicted individuals. Given the broad commonality of social media platforms, our results likely apply quite generally: for example, even on Telegram where (like Twitter) there is no built-in group feature as in our study, individuals tend to collectively build and pass through so-called super-group accounts.
Labels: cs.SI | __index_level_0__: 74,829
1906.06430
Multi-Adversarial Variational Autoencoder Networks
The unsupervised training of GANs and VAEs has enabled them to generate realistic images mimicking real-world distributions and perform image-based unsupervised clustering or semi-supervised classification. Combining the power of these two generative models, we introduce Multi-Adversarial Variational autoEncoder Networks (MAVENs), a novel network architecture that incorporates an ensemble of discriminators in a VAE-GAN network, with simultaneous adversarial learning and variational inference. We apply MAVENs to the generation of synthetic images and propose a new distribution measure to quantify the quality of the generated images. Our experimental results using datasets from the computer vision and medical imaging domains---Street View House Numbers, CIFAR-10, and Chest X-Ray datasets---demonstrate competitive performance against state-of-the-art semi-supervised models both in image generation and classification tasks.
Labels: cs.LG, cs.CV | __index_level_0__: 135,295
2205.04685
DNS based In-Browser Cryptojacking Detection
The metadata aspect of Domain Names (DNs) enables us to perform a behavioral study of DNs and detect whether a DN is involved in in-browser cryptojacking. We are thus motivated to study different temporal and behavioral aspects of DNs involved in cryptojacking. We use temporal features such as query frequency and query bursts, graph-based features such as degree and diameter, and non-temporal features such as string-based ones to detect whether a DN is suspected of involvement in in-browser cryptojacking. We then use these features to train Machine Learning (ML) algorithms over different temporal granularities, such as 2-hour datasets and the complete dataset. Our results show that the DecisionTrees classifier performs best, with 59.5% recall on cryptojacked DNs, while for unsupervised learning, K-Means with K=2 performs best. Similarity analysis of the features reveals minimal divergence between cryptojacking DNs and other already-known malicious DNs. It also reveals the need to improve the feature sets of state-of-the-art methods in order to increase their accuracy in detecting in-browser cryptojacking. As an added analysis, our signature-based analysis identifies that none of the Indian Government websites were involved in cryptojacking during October-December 2021. However, based on resource utilization, we identify 10 DNs with properties different from the others.
Labels: cs.LG, cs.CR | __index_level_0__: 295,717
2406.16833
USDC: A Dataset of $\underline{U}$ser $\underline{S}$tance and $\underline{D}$ogmatism in Long $\underline{C}$onversations
Identifying users' opinions and stances in long conversation threads on various topics can be extremely critical for enhanced personalization, market research, political campaigns, customer service, conflict resolution, targeted advertising, and content moderation. Hence, training language models to automate this task is critical. However, to train such models, gathering manual annotations has multiple challenges: 1) It is time-consuming and costly; 2) Conversation threads could be very long, increasing the chances of noisy annotations; and 3) Interpreting instances where a user changes their opinion within a conversation is difficult because such transitions are often subtle and not expressed explicitly. Inspired by the recent success of large language models (LLMs) for complex natural language processing (NLP) tasks, we leverage Mistral Large and GPT-4 to automate the human annotation process on the following two tasks while also providing reasoning: i) User Stance classification, which involves labeling a user's stance on a post in a conversation on a five-point scale; ii) User Dogmatism classification, which deals with labeling a user's overall opinion in the conversation on a four-point scale. The majority voting on zero-shot, one-shot, and few-shot annotations from these two LLMs on 764 multi-user Reddit conversations helps us curate the USDC dataset. USDC is then used to finetune and instruction-tune multiple deployable small language models for the 5-class stance and 4-class dogmatism classification tasks. We make the code and dataset publicly available [https://anonymous.4open.science/r/USDC-0F7F].
Labels: cs.AI, cs.LG, cs.CL | __index_level_0__: 467,291
cs/0611136
Neural Computation with Rings of Quasiperiodic Oscillators
We describe the use of quasiperiodic oscillators for computation and control of robots. We also describe their relationship to central pattern generators in simple organisms and develop a group theory for describing the dynamics of these systems.
Labels: cs.RO | __index_level_0__: 539,916
1302.3834
Non-Bayesian Quickest Detection with Stochastic Sample Right Constraints
In this paper, we study the design and analysis of an optimal detection scheme for sensors that are deployed to monitor changes in the environment and are powered by energy harvested from the environment. In this type of application, detection delay is of paramount importance. We model this problem as a quickest change detection problem with a stochastic energy constraint. In particular, a wireless sensor powered by renewable energy takes observations from a random sequence whose distribution changes at a certain unknown time. Such a change implies events of interest. The energy in the sensor is consumed by taking observations and is replenished randomly. The sensor cannot take observations if there is no energy left in the battery. Our goal is to design a power allocation scheme and a detection strategy that minimize the worst-case detection delay, which is the difference between the time when an alarm is raised and the time when the change occurs. Two types of average run length (ARL) constraint, namely an algorithm-level ARL constraint and a system-level ARL constraint, are considered. We propose a low-complexity scheme in which the energy allocation rule is to spend energy on taking observations as long as the battery is not empty, and the detection scheme is the Cumulative Sum test. We show that this scheme is optimal for the formulation with the algorithm-level ARL constraint and is asymptotically optimal for the formulation with the system-level ARL constraint.
Labels: cs.IT | __index_level_0__: 22,096
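The Cumulative Sum test named in the abstract above can be sketched as follows; the Gaussian pre-/post-change distributions, the threshold, and the toy data are illustrative assumptions, not the paper's exact energy-constrained setup.

```python
def cusum_detect(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Run Page's CUSUM test: W_n = max(0, W_{n-1} + LLR(x_n)); raise an
    alarm (return the sample index) when W_n crosses the threshold.
    LLR for a Gaussian mean shift: (mu1 - mu0)/sigma^2 * (x - (mu0 + mu1)/2)."""
    w = 0.0
    for n, x in enumerate(samples):
        llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)
        w = max(0.0, w + llr)
        if w >= threshold:
            return n
    return None

# Deterministic toy stream: mean ~0 before the change (index 20), mean ~1 after.
pre = [0.1, -0.2, 0.0, 0.2, -0.1] * 4
post = [1.1, 0.9, 1.0, 1.2, 0.8] * 4
alarm = cusum_detect(pre + post)  # alarm fires shortly after the change point
```

The detection delay is the gap between `alarm` and the change index; the threshold trades this delay off against the average run length to false alarm.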
2406.07063
Reconstructing the Tropical Pacific Upper Ocean using Online Data Assimilation with a Deep Learning model
A deep learning (DL) model, based on a transformer architecture, is trained on a climate-model dataset and compared with a standard linear inverse model (LIM) in the tropical Pacific. We show that the DL model produces more accurate forecasts compared to the LIM when tested on a reanalysis dataset. We then assess the ability of an ensemble Kalman filter to reconstruct the monthly-averaged upper ocean from a noisy set of 24 sea-surface temperature observations designed to mimic existing coral proxy measurements, and compare results for the DL model and LIM. Due to signal damping in the DL model, we implement a novel inflation technique by adding noise from hindcast experiments. Results show that assimilating observations with the DL model yields better reconstructions than the LIM for observation averaging times ranging from one month to one year. The improved reconstruction is due to the enhanced predictive capabilities of the DL model, which map the memory of past observations to future assimilation times.
Labels: cs.AI | __index_level_0__: 462,884
2205.08055
HelixADMET: a robust and endpoint extensible ADMET system incorporating self-supervised knowledge transfer
Accurate ADMET (an abbreviation for "absorption, distribution, metabolism, excretion, and toxicity") predictions can efficiently screen out undesirable drug candidates in the early stage of drug discovery. In recent years, multiple comprehensive ADMET systems that adopt advanced machine learning models have been developed, providing services that estimate multiple endpoints. However, those ADMET systems usually suffer from weak extrapolation ability. First, due to the lack of labelled data for each endpoint, typical machine learning models perform poorly on molecules with unobserved scaffolds. Second, most systems only provide fixed built-in endpoints and cannot be customised to satisfy various research requirements. To this end, we develop a robust and endpoint-extensible ADMET system, HelixADMET (H-ADMET). H-ADMET incorporates the concept of self-supervised learning to produce a robust pre-trained model. The model is then fine-tuned with a multi-task and multi-stage framework to transfer knowledge between ADMET endpoints, auxiliary tasks, and self-supervised tasks. Our results demonstrate that H-ADMET achieves an overall improvement of 4% over existing ADMET systems on comparable endpoints. Additionally, the pre-trained model provided by H-ADMET can be fine-tuned to generate new and customised ADMET endpoints, meeting the various demands of drug research and development.
Labels: cs.AI, cs.LG | __index_level_0__: 296,804
2011.09281
A Simulation-based Optimization Approach to Efficiently Route Air Taxis in a Cyber-Physical Network
Besides air pollution and commuter stress, traffic congestion also leads to loss of productivity, increased delays, higher vehicle operating costs, and accidents. To mitigate these issues, several logistics companies are planning to launch air taxis, electric-powered vehicles that aim to provide faster daily passenger commutes at an affordable cost. This research is one of the first to propose a centralized framework to dispatch and route flying taxis in a cyber-physical network, considering unique constraints pertaining to air taxi operations. The feasibility of the proposed approach is tested using potential air taxi demands in New York City (NYC) provided by a prior study. The results of the experimentation suggest that the minimum number of air taxis required for efficient operation in NYC is 84, functioning with an average utilization rate of 66%. In addition, the impacts of the commuter willingness-to-fly rate, the percentage of demand fulfillment, the on-road travel limit, the maximum customer wait time, and the arrival distribution on the optimal number of air taxis, the utilization rate, the number of customers served, and the cost incurred per customer are examined. Analyses show that the willingness-to-fly rate has a linear influence on the number of air taxis and the efficiency, while the on-road travel distance has an exponential impact on the performance measures. The routing and dispatching algorithm developed in this paper can be used by any company interested in venturing into the air taxi market.
Labels: cs.SY | __index_level_0__: 207,142
2205.10464
Synthesis from Satisficing and Temporal Goals
Reactive synthesis from high-level specifications that combine hard constraints expressed in Linear Temporal Logic LTL with soft constraints expressed by discounted-sum (DS) rewards has applications in planning and reinforcement learning. An existing approach combines techniques from LTL synthesis with optimization for the DS rewards but has failed to yield a sound algorithm. An alternative approach combining LTL synthesis with satisficing DS rewards (rewards that achieve a threshold) is sound and complete for integer discount factors, but, in practice, a fractional discount factor is desired. This work extends the existing satisficing approach, presenting the first sound algorithm for synthesis from LTL and DS rewards with fractional discount factors. The utility of our algorithm is demonstrated on robotic planning domains.
Labels: cs.AI, cs.RO, Other | __index_level_0__: 297,715
1805.11799
Automated proof synthesis for propositional logic with deep neural networks
This work explores the application of deep learning, a machine learning technique that uses deep neural networks (DNNs) at its core, to an automated theorem proving (ATP) problem. To this end, we construct a statistical model which quantifies the likelihood that a proof is indeed a correct proof of a given proposition. Based on this model, we give a proof-synthesis procedure that searches for a proof in order of likelihood. This procedure uses an estimator of the likelihood of an inference rule being applied at each step of a proof. As an implementation of the estimator, we propose a proposition-to-proof architecture, a DNN tailored to the automated proof synthesis problem. To empirically demonstrate its usefulness, we apply our model to synthesize proofs in propositional logic. We train the proposition-to-proof model using a training dataset of proposition-proof pairs. Evaluation against a benchmark set shows very high accuracy and an improvement over recent work on neural proof synthesis.
Labels: cs.AI, cs.LG, Other | __index_level_0__: 99,019
1504.01052
Fast algorithms for morphological operations using run-length encoded binary images
This paper presents innovative algorithms to efficiently compute erosions and dilations of run-length encoded (RLE) binary images with arbitrarily shaped structuring elements. An RLE image is given by a set of runs, where a run is a horizontal concatenation of foreground pixels. The proposed algorithms extract the skeleton of the structuring element and build distance tables of the input image, which store the distance to the next background pixel on the left- and right-hand sides. This information is then used to speed up the computation of the erosion and dilation operators by enabling techniques that skip the analysis of certain pixels whenever a hit or miss occurs. Additionally, the input image is trimmed during the preprocessing steps on the basis of two primitive criteria. Experimental results show the advantages over other algorithms. The source code of our algorithms is available in C++.
Labels: cs.IT, cs.CV, Other | __index_level_0__: 41,762
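The run-length representation described in the abstract above can be sketched as follows; storing each run as a (start, length) pair is one common convention and an assumption here, not necessarily the paper's exact encoding.

```python
def rle_encode_row(row):
    """Encode one binary image row as runs of foreground (1) pixels,
    each run stored as a (start, length) pair."""
    runs, start = [], None
    for i, px in enumerate(row):
        if px and start is None:
            start = i                        # run begins
        elif not px and start is not None:
            runs.append((start, i - start))  # run ends at a background pixel
            start = None
    if start is not None:                    # run reaches the row's right edge
        runs.append((start, len(row) - start))
    return runs

def rle_decode_row(runs, width):
    """Inverse of rle_encode_row: expand runs back into a binary row."""
    row = [0] * width
    for start, length in runs:
        for i in range(start, start + length):
            row[i] = 1
    return row

row = [0, 1, 1, 1, 0, 0, 1, 1]
runs = rle_encode_row(row)
print(runs)  # → [(1, 3), (6, 2)]
```

Erosion and dilation on this representation reduce to interval arithmetic on the runs, which is what makes skipping whole pixel spans possible.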
1402.0586
Topic Segmentation and Labeling in Asynchronous Conversations
Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many Natural Language Processing (NLP) applications. We present two new corpora of email and blog conversations annotated with topics, and evaluate annotator reliability for the segmentation and labeling tasks in these asynchronous conversations. We propose a complete computational framework for topic segmentation and labeling in asynchronous conversations. Our approach extends state-of-the-art methods by considering a fine-grained structure of an asynchronous conversation, along with other conversational features by applying recent graph-based methods for NLP. For topic segmentation, we propose two novel unsupervised models that exploit the fine-grained conversational structure, and a novel graph-theoretic supervised model that combines lexical, conversational and topic features. For topic labeling, we propose two novel (unsupervised) random walk models that respectively capture conversation specific clues from two different sources: the leading sentences and the fine-grained conversational structure. Empirical evaluation shows that the segmentation and the labeling performed by our best models beat the state-of-the-art, and are highly correlated with human annotations.
Labels: cs.CL | __index_level_0__: 30,600
2309.15555
Low Latency of Object Detection for Spiking Neural Networks
Spiking Neural Networks (SNNs), as third-generation neural networks, are well suited to edge AI applications due to their binary spike nature. However, when it comes to complex tasks like object detection, SNNs often require a substantial number of time steps to achieve high performance. This limitation significantly hampers the widespread adoption of SNNs in latency-sensitive edge devices. In this paper, our focus is on generating highly accurate and low-latency SNNs specifically for object detection. First, we systematically derive the conversion between SNNs and ANNs and analyze how to improve the consistency between them by improving the spike firing rate and reducing the quantization error. Then we propose a structural replacement, quantization of ANN activations, and a residual fix to alleviate the disparity. We evaluate our method on the challenging MS COCO and PASCAL VOC datasets and on our spike dataset. The experimental results show that the proposed method achieves higher accuracy and lower latency compared to the previous work Spiking-YOLO. The advantages of SNN processing of spike signals are also demonstrated.
Labels: cs.CV | __index_level_0__: 395,016
2011.01437
Learning Deformable Tetrahedral Meshes for 3D Reconstruction
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics. Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations. We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem. Unlike existing volumetric approaches, DefTet optimizes for both vertex placement and occupancy, and is differentiable with respect to standard 3D reconstruction loss functions. It is thus simultaneously high-precision, volumetric, and amenable to learning-based neural architectures. We show that it can represent arbitrary, complex topology, is both memory and computationally efficient, and can produce high-fidelity reconstructions with a significantly smaller grid size than alternative volumetric approaches. The predicted surfaces are also inherently defined as tetrahedral meshes, thus do not require post-processing. We demonstrate that DefTet matches or exceeds both the quality of the previous best approaches and the performance of the fastest ones. Our approach obtains high-quality tetrahedral meshes computed directly from noisy point clouds, and is the first to showcase high-quality 3D tet-mesh results using only a single image as input. Our project webpage: https://nv-tlabs.github.io/DefTet/
Labels: cs.CV | __index_level_0__: 204,582
2003.03480
A Multi-Modal States based Vehicle Descriptor and Dilated Convolutional Social Pooling for Vehicle Trajectory Prediction
Precise trajectory prediction of surrounding vehicles is critical for the decision-making of autonomous vehicles, and learning-based approaches are well recognized for their robustness. However, state-of-the-art learning-based methods ignore 1) the feasibility of using the vehicle's multi-modal state information for prediction and 2) the mutually exclusive relationship between the global traffic scene receptive fields and the local position resolution when modeling vehicles' interactions, which may influence prediction accuracy. Therefore, we propose a vehicle-descriptor-based LSTM model with dilated convolutional social pooling (VD+DCS-LSTM) to cope with the above issues. First, each vehicle's multi-modal state information is employed as our model's input, and a new vehicle descriptor encoded by stacked sparse auto-encoders is proposed to reflect the deep interactive relationships between various states, achieving optimal feature extraction and effective use of multi-modal inputs. Second, an LSTM encoder is used to encode the historical sequences composed of the vehicle descriptor, and a novel dilated convolutional social pooling is proposed to improve the modeling of vehicles' spatial interactions. Third, an LSTM decoder is used to predict the probability distribution of future trajectories based on maneuvers. The validity of the overall model was verified on the NGSIM US-101 and I-80 datasets, and our method outperforms the latest benchmark.
Labels: cs.AI | __index_level_0__: 167,232
2301.03125
Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Stochastic proximal point (SPP) methods have gained recent attention for stochastic optimization, offering strong convergence guarantees and superior robustness over classic stochastic gradient descent (SGD) methods at little to no added computational overhead. In this article, we study a minibatch variant of SPP, namely M-SPP, for solving convex composite risk minimization problems. The core contribution is a set of novel excess risk bounds of M-SPP derived through the lens of algorithmic stability theory. In particular, under smoothness and quadratic growth conditions, we show that M-SPP with minibatch size $n$ and iteration count $T$ enjoys an in-expectation fast rate of convergence consisting of an $\mathcal{O}\left(\frac{1}{T^2}\right)$ bias decaying term and an $\mathcal{O}\left(\frac{1}{nT}\right)$ variance decaying term. In the small-$n$-large-$T$ setting, this result substantially improves the best known results of SPP-type approaches by revealing the impact of the noise level of the model on the convergence rate. In the complementary small-$T$-large-$n$ regime, we provide a two-phase extension of M-SPP to achieve comparable convergence rates. Moreover, we derive a near-tight high-probability (over the randomness of the data) bound on the parameter estimation error of a sampling-without-replacement variant of M-SPP. Numerical evidence is provided to support our theoretical predictions when specialized to Lasso and logistic regression models.
Labels: cs.LG | __index_level_0__: 339,702
2007.03458
Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications
In this paper, we deepen the analysis of continuous time Fictitious Play learning algorithm to the consideration of various finite state Mean Field Game settings (finite horizon, $\gamma$-discounted), allowing in particular for the introduction of an additional common noise. We first present a theoretical convergence analysis of the continuous time Fictitious Play process and prove that the induced exploitability decreases at a rate $O(\frac{1}{t})$. Such analysis emphasizes the use of exploitability as a relevant metric for evaluating the convergence towards a Nash equilibrium in the context of Mean Field Games. These theoretical contributions are supported by numerical experiments provided in either model-based or model-free settings. We provide hereby for the first time converging learning dynamics for Mean Field Games in the presence of common noise.
Labels: cs.AI | __index_level_0__: 186,061
2205.00832
Gradient Descent, Stochastic Optimization, and Other Tales
The goal of this paper is to debunk and dispel the magic behind black-box optimizers and stochastic optimizers. It aims to build a solid foundation for how and why these techniques work, crystallizing this knowledge by deriving, from simple intuitions, the mathematics behind the strategies. This tutorial doesn't shy away from addressing both the formal and informal aspects of gradient descent and stochastic optimization methods. By doing so, it hopes to provide readers with a deeper understanding of these techniques as well as the when, the how, and the why of applying them. Gradient descent is one of the most popular algorithms for performing optimization and by far the most common way to optimize machine learning tasks. Its stochastic version has received attention in recent years, particularly for optimizing deep neural networks, where the gradient of a single sample or a batch of samples is followed to save computational resources and escape saddle points. In 1951, Robbins and Monro published \textit{A stochastic approximation method}, one of the first modern treatments of stochastic optimization, which estimates local gradients with a new batch of samples. Stochastic optimization has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this article is to give a self-contained introduction to the concepts and mathematical tools of gradient descent and stochastic optimization.
Labels: cs.LG, cs.NE | __index_level_0__: 294,390
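The single-sample gradient step described in the abstract above can be sketched on a least-squares toy problem; the learning rate, data, and function names here are illustrative assumptions, not taken from the paper.

```python
import random

def sgd_least_squares(xs, ys, lr=0.02, epochs=200, seed=0):
    """Fit y ≈ w*x by stochastic gradient descent: at each step, follow the
    gradient of the squared error on a single randomly drawn sample."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        i = rng.randrange(len(xs))
        err = w * xs[i] - ys[i]      # residual on one sample
        w -= lr * 2 * err * xs[i]    # gradient of (w*x - y)^2 w.r.t. w
    return w

# Noise-free data with true slope 3: SGD should converge close to w = 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]
w = sgd_least_squares(xs, ys)
```

Each update touches one sample instead of the full dataset, which is the computational saving the abstract refers to; minibatch SGD replaces the single index with a small random batch.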
1604.05243
Better Strategyproof Mechanisms without Payments or Prior --- An Analytic Approach
We revisit the problem of designing strategyproof mechanisms for allocating divisible items among two agents who have linear utilities, where payments are disallowed and there is no prior information on the agents' preferences. The objective is to design strategyproof mechanisms which are competitive against the most efficient (but not strategyproof) mechanism. For the case with two items: (1) We provide a set of sufficient conditions for strategyproofness. (2) We use an analytic approach to derive strategyproof mechanisms which are more competitive than all prior strategyproof mechanisms. (3) We improve the linear-program-based proof of Guo and Conitzer to show new upper bounds on competitive ratios. (4) We provide the first "mathematical" upper bound proof. For the cases with any number of items: (1) We build on the Partial Allocation mechanisms introduced by Cole et al. to design a strategyproof mechanism which is 0.67776-competitive, breaking the 2/3 barrier. (2) We propose a new subclass of strategyproof mechanisms called Dynamical-Increasing-Price mechanisms, where each agent purchases the items using virtual money, and the prices of the items depend on other agents' preferences.
Labels: cs.MA, Other | __index_level_0__: 54,783
2108.06080
TDM: Trustworthy Decision-Making via Interpretability Enhancement
Human-robot interactive decision-making is increasingly becoming ubiquitous, and trust is an influential factor in determining the reliance on autonomy. However, it is not reasonable to trust systems that are beyond our comprehension, and typical machine learning and data-driven decision-making are black-box paradigms that impede interpretability. Therefore, it is critical to establish computational trustworthy decision-making mechanisms enhanced by interpretability-aware strategies. To this end, we propose a Trustworthy Decision-Making (TDM) framework, which integrates symbolic planning into sequential decision-making. The framework learns interpretable subtasks that result in a complex, higher-level composite task that can be formally evaluated using the proposed trust metric. TDM enables the subtask-level interpretability by design and converges to an optimal symbolic plan from the learned subtasks. Moreover, a TDM-based algorithm is introduced to demonstrate the unification of symbolic planning with other sequential-decision making algorithms, reaping the benefits of both. Experimental results validate the effectiveness of trust-score-based planning while improving the interpretability of subtasks.
Labels: cs.LG | __index_level_0__: 250,496
2101.12640
Enhancing the Transformer Decoder with Transition-based Syntax
Notwithstanding recent advances, syntactic generalization remains a challenge for text decoders. While some studies showed gains from incorporating source-side symbolic syntactic and semantic structure into text generation Transformers, very little work addressed the decoding of such structure. We propose a general approach for tree decoding using a transition-based approach. Examining the challenging test case of incorporating Universal Dependencies syntax into machine translation, we present substantial improvements on test sets that focus on syntactic generalization, while presenting improved or comparable performance on standard MT benchmarks. Further qualitative analysis addresses cases where syntactic generalization in the vanilla Transformer decoder is inadequate and demonstrates the advantages afforded by integrating syntactic information.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
217,615
2308.06965
AutoAssign+: Automatic Shared Embedding Assignment in Streaming Recommendation
In the domain of streaming recommender systems, conventional methods for addressing new user IDs or item IDs typically involve assigning initial ID embeddings randomly. However, this practice results in two practical challenges: (i) Items or users with limited interactive data may yield suboptimal prediction performance. (ii) Embedding new IDs or low-frequency IDs necessitates consistently expanding the embedding table, leading to unnecessary memory consumption. In light of these concerns, we introduce a reinforcement learning-driven framework, namely AutoAssign+, that facilitates Automatic Shared Embedding Assignment Plus. To be specific, AutoAssign+ utilizes an Identity Agent as an actor network, which plays a dual role: (i) Representing low-frequency IDs field-wise with a small set of shared embeddings to enhance the embedding initialization, and (ii) Dynamically determining which ID features should be retained or eliminated in the embedding table. The policy of the agent is optimized with the guidance of a critic network. To evaluate the effectiveness of our approach, we perform extensive experiments on three commonly used benchmark datasets. Our experiment results demonstrate that AutoAssign+ is capable of significantly enhancing recommendation performance by mitigating the cold-start problem. Furthermore, our framework yields a reduction in memory usage of approximately 20-30%, verifying its practical effectiveness and efficiency for streaming recommender systems.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
385,348
1904.08966
Non-Stationary Polar Codes for Resistive Memories
Resistive memories are considered a promising memory technology enabling high storage densities with in-memory computing capabilities. However, the readout reliability of resistive memories is impaired due to the inevitable existence of wire resistance, resulting in the sneak path problem. Motivated by this problem, we study polar coding over channels with different reliability levels, termed non-stationary polar codes, and we propose a technique improving their bit error rate (BER) performance. We then apply the framework of non-stationary polar codes to the crossbar array and evaluate its BER performance under two modeling approaches, namely binary symmetric channels (BSCs) and binary asymmetric channels (BACs). Finally, we propose a technique for biasing the proportion of high-resistance states in the crossbar array and show its advantage in further reducing the BER. Several simulations are carried out using a SPICE-like simulator, exhibiting significant reduction in BER.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
128,230
2108.11483
Heavy-tailed Streaming Statistical Estimation
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples. This could also be viewed as stochastic optimization under heavy-tailed distributions, with an additional $O(p)$ space complexity constraint. We design a clipped stochastic gradient descent algorithm and provide an improved analysis, under a more nuanced condition on the noise of the stochastic gradients, which we show is critical when analyzing stochastic optimization problems arising from general statistical estimation problems. Our results guarantee convergence not just in expectation but with exponential concentration, and moreover do so using $O(1)$ batch size. We provide consequences of our results for mean estimation and linear regression. Finally, we provide empirical corroboration of our results and algorithms via synthetic experiments for mean estimation and linear regression.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
252,188
1105.0510
Voting in a Stochastic Environment: The Case of Two Groups
Social dynamics determined by voting in a stochastic environment is analyzed for a society composed of two cohesive groups of similar size. Within the model of random walks determined by voting, explicit formulas are derived for the capital increments of the groups against the parameters of the environment and "claim thresholds" of the groups. The "unanimous acceptance" and "unanimous rejection" group rules are considered as the voting procedures. Claim thresholds are evaluated that are most beneficial to the participants of the groups and to the society as a whole.
false
false
false
true
false
false
false
false
false
false
true
false
false
false
true
false
false
false
10,229
2401.03904
Guided Time-optimal Model Predictive Control of a Multi-rotor
Time-optimal control of a multi-rotor remains an open problem due to the under-actuation and nonlinearity of its dynamics, which make it difficult to solve this problem directly. In this paper, the time-optimal control problem of the multi-rotor is studied. Firstly, a thrust limit optimal decomposition method is proposed, which can reasonably decompose the limited thrust into three directions according to the current state and the target state. As a result, the thrust limit constraint is decomposed as a linear constraint. With the linear constraint and decoupled dynamics, a time-optimal guidance trajectory can be obtained. Then, a cost function is defined based on the time-optimal guidance trajectory, which has a quadratic form and can be used to evaluate the time-optimal performance of the system outputs. Finally, based on the cost function, the time-optimal control problem is reformulated as an MPC (Model Predictive Control) problem. The experimental results demonstrate the feasibility and validity of the proposed methods.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
420,267
1301.6626
Discriminative Feature Selection for Uncertain Graph Classification
Mining discriminative features for graph data has attracted much attention in recent years due to its important role in constructing graph classifiers, generating graph indices, etc. Most measures of interestingness of discriminative subgraph features are defined on certain graphs, where the structure of graph objects is certain, and the binary edges within each graph represent the "presence" of linkages among the nodes. In many real-world applications, however, the linkage structure of the graphs is inherently uncertain. Therefore, existing measures of interestingness based upon certain graphs are unable to capture the structural uncertainty in these applications effectively. In this paper, we study the problem of discriminative subgraph feature selection from uncertain graphs. This problem is challenging and different from conventional subgraph mining problems because both the structure of the graph objects and the discrimination score of each subgraph feature are uncertain. To address these challenges, we propose a novel discriminative subgraph feature selection method, DUG, which can find discriminative subgraph features in uncertain graphs based upon different statistical measures including expectation, median, mode and phi-probability. We first compute the probability distribution of the discrimination scores for each subgraph feature based on dynamic programming. Then a branch-and-bound algorithm is proposed to search for discriminative subgraphs efficiently. Extensive experiments on various neuroimaging applications (i.e., Alzheimer's Disease, ADHD and HIV) have been performed to analyze the gain in performance by taking into account structural uncertainties in identifying discriminative subgraph features for graph classification.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
21,461
2011.05559
Learning Bayes Filter Models for Tactile Localization
Localizing and tracking the pose of robotic grippers are necessary skills for manipulation tasks. However, the manipulators with imprecise kinematic models (e.g. low-cost arms) or manipulators with unknown world coordinates (e.g. poor camera-arm calibration) cannot locate the gripper with respect to the world. In these circumstances, we can leverage tactile feedback between the gripper and the environment. In this paper, we present learnable Bayes filter models that can localize robotic grippers using tactile feedback. We propose a novel observation model that conditions the tactile feedback on visual maps of the environment along with a motion model to recursively estimate the gripper's location. Our models are trained in simulation with self-supervision and transferred to the real world. Our method is evaluated on a tabletop localization task in which the gripper interacts with objects. We report results in simulation and on a real robot, generalizing over different sizes, shapes, and configurations of the objects.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
205,962
2405.18520
Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL
Off-policy reinforcement learning (RL) has achieved notable success in tackling many complex real-world tasks, by leveraging previously collected data for policy learning. However, most existing off-policy RL algorithms fail to maximally exploit the information in the replay buffer, limiting sample efficiency and policy performance. In this work, we discover that concurrently training an offline RL policy based on the shared online replay buffer can sometimes outperform the original online learning policy, though the occurrence of such performance gains remains uncertain. This motivates a new possibility of harnessing the emergent outperforming offline optimal policy to improve online policy learning. Based on this insight, we present Offline-Boosted Actor-Critic (OBAC), a model-free online RL framework that elegantly identifies the outperforming offline policy through value comparison, and uses it as an adaptive constraint to guarantee stronger policy learning performance. Our experiments demonstrate that OBAC outperforms other popular model-free RL baselines and rivals advanced model-based RL methods in terms of sample efficiency and asymptotic performance across 53 tasks spanning 6 task suites.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
458,456
1905.02085
Pixel-wise Regression: 3D Hand Pose Estimation via Spatial-form Representation and Differentiable Decoder
3D hand pose estimation from a single depth image is an essential topic in computer vision and human-computer interaction. Although the rise of deep learning methods has boosted accuracy considerably, the problem remains hard to solve due to the complex structure of the human hand. Existing deep learning methods either lose spatial information of the hand structure or lack direct supervision of joint coordinates. In this paper, we propose a novel Pixel-wise Regression method, which uses spatial-form representation (SFR) and a differentiable decoder (DD) to solve these two problems. To apply our method, we build a model in which we design a particular SFR and its correlative DD, which divide the 3D joint coordinates into two parts, plane coordinates and depth coordinates, and use two modules named Plane Regression (PR) and Depth Regression (DR) to handle them respectively. We conduct an ablation experiment to show that the proposed method achieves better results than the former methods. We also explore how different training strategies influence the learned SFRs and results. Experiments on three public datasets demonstrate that our model is comparable with existing state-of-the-art models, and on one of them our model reduces the mean 3D joint error by 25%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
129,891
2208.08896
Adversarial Learning Based Structural Brain-network Generative Model for Analyzing Mild Cognitive Impairment
Mild cognitive impairment (MCI) is a precursor of Alzheimer's disease (AD), and the detection of MCI is of great clinical significance. Analyzing the structural brain networks of patients is vital for the recognition of MCI. However, the current studies on structural brain networks are totally dependent on specific toolboxes, which is time-consuming and subjective. Few tools can obtain the structural brain networks from brain diffusion tensor images. In this work, an adversarial learning-based structural brain-network generative model (SBGM) is proposed to directly learn the structural connections from brain diffusion tensor images. By analyzing the differences in structural brain networks across subjects, we found that the structural brain networks of subjects showed a consistent trend from elderly normal controls (NC) to early mild cognitive impairment (EMCI) to late mild cognitive impairment (LMCI): structural connectivity progressed in a progressively weaker direction as the condition worsened. In addition, our proposed model tri-classifies EMCI, LMCI, and NC subjects, achieving a classification accuracy of 83.33% on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
313,527
2409.04737
CrysAtom: Distributed Representation of Atoms for Crystal Property Prediction
Application of artificial intelligence (AI) has been ubiquitous in the growth of research in the areas of basic sciences. Frequent use of machine learning (ML) and deep learning (DL) based methodologies by researchers has resulted in significant advancements in the last decade. These techniques led to notable performance enhancements in different tasks such as protein structure prediction, drug-target binding affinity prediction, and molecular property prediction. In the materials science literature, it is well-known that crystalline materials exhibit topological structures. Such topological structures may be represented as graphs, and utilization of graph neural network (GNN) based approaches could help encode them into an augmented representation space. Primarily, such frameworks adopt supervised learning techniques targeted towards downstream property prediction tasks on the basis of electronic properties (formation energy, bandgap, total energy, etc.) and crystalline structures. Generally, such frameworks rely heavily on handcrafted atom feature representations along with the structural representations. In this paper, we propose an unsupervised framework, namely CrysAtom, using untagged crystal data to generate dense vector representations of atoms, which can be utilized in existing GNN-based property predictor models to accurately predict important properties of crystals. Empirical results show that our dense representation embeds chemical properties of atoms and enhances the performance of the baseline property predictor models significantly.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
486,485
2309.07428
Physical Invisible Backdoor Based on Camera Imaging
A backdoor attack aims to compromise a model, which returns an adversary-wanted output when a specific trigger pattern appears yet behaves normally for clean inputs. Current backdoor attacks require changing the pixels of clean images, which results in poor stealthiness of attacks and increases the difficulty of physical implementation. This paper proposes a novel physical invisible backdoor based on camera imaging without changing natural image pixels. Specifically, a compromised model returns a target label for images taken by a particular camera, while it returns correct results for other images. To implement and evaluate the proposed backdoor, we take shots of different objects from multiple angles using multiple smartphones to build a new dataset of 21,500 images. Conventional backdoor attacks work ineffectively with some classical models, such as ResNet18, over the above-mentioned dataset. Therefore, we propose a three-step training strategy to mount the backdoor attack. First, we design and train a camera identification model with the phone IDs to extract the camera fingerprint feature. Subsequently, we elaborate a special network architecture, which is easily compromised by our backdoor attack, by leveraging the attributes of the CFA interpolation algorithm and combining it with the feature extraction block in the camera identification model. Finally, we transfer the backdoor from the elaborated special network architecture to the classical architecture model via teacher-student distillation learning. Since the trigger of our method is related to the specific phone, our attack works effectively in the physical world. Experiment results demonstrate the feasibility of our proposed approach and robustness against various backdoor defenses.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
391,778
2102.08742
SPAN: a Simple Predict & Align Network for Handwritten Paragraph Recognition
Unconstrained handwriting recognition is an essential task in document analysis. It is usually carried out in two steps. First, the document is segmented into text lines. Second, an Optical Character Recognition model is applied on these line images. We propose the Simple Predict & Align Network: an end-to-end recurrence-free Fully Convolutional Network performing OCR at paragraph level without any prior segmentation stage. The framework is as simple as the one used for the recognition of isolated lines and we achieve competitive results on three popular datasets: RIMES, IAM and READ 2016. The proposed model does not require any dataset adaptation, it can be trained from scratch, without segmentation labels, and it does not require line breaks in the transcription labels. Our code and trained model weights are available at https://github.com/FactoDeepLearning/SPAN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
220,554
2103.12066
Machine learning based in situ quality estimation by molten pool condition-quality relations modeling using experimental data
The advancement of machine learning promises the ability to accelerate the adoption of new processes and property designs for metal additive manufacturing. The molten pool geometry and molten pool temperature are significant indicators of the final part's geometric shape and microstructural properties for the wire-feed laser direct energy deposition process. Thus, the molten pool condition-property relations are of primary importance for in situ quality assurance. To enable in situ quality monitoring of bead geometry and characterization properties, we need to continuously monitor the sensor data for molten pool dimensions and temperature for the wire-feed laser additive manufacturing (WLAM) system. We first develop a machine learning convolutional neural network (CNN) model for establishing the correlations from the measurable molten pool image and temperature data directly to the geometric shape and microstructural properties. The multi-modality network receives both the camera image and temperature measurement as inputs, yielding the corresponding characterization properties of the final build part (e.g., fusion zone depth, alpha lath thickness). The performance of the CNN model is compared with a regression model as a baseline. The developed models enable molten pool condition-quality relations mapping for building a quantitative and collaborative in situ quality estimation and assurance framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
226,045
1908.07162
Discriminative Topic Mining via Category-Name Guided Text Embedding
Mining a set of meaningful and distinctive topics automatically from massive text corpora has broad applications. Existing topic models, however, typically work in a purely unsupervised way and often generate topics that do not fit users' particular needs, yielding suboptimal performance on downstream tasks. We propose a new task, discriminative topic mining, which leverages a set of user-provided category names to mine discriminative topics from text corpora. This new task not only helps a user understand clearly and distinctively the topics he/she is most interested in, but also directly benefits keyword-driven classification tasks. We develop CatE, a novel category-name guided text embedding method for discriminative topic mining, which effectively leverages minimal user guidance to learn a discriminative embedding space and discover category representative terms in an iterative manner. We conduct a comprehensive set of experiments to show that CatE mines a high-quality set of topics guided by category names only, and benefits a variety of downstream applications including weakly-supervised classification and lexical entailment direction identification.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
142,225
2208.13898
Conjugate Natural Selection
We prove that Fisher-Rao natural gradient descent (FR-NGD) optimally approximates the continuous time replicator equation (an essential model of evolutionary dynamics), and term this correspondence "conjugate natural selection". This correspondence promises alternative approaches for evolutionary computation over continuous or high-dimensional hypothesis spaces. As a special case, FR-NGD also provides the optimal approximation of continuous Bayesian inference when hypotheses compete on the basis of predicting actual observations. In this case, the method avoids the need to compute prior probabilities. We demonstrate our findings on a non-convex optimization problem and a system identification task for a stochastic process with time-varying parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
315,162
2307.11357
Bridging the Reality Gap of Reinforcement Learning based Traffic Signal Control using Domain Randomization and Meta Learning
Reinforcement Learning (RL) has been widely explored in Traffic Signal Control (TSC) applications; however, no such system has yet been deployed in practice. A key barrier to progress in this area is the reality gap, the discrepancy that results from differences between simulation models and their real-world equivalents. In this paper, we address this challenge by first presenting a comprehensive analysis of potential simulation parameters that contribute to this reality gap. We then also examine two promising strategies that can bridge this gap: Domain Randomization (DR) and Model-Agnostic Meta-Learning (MAML). Both strategies were trained with a traffic simulation model of an intersection. In addition, the model was embedded in LemgoRL, a framework that integrates realistic, safety-critical requirements into the control system. Subsequently, we evaluated the performance of the two methods on a separate model of the same intersection that was developed with a different traffic simulator. In this way, we mimic the reality gap. Our experimental results show that both DR and MAML outperform a state-of-the-art RL algorithm, therefore highlighting their potential to mitigate the reality gap in RL-based TSC systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
380,883
2112.04344
Does Structure Matter? Leveraging Data-to-Text Generation for Answering Complex Information Needs
In this work, our aim is to provide a structured answer in natural language to a complex information need. Particularly, we envision using generative models from the perspective of data-to-text generation. We propose the use of a content selection and planning pipeline which aims at structuring the answer by generating intermediate plans. The experimental evaluation is performed using the TREC Complex Answer Retrieval (CAR) dataset. We evaluate both the generated answer and its corresponding structure and show the effectiveness of planning-based models in comparison to a text-to-text model.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
270,495
2401.06992
Progressive Feature Fusion Network for Enhancing Image Quality Assessment
Image compression has been applied in the fields of image storage and video broadcasting. However, it is formidably tough to distinguish the subtle quality differences between distorted images generated by different algorithms. In this paper, we propose a new image quality assessment framework to decide which image is better within an image group. To capture the subtle differences, a fine-grained network is adopted to acquire multi-scale features. Subsequently, we design a cross subtract block for separating and gathering the information within positive and negative image pairs, enabling image comparison in feature space. After that, a progressive feature fusion block is designed, which fuses multi-scale features in a novel progressive way. Hierarchical spatial 2D features can thus be processed gradually. Experimental results show that, compared with the current mainstream image quality assessment methods, the proposed network achieves more accurate image quality assessment and ranks second in the image perceptual model track of the CLIC benchmark.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
421,376
2310.10266
Generative Calibration for In-context Learning
As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing. While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We call our approach generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, and generally find that the proposed method greatly and consistently outperforms ICL as well as state-of-the-art calibration methods, by up to 27% absolute in macro-F1. Meanwhile, the proposed method is also stable under different prompt configurations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
400,154
2103.05793
Universal Approximation of Residual Flows in Maximum Mean Discrepancy
Normalizing flows are a class of flexible deep generative models that offer easy likelihood computation. Despite their empirical success, there is little theoretical understanding of their expressiveness. In this work, we study residual flows, a class of normalizing flows composed of Lipschitz residual blocks. We prove residual flows are universal approximators in maximum mean discrepancy. We provide upper bounds on the number of residual blocks to achieve approximation under different assumptions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
224,087
2502.05305
Online Covariance Estimation in Nonsmooth Stochastic Approximation
We consider applying stochastic approximation (SA) methods to solve nonsmooth variational inclusion problems. Existing studies have shown that the averaged iterates of SA methods exhibit asymptotic normality, with an optimal limiting covariance matrix in the local minimax sense of H\'ajek and Le Cam. However, no methods have been proposed to estimate this covariance matrix in a nonsmooth and potentially non-monotone (nonconvex) setting. In this paper, we study an online batch-means covariance matrix estimator introduced in Zhu et al.(2023). The estimator groups the SA iterates appropriately and computes the sample covariance among batches as an estimate of the limiting covariance. Its construction does not require prior knowledge of the total sample size, and updates can be performed recursively as new data arrives. We establish that, as long as the batch size sequence is properly specified (depending on the stepsize sequence), the estimator achieves a convergence rate of order $O(\sqrt{d}n^{-1/8+\varepsilon})$ for any $\varepsilon>0$, where $d$ and $n$ denote the problem dimensionality and the number of iterations (or samples) used. Although the problem is nonsmooth and potentially non-monotone (nonconvex), our convergence rate matches the best-known rate for covariance estimation methods using only first-order information in smooth and strongly-convex settings. The consistency of this covariance estimator enables asymptotically valid statistical inference, including constructing confidence intervals and performing hypothesis testing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
531,542
2403.11497
A Sober Look at the Robustness of CLIPs to Spurious Features
Large vision language models, such as CLIP, demonstrate greater robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, which aim to capture the spurious features inherent in ImageNet. Benchmarking CLIP models based on the ImageNet-oriented spurious features may not be sufficient to reflect the extent to which CLIP models are robust to spurious correlations within CLIP training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to the backgrounds, and then identify a pair of groups for each class where a CLIP model shows high-performance drops across the two groups. Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pre-train data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we also re-evaluate strategies such as scaling up parameters and high-quality pre-trained data. We find that they still help mitigate the spurious features, providing a promising path for future developments.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
438,729
2012.14654
The Adaptive Dynamic Programming Toolbox
The paper develops the Adaptive Dynamic Programming Toolbox (ADPT), which solves optimal control problems for continuous-time nonlinear systems. Based on the adaptive dynamic programming technique, the ADPT computes optimal feedback controls from the system dynamics in the model-based working mode, or from measurements of trajectories of the system in the model-free working mode without requiring knowledge of the system model. Multiple options are provided such that the ADPT can accommodate various customized circumstances. Compared to other popular software toolboxes for optimal control, the ADPT offers advantages in computational precision and speed, which is illustrated with its application to a satellite attitude control problem.
false
false
false
false
true
false
true
true
false
false
true
false
false
false
false
false
false
false
213,573
2301.10260
Learned Interferometric Imaging for the SPIDER Instrument
The Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is an optical interferometric imaging device that aims to offer an alternative to the large space telescope designs of today with reduced size, weight and power consumption. This is achieved through interferometric imaging. State-of-the-art methods for reconstructing images from interferometric measurements adopt proximal optimization techniques, which are computationally expensive and require handcrafted priors. In this work we present two data-driven approaches for reconstructing images from measurements made by the SPIDER instrument. These approaches use deep learning to learn prior information from training data, increasing the reconstruction quality, and significantly reducing the computation time required to recover images by orders of magnitude. Reconstruction time is reduced to ${\sim} 10$ milliseconds, opening up the possibility of real-time imaging with SPIDER for the first time. Furthermore, we show that these methods can also be applied in domains where training data is scarce, such as astronomical imaging, by leveraging transfer learning from domains where plenty of training data are available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
341,752
2310.14557
The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages
Instruction-tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance in a wide range of tasks. Despite numerous recent studies that examine the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly from SM not being adequately represented in any of the existing benchmarks. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). SPARROW datasets encompass 64 different languages originating from 12 language families representing 16 writing scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction-tuned LLMs still struggle to understand SM across various languages, performing close to a random baseline in some cases. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific finetuned models with a gap of 12.19 SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
401,915
2308.11732
(Un)fair Exposure in Deep Face Rankings at a Distance
Law enforcement regularly faces the challenge of ranking suspects from their facial images. Deep face models aid this process but frequently introduce biases that disproportionately affect certain demographic segments. While bias investigation is common in domains like job candidate ranking, the field of forensic face rankings remains underexplored. In this paper, we propose a novel experimental framework, encompassing six state-of-the-art face encoders and two public data sets, designed to scrutinize the extent to which demographic groups suffer from biases in exposure in the context of forensic face rankings. Through comprehensive experiments that cover both re-identification and identification tasks, we show that exposure biases within this domain are far from being countered, demanding attention towards establishing ad-hoc policies and corrective measures. The source code is available at https://github.com/atzoriandrea/ijcb2023-unfair-face-rankings
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
387,251
2208.12118
A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization Method
Hyperparameter optimization in machine learning is often achieved using naive techniques that only lead to an approximate set of hyperparameters. Although techniques such as Bayesian optimization perform an intelligent search on a given domain of hyperparameters, they do not guarantee an optimal solution. A major drawback of most of these approaches is an exponential increase of their search domain with the number of hyperparameters, increasing the computational cost and making the approaches slow. The hyperparameter optimization problem is inherently a bilevel optimization task, and some studies have attempted bilevel solution methodologies for solving this problem. However, these studies assume a unique set of model weights that minimize the training loss, which is generally violated by deep learning architectures. This paper discusses a gradient-based bilevel method addressing these drawbacks for solving the hyperparameter optimization problem. The proposed method can handle continuous hyperparameters, for which we have chosen the regularization hyperparameter in our experiments. The method guarantees convergence to the set of optimal hyperparameters, which this study has theoretically proven. The idea is based on approximating the lower-level optimal value function using Gaussian process regression. As a result, the bilevel problem is reduced to a single-level constrained optimization task that is solved using the augmented Lagrangian method. We have performed an extensive computational study on the MNIST and CIFAR-10 datasets on multi-layer perceptron and LeNet architectures that confirms the efficiency of the proposed method. A comparative study against grid search, random search, Bayesian optimization, and the HyperBand method on various hyperparameter problems shows that the proposed algorithm converges with lower computation and leads to models that generalize better on the testing set.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
314,628
2501.13888
Multimodal Sensor Dataset for Monitoring Older Adults Post Lower-Limb Fractures in Community Settings
Lower-Limb Fractures (LLF) are a major health concern for older adults, often leading to reduced mobility and prolonged recovery, potentially impairing daily activities and independence. During recovery, older adults frequently face social isolation and functional decline, complicating rehabilitation and adversely affecting physical and mental health. Multi-modal sensor platforms that continuously collect data and analyze it using machine-learning algorithms can remotely monitor this population and infer health outcomes. They can also alert clinicians to individuals at risk of isolation and decline. This paper presents a new publicly available multi-modal sensor dataset, MAISON-LLF, collected from older adults recovering from LLF in community settings. The dataset includes data from smartphone and smartwatch sensors, motion detectors, sleep-tracking mattresses, and clinical questionnaires on isolation and decline. The dataset was collected from ten older adults living alone at home for eight weeks each, totaling 560 days of 24-hour sensor data. For technical validation, supervised machine-learning and deep-learning models were developed using the sensor and clinical questionnaire data, providing a foundational comparison for the research community.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
526,862
1410.7613
A Short Image Series Based Scheme for Time Series Digital Image Correlation
A new scheme for digital image correlation, i.e., short time series DIC (STS-DIC) is proposed. Instead of processing the original deformed speckle images individually, STS-DIC combines several adjacent deformed speckle images from a short time series and then processes the averaged image, for which deformation continuity over time is introduced. The deformation of several adjacent images is assumed to be linear in time and a new spatial-temporal displacement representation method with eight unknowns is presented based on the subset-based representation method. Then, the model of STS-DIC is created and a solving scheme is developed based on the Newton-Raphson iteration. The proposed method is verified for numerical and experimental cases. The results show that the proposed STS-DIC greatly improves the accuracy of traditional DIC, both under simple and complicated deformation conditions, while retaining acceptable actual computational cost.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
37,085
2501.16250
Runtime Analysis of the Compact Genetic Algorithm on the LeadingOnes Benchmark
The compact genetic algorithm (cGA) is one of the simplest estimation-of-distribution algorithms (EDAs). Next to the univariate marginal distribution algorithm (UMDA) -- another simple EDA -- the cGA has been subject to extensive mathematical runtime analyses, often showcasing a similar or even superior performance to competing approaches. Surprisingly though, to date and in contrast to the UMDA and many other heuristics, we lack a rigorous runtime analysis of the cGA on the LeadingOnes benchmark -- one of the most studied theory benchmarks in the domain of evolutionary computation. We fill this gap in the literature by conducting a formal runtime analysis of the cGA on LeadingOnes. For the cGA's single parameter -- called the hypothetical population size -- at least polylogarithmically larger than the problem size, we prove that the cGA samples the optimum of LeadingOnes with high probability within a number of function evaluations quasi-linear in the problem size and linear in the hypothetical population size. For the best hypothetical population size, our result matches, up to polylogarithmic factors, the typical quadratic runtime that many randomized search heuristics exhibit on LeadingOnes. Our analysis exhibits some noteworthy differences in the working principles of the two algorithms which were not visible in previous works.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
527,878
2406.01364
BELLS: A Framework Towards Future Proof Benchmarks for the Evaluation of LLM Safeguards
Input-output safeguards are used to detect anomalies in the traces produced by Large Language Model (LLM) systems. These detectors are at the core of diverse safety-critical applications such as real-time monitoring, offline evaluation of traces, and content moderation. However, there is no widely recognized methodology to evaluate them. To fill this gap, we introduce the Benchmarks for the Evaluation of LLM Safeguards (BELLS), a structured collection of tests, organized into three categories: (1) established failure tests, based on already-existing benchmarks for well-defined failure modes, aiming to compare the performance of current input-output safeguards; (2) emerging failure tests, to measure generalization to never-seen-before failure modes and encourage the development of more general safeguards; (3) next-gen architecture tests, for more complex scaffolding (such as LLM-agents and multi-agent systems), aiming to foster the development of safeguards that could adapt to future applications for which no safeguard currently exists. Furthermore, we implement and share the first next-gen architecture test, using the MACHIAVELLI environment, along with an interactive visualization of the dataset.
false
false
false
false
true
false
false
false
true
false
false
false
true
false
false
false
false
false
460,283
2110.13665
Bootstrapping Concept Formation in Small Neural Networks
The question of how neural systems (of humans) can perform reasoning is still far from being solved. We posit that the process of forming Concepts is a fundamental step required for this. We argue that, first, Concepts are formed as closed representations, which are then consolidated by relating them to each other. Here we present a model system (agent) with a small neural network that uses realistic learning rules and receives only feedback from the environment in which the agent performs virtual actions. First, the actions of the agent are reflexive. In the process of learning, statistical regularities in the input lead to the formation of neuronal pools representing relations between the entities observed by the agent from its artificial world. This information then influences the behavior of the agent via feedback connections, replacing the initial reflex by an action driven by these relational representations. We hypothesize that the neuronal pools representing relational information can be considered as primordial Concepts, which may in a similar way be present in some pre-linguistic animals, too. This system provides formal grounds for further discussions on what could be understood as a Concept and shows that associative learning is enough to develop concept-like structures.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
263,272
2206.14713
CONVIQT: Contrastive Video Quality Estimator
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms. Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner. Distortion type identification and degradation level determination is employed as an auxiliary task to train a deep learning model containing a deep Convolutional Neural Network (CNN) that extracts spatial features, as well as a recurrent unit that captures temporal information. The model is trained using a contrastive loss and we therefore refer to this training framework and resulting model as CONtrastive VIdeo Quality EstimaTor (CONVIQT). During testing, the weights of the trained model are frozen, and a linear regressor maps the learned features to quality scores in a no-reference (NR) setting. We conduct comprehensive evaluations of the proposed model on multiple VQA databases by analyzing the correlations between model predictions and ground-truth quality ratings, and achieve competitive performance when compared to state-of-the-art NR-VQA models, even though it is not trained on those databases. Our ablation experiments demonstrate that the learned representations are highly robust and generalize well across synthetic and realistic distortions. Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning. The implementations used in this work have been made available at https://github.com/pavancm/CONVIQT.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
305,375
1901.08658
Is Pretraining Necessary for Hyperspectral Image Classification?
We address two questions for training a convolutional neural network (CNN) for hyperspectral image classification: i) is it possible to build a pre-trained network? and ii) is the pre-training effective in furthering the performance? To answer the first question, we have devised an approach that pre-trains a network on multiple source datasets that differ in their hyperspectral characteristics and fine-tunes on a target dataset. This approach effectively resolves the architectural issue that arises when transferring meaningful information between the source and the target networks. To answer the second question, we carried out several ablation experiments. Based on the experimental results, a network trained from scratch performs as well as a network fine-tuned from a pre-trained network. However, we observed that pre-training the network has its own advantage in achieving better performances when deeper networks are required.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
119,543
2312.07079
Spatial-Contextual Discrepancy Information Compensation for GAN Inversion
Most existing GAN inversion methods either achieve accurate reconstruction but lack editability or offer strong editability at the cost of fidelity. Hence, how to balance the distortion-editability trade-off is a significant challenge for GAN inversion. To address this challenge, we introduce a novel spatial-contextual discrepancy information compensation-based GAN inversion method (SDIC), which consists of a discrepancy information prediction network (DIPN) and a discrepancy information compensation network (DICN). SDIC follows a "compensate-and-edit" paradigm and successfully bridges the gap in image details between the original image and the reconstructed/edited image. On the one hand, DIPN encodes the multi-level spatial-contextual information of the original and initial reconstructed images and then predicts a spatial-contextual guided discrepancy map with two hourglass modules. In this way, a reliable discrepancy map that models the contextual relationship and captures fine-grained image details is learned. On the other hand, DICN incorporates the predicted discrepancy information into both the latent code and the GAN generator with different transformations, generating high-quality reconstructed/edited images. This effectively compensates for the loss of image details during GAN inversion. Both quantitative and qualitative experiments demonstrate that our proposed method achieves an excellent distortion-editability trade-off at a fast inference speed for both image inversion and editing tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
414,792
2307.16106
TransFusion: A Practical and Effective Transformer-based Diffusion Model for 3D Human Motion Prediction
Predicting human motion plays a crucial role in ensuring a safe and effective human-robot close collaboration in intelligent remanufacturing systems of the future. Existing works can be categorized into two groups: those focusing on accuracy, predicting a single future motion, and those generating diverse predictions based on observations. The former group fails to address the uncertainty and multi-modal nature of human motion, while the latter group often produces motion sequences that deviate too far from the ground truth or become unrealistic within historical contexts. To tackle these issues, we propose TransFusion, an innovative and practical diffusion-based model for 3D human motion prediction which can generate samples that are more likely to happen while maintaining a certain level of diversity. Our model leverages Transformer as the backbone with long skip connections between shallow and deep layers. Additionally, we employ the discrete cosine transform to model motion sequences in the frequency space, thereby improving performance. In contrast to prior diffusion-based models that utilize extra modules like cross-attention and adaptive layer normalization to condition the prediction on past observed motion, we treat all inputs, including conditions, as tokens to create a more lightweight model compared to existing approaches. Extensive experimental studies are conducted on benchmark datasets to validate the effectiveness of our human motion prediction model.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
382,474
2312.02405
BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for Training and Benchmarking Agents that Solve Fuzzy Tasks
The MineRL BASALT competition has served to catalyze advances in learning from human feedback through four hard-to-specify tasks in Minecraft, such as create and photograph a waterfall. Given the completion of two years of BASALT competitions, we offer to the community a formalized benchmark through the BASALT Evaluation and Demonstrations Dataset (BEDD), which serves as a resource for algorithm development and performance assessment. BEDD consists of a collection of 26 million image-action pairs from nearly 14,000 videos of human players completing the BASALT tasks in Minecraft. It also includes over 3,000 dense pairwise human evaluations of human and algorithmic agents. These comparisons serve as a fixed, preliminary leaderboard for evaluating newly-developed algorithms. To enable this comparison, we present a streamlined codebase for benchmarking new algorithms against the leaderboard. In addition to presenting these datasets, we conduct a detailed analysis of the data from both datasets to guide algorithm development and evaluation. The released code and data are available at https://github.com/minerllabs/basalt-benchmark .
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
412,849
2107.07453
Next-item Recommendations in Short Sessions
The changing preferences of users towards items trigger the emergence of session-based recommender systems (SBRSs), which aim to model the dynamic preferences of users for next-item recommendations. However, most of the existing studies on SBRSs are based on long sessions only for recommendations, ignoring short sessions, though short sessions, in fact, account for a large proportion in most of the real-world datasets. As a result, the applicability of existing SBRSs solutions is greatly reduced. In a short session, quite limited contextual information is available, making the next-item recommendation very challenging. To this end, in this paper, inspired by the success of few-shot learning (FSL) in effectively learning a model with limited instances, we formulate the next-item recommendation as an FSL problem. Accordingly, following the basic idea of a representative approach for FSL, i.e., meta-learning, we devise an effective SBRS called INter-SEssion collaborative Recommender netTwork (INSERT) for next-item recommendations in short sessions. With the carefully devised local module and global module, INSERT is able to learn an optimal preference representation of the current user in a given short session. In particular, in the global module, a similar session retrieval network (SSRN) is designed to find out the sessions similar to the current short session from the historical sessions of both the current user and other users, respectively. The obtained similar sessions are then utilized to complement and optimize the preference representation learned from the current short session by the local module for more accurate next-item recommendations in this short session. Extensive experiments conducted on two real-world datasets demonstrate the superiority of our proposed INSERT over the state-of-the-art SBRSs when making next-item recommendations in short sessions.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
246,429
2411.05316
Exploring the Alignment Landscape: LLMs and Geometric Deep Models in Protein Representation
Latent representation alignment has become a foundational technique for constructing multimodal large language models (MLLM) by mapping embeddings from different modalities into a shared space, often aligned with the embedding space of large language models (LLMs) to enable effective cross-modal understanding. While preliminary protein-focused MLLMs have emerged, they have predominantly relied on heuristic approaches, lacking a fundamental understanding of optimal alignment practices across representations. In this study, we explore the alignment of multimodal representations between LLMs and Geometric Deep Models (GDMs) in the protein domain. We comprehensively evaluate three state-of-the-art LLMs (Gemma2-2B, LLaMa3.1-8B, and LLaMa3.1-70B) with four protein-specialized GDMs (GearNet, GVP, ScanNet, GAT). Our work examines alignment factors from both model and protein perspectives, identifying challenges in current alignment methodologies and proposing strategies to improve the alignment process. Our key findings reveal that GDMs incorporating both graph and 3D structural information align better with LLMs, larger LLMs demonstrate improved alignment capabilities, and protein rarity significantly impacts alignment performance. We also find that increasing GDM embedding dimensions, using two-layer projection heads, and fine-tuning LLMs on protein-specific data substantially enhance alignment quality. These strategies offer potential enhancements to the performance of protein-related multimodal models. Our code and data are available at https://github.com/Tizzzzy/LLM-GDM-alignment.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
506,623
2408.12253
Epsilon: Exploring Comprehensive Visual-Semantic Projection for Multi-Label Zero-Shot Learning
This paper investigates a challenging problem of zero-shot learning in the multi-label scenario (MLZSL), wherein the model is trained to recognize multiple unseen classes within a sample (e.g., an image) based on seen classes and auxiliary knowledge, e.g., semantic information. Existing methods usually resort to analyzing the relationship of various seen classes residing in a sample from the dimension of spatial or semantic characteristics and transferring the learned model to unseen ones. However, they neglect the integrity of local and global features. Although the use of the attention structure will accurately locate local features, especially objects, it will significantly lose its integrity, and the relationship between classes will also be affected. Rough processing of global features will also directly affect comprehensiveness. This neglect will make the model lose its grasp of the main components of the image. Relying only on the local existence of seen classes during the inference stage introduces unavoidable bias. In this paper, we propose a novel and comprehensive visual-semantic framework for MLZSL, dubbed Epsilon, to fully make use of such properties and enable a more accurate and robust visual-semantic projection. In terms of spatial information, we achieve effective refinement by group aggregating image features into several semantic prompts. It can aggregate semantic information rather than class information, preserving the correlation between semantics. In terms of global semantics, we use global forward propagation to collect as much information as possible to ensure that semantics are not omitted. Experiments on large-scale MLZSL benchmark datasets NUS-Wide and Open-Images-v4 demonstrate that the proposed Epsilon outperforms other state-of-the-art methods with large margins.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
482,654
2308.15673
MDTD: A Multi Domain Trojan Detector for Deep Neural Networks
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN such that the presence of the trigger in the input results in an adversary-desired output class. Such adversarial retraining however needs to ensure that outputs for inputs without the trigger remain unaffected and provide high classification accuracy on clean samples. In this paper, we propose MDTD, a Multi-Domain Trojan Detector for DNNs, which detects inputs containing a Trojan trigger at testing time. MDTD does not require knowledge of trigger-embedding strategy of the attacker and can be applied to a pre-trained DNN model with image, audio, or graph-based inputs. MDTD leverages an insight that input samples containing a Trojan trigger are located relatively farther away from a decision boundary than clean samples. MDTD estimates the distance to a decision boundary using adversarial learning methods and uses this distance to infer whether a test-time input sample is Trojaned or not. We evaluate MDTD against state-of-the-art Trojan detection methods across five widely used image-based datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and Flowers102; four graph-based datasets: AIDS, WinMal, Toxicant, and COLLAB; and the SpeechCommand audio dataset. MDTD effectively identifies samples that contain different types of Trojan triggers. We evaluate MDTD against adaptive attacks where an adversary trains a robust DNN to increase (decrease) distance of benign (Trojan) inputs from a decision boundary.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
388,751
2412.11427
Towards Scientific Discovery with Generative AI: Progress, Opportunities, and Challenges
Scientific discovery is a complex cognitive process that has driven human knowledge and technological progress for centuries. While artificial intelligence (AI) has made significant advances in automating aspects of scientific reasoning, simulation, and experimentation, we still lack integrated AI systems capable of performing autonomous long-term scientific research and discovery. This paper examines the current state of AI for scientific discovery, highlighting recent progress in large language models and other AI techniques applied to scientific tasks. We then outline key challenges and promising research directions toward developing more comprehensive AI systems for scientific discovery, including the need for science-focused AI agents, improved benchmarks and evaluation metrics, multimodal scientific representations, and unified frameworks combining reasoning, theorem proving, and data-driven modeling. Addressing these challenges could lead to transformative AI tools to accelerate progress across disciplines towards scientific discovery.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
517,401
1804.01195
Probabilistic Contraction Analysis of Iterated Random Operators
In many branches of engineering, the Banach contraction mapping theorem is employed to establish the convergence of certain deterministic algorithms. Randomized versions of these algorithms have been developed that have proved useful in data-driven problems. In a class of randomized algorithms, in each iteration, the contraction map is approximated with an operator that uses independent and identically distributed samples of certain random variables. This leads to iterated random operators acting on an initial point in a complete metric space, and it generates a Markov chain. In this paper, we develop a new stochastic dominance based proof technique, called probabilistic contraction analysis, for establishing the convergence in probability of Markov chains generated by such iterated random operators in a certain limiting regime. The methods developed in this paper provide a general framework for understanding the convergence of a wide variety of Monte Carlo methods in which a contractive property is present. We apply the convergence result to conclude the convergence of fitted value iteration and fitted relative value iteration in continuous state and continuous action Markov decision problems as representative applications of the general framework developed here.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
94,190
2003.03384
AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
Machine learning research has advanced in multiple aspects, including model structures and learning methods. The effort to automate such research, known as AutoML, has also made significant progress. However, this progress has largely focused on the architecture of neural networks, where it has relied on sophisticated expert-designed layers as building blocks---or similarly restrictive search spaces. Our goal is to show that AutoML can go further: it is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space. Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging. Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available. We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction for the field.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
167,203
2204.00353
Impact of spatiotemporal heterogeneity in heat pump loads on generation and storage requirements
This paper investigates how spatiotemporal heterogeneity in inflexible residential heat pump loads affects the need for storage and generation in the electricity system under business-as-usual and low-carbon emissions budgets. Homogeneous and heterogeneous heat pump loads are generated using population-weighted average and local temperature, respectively, assuming complete residential heat pump penetration. The results of a storage and generation optimal expansion model with network effects for spatiotemporally homogeneous and heterogeneous load profiles are compared. A case study is performed using a 3-bus network of London, Manchester, and Glasgow in Britain for load and weather data for representative weeks. Using heterogeneous heating demand data changes storage sizing: under a business-as-usual budget, 26% more total storage is built on an energy and power basis, and this storage is distributed among all of the buses in the heterogeneous case. Under a low-carbon budget, total energy storage at all buses increases 2 times on an energy basis and 40% on a power basis. The energy to power ratio of storage at each bus also increases when accounting for heterogeneity; this change suggests that storage will be needed to provide energy support in addition to power support for electric heating in high-renewable power systems. Accounting for heterogeneity also increases modeled systems costs, particularly capital costs, because of the need for higher generation capacity in the largest load center and coincidence of local peak demand at different buses. These results show the importance of accounting for heat pump load heterogeneity in power system planning.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
289,230
1601.05533
Information Decomposition on Structured Space
We build information geometry for a partially ordered set of variables and define the orthogonal decomposition of information theoretic quantities. The natural connection between information geometry and order theory leads to efficient decomposition algorithms. This generalization of Amari's seminal work on hierarchical decomposition of probability distributions on event combinations enables us to analyze high-order statistical interactions arising in neuroscience, biology, and machine learning.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,132
2107.04154
On lattice-free boosted MMI training of HMM and CTC-based full-context ASR models
Hybrid automatic speech recognition (ASR) models are typically sequentially trained with CTC or LF-MMI criteria. However, they have vastly different legacies and are usually implemented in different frameworks. In this paper, by decoupling the concepts of modeling units and label topologies and building proper numerator/denominator graphs accordingly, we establish a generalized framework for hybrid acoustic modeling (AM). In this framework, we show that LF-MMI is a powerful training criterion applicable to both limited-context and full-context models, for wordpiece/mono-char/bi-char/chenone units, with both HMM/CTC topologies. From this framework, we propose three novel training schemes: chenone(ch)/wordpiece(wp)-CTC-bMMI, and wordpiece(wp)-HMM-bMMI with different advantages in training performance, decoding efficiency and decoding time-stamp accuracy. The advantages of different training schemes are evaluated comprehensively on Librispeech, and wp-CTC-bMMI and ch-CTC-bMMI are evaluated on two real world ASR tasks to show their effectiveness. Besides, we also show bi-char(bc) HMM-MMI models can serve as better alignment models than traditional non-neural GMM-HMMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
245,364
2005.08619
Properties of Translation Operator and the Solution of the Eigenvalue and Boundary Value Problems of Arbitrary Space-time Periodic Circuits
The time periodic circuit theory is exploited to introduce an appropriate translation operator that is invariant under the change of the spatial unit cell. Useful properties of the operator are derived. By casting the problem in an eigenvalue problem form, the equivalency between solutions at different positions along the structure is demonstrated. It is shown that the underlying mathematical machinery is identical to the one used in the analysis of linear time invariant periodic structures, where a two-step eigen-decomposition is performed. The first decomposition is in the temporal eigenfunctions basis, which is followed by the decomposition of the translation operator in the spatial domain. The two-step process results in the well-known dispersion relation. We also prove that all points in the ($\beta$,$\omega$) plane parallel to the modulation velocity are equivalent in the sense that the eigenvectors are related by a shift operator. Additionally, the wave propagation inside the space time periodic circuit and the terminal characteristics are rigorously determined via the expansion of the total solution in terms of the eigenmodes. To validate the developed framework, two examples are provided. In the first, a space time modulated composite right left handed transmission line is studied and results are compared with time domain simulation. The second example is concerned with the characterization of the non-reciprocal behaviour observed on a nonlinear transmission line that was manufactured in our lab. Using the developed machinery it is shown that the passive interaction between different harmonics results in an observed giant non-reciprocity, where the difference between the forward and backward transmission coefficients can be greater than 30 dB. The frequencies at which non-reciprocity occurs and its strength agree with time domain simulation and measurements.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
177,674
2307.09295
Learning to Select and Rank from Choice-Based Feedback: A Simple Nested Approach
We study a ranking and selection problem of learning from choice-based feedback with dynamic assortments. In this problem, a company sequentially displays a set of items to a population of customers and collects their choices as feedback. The only information available about the underlying choice model is that the choice probabilities are consistent with some unknown true strict ranking over the items. The objective is to identify, with the fewest samples, the most preferred item or the full ranking over the items at a high confidence level. We present novel and simple algorithms for both learning goals. In the first subproblem regarding best-item identification, we introduce an elimination-based algorithm, Nested Elimination (NE). In the more complex subproblem regarding full-ranking identification, we generalize NE and propose a divide-and-conquer algorithm, Nested Partition (NP). We provide strong characterizations of both algorithms through instance-specific and non-asymptotic bounds on the sample complexity. This is accomplished using an analytical framework that characterizes the system dynamics through analyzing a sequence of multi-dimensional random walks. We also establish a connection between our nested approach and the information-theoretic lower bounds. We thus show that NE is worst-case asymptotically optimal, and NP is optimal up to a constant factor. Finally, numerical experiments from both synthetic and real data corroborate our theoretical findings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
380,124
2411.02974
Region-Guided Attack on the Segment Anything Model (SAM)
The Segment Anything Model (SAM) is a cornerstone of image segmentation, demonstrating exceptional performance across various applications, particularly in autonomous driving and medical imaging, where precise segmentation is crucial. However, SAM is vulnerable to adversarial attacks that can significantly impair its functionality through minor input perturbations. Traditional techniques, such as FGSM and PGD, are often ineffective in segmentation tasks due to their reliance on global perturbations that overlook spatial nuances. Recent methods like Attack-SAM-K and UAD have begun to address these challenges, but they frequently depend on external cues and do not fully leverage the structural interdependencies within segmentation processes. This limitation underscores the need for a novel adversarial strategy that exploits the unique characteristics of segmentation tasks. In response, we introduce the Region-Guided Attack (RGA), designed specifically for SAM. RGA utilizes a Region-Guided Map (RGM) to manipulate segmented regions, enabling targeted perturbations that fragment large segments and expand smaller ones, resulting in erroneous outputs from SAM. Our experiments demonstrate that RGA achieves high success rates in both white-box and black-box scenarios, emphasizing the need for robust defenses against such sophisticated attacks. RGA not only reveals SAM's vulnerabilities but also lays the groundwork for developing more resilient defenses against adversarial threats in image segmentation.
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
false
505,742
2404.03648
AutoWebGLM: A Large Language Model-based Web Navigating Agent
Large language models (LLMs) have fueled many intelligent web agents, but most existing ones perform far from satisfactorily in real-world web navigation tasks due to three factors: (1) the complexity of HTML text data, (2) the versatility of actions on webpages, and (3) task difficulty due to the open-domain nature of the web. In light of these challenges, we develop the open AutoWebGLM based on ChatGLM3-6B. AutoWebGLM can serve as a powerful automated web navigation agent that outperforms GPT-4. Inspired by human browsing patterns, we first design an HTML simplification algorithm to represent webpages with vital information preserved succinctly. We then employ a hybrid human-AI method to build web browsing data for curriculum training. Finally, we bootstrap the model by reinforcement learning and rejection sampling to further facilitate webpage comprehension, browser operations, and efficient task decomposition by itself. For comprehensive evaluation, we establish a bilingual benchmark -- AutoWebBench -- for real-world web navigation tasks. We evaluate AutoWebGLM across diverse web navigation benchmarks, demonstrating its potential to tackle challenging tasks in real environments. Related code, model, and data are released at \url{https://github.com/THUDM/AutoWebGLM}.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
444,346
2306.07034
Enhanced Floating Isogeometric Analysis
The numerical simulation of additive manufacturing techniques promises the acceleration of costly experimental procedures to identify suitable process parameters. We recently proposed Floating Isogeometric Analysis (FLIGA), a new computational solid mechanics approach, which is mesh distortion-free in one characteristic spatial direction. FLIGA emanates from Isogeometric Analysis and its key novel aspect is the concept of deformation-dependent "floating" of individual B-spline basis functions along one parametric axis of the mesh. Our previous work showed that FLIGA not only overcomes the problem of mesh distortion associated to this direction, but is also ideally compatible with material point integration and enjoys a stability similar to that of conventional Lagrangian mesh-based methods. These features make the method applicable to the simulation of large deformation problems with history-dependent constitutive behavior, such as additive manufacturing based on polymer extrusion. In this work, we enhance the first version of FLIGA by (i) a novel quadrature scheme which further improves the robustness against mesh distortion, (ii) a procedure to automatically regulate floating of the basis functions (as opposed to the manual procedure of the first version), and (iii) an adaptive refinement strategy. We demonstrate the performance of enhanced FLIGA on relevant numerical examples including a selection of viscoelastic extrusion problems.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
372,852
1803.04579
It was the training data pruning too!
We study the current best model (KDG) for question answering on tabular data evaluated over the WikiTableQuestions dataset. Previous ablation studies performed against this model attributed the model's performance to certain aspects of its architecture. In this paper, we find that the model's performance also crucially depends on a certain pruning of the data used to train the model. Disabling the pruning step drops the accuracy of the model from 43.3% to 36.3%. The large impact on the performance of the KDG model suggests that the pruning may be a useful pre-processing step in training other semantic parsers as well.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
92,480
2208.04448
NeuralVDB: High-resolution Sparse Volume Representation using Hierarchical Neural Networks
We introduce NeuralVDB, which improves on an existing industry standard for efficient storage of sparse volumetric data, denoted VDB [Museth 2013], by leveraging recent advancements in machine learning. Our novel hybrid data structure can reduce the memory footprints of VDB volumes by orders of magnitude, while maintaining its flexibility and only incurring small (user-controlled) compression errors. Specifically, NeuralVDB replaces the lower nodes of a shallow and wide VDB tree structure with multiple hierarchical neural networks that separately encode topology and value information by means of neural classifiers and regressors respectively. This approach is proven to maximize the compression ratio while maintaining the spatial adaptivity offered by the higher-level VDB data structure. For sparse signed distance fields and density volumes, we have observed compression ratios on the order of 10x to more than 100x from already compressed VDB inputs, with little to no visual artifacts. Furthermore, NeuralVDB is shown to offer more effective compression performance compared to other neural representations such as Neural Geometric Level of Detail [Takikawa et al. 2021], Variable Bitrate Neural Fields [Takikawa et al. 2022a], and Instant Neural Graphics Primitives [M\"uller et al. 2022]. Finally, we demonstrate how warm-starting from previous frames can accelerate training, i.e., compression, of animated volumes as well as improve temporal coherency of model inference, i.e., decompression.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
312,108
1103.3673
Buffers Improve the Performance of Relay Selection
We show that the performance of relay selection can be improved by employing relays with buffers. Under the idealized assumption that no buffer is full or empty, the best source-relay and the best relay-destination channels can be simultaneously exploited by selecting the corresponding relays for reception and transmission, respectively. The resulting relay selection scheme is referred to as max-max relay selection (MMRS). Since for finite buffer sizes, empty and full buffers are practically unavoidable if MMRS is employed, we propose a hybrid relay selection (HRS) scheme, which is a combination of conventional best relay selection (BRS) and MMRS. We analyze the outage probabilities of MMRS and HRS and show that both schemes achieve the same diversity gain as conventional BRS and a superior coding gain. Furthermore, our results show that for moderate buffer sizes (e.g. 30 packets) HRS closely approaches the performance of idealized MMRS and the performance gain compared to BRS approaches 3 dB as the number of relays increases.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
9,665
1109.3994
k-means Approach to the Karhunen-Loeve Transform
We present a simultaneous generalization of the well-known Karhunen-Loeve (PCA) and k-means algorithms. The basic idea lies in approximating the data with k affine subspaces of a given dimension n. In the case n=0 we obtain the classical k-means, while for k=1 we obtain the PCA algorithm. We show that for some data exploration problems this method gives better results than either of the classical approaches.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
12,230
2004.09760
Take a NAP: Non-Autoregressive Prediction for Pedestrian Trajectories
Pedestrian trajectory prediction is a challenging task as there are three properties of human movement behaviors which need to be addressed, namely, the social influence from other pedestrians, the scene constraints, and the multimodal (multiroute) nature of predictions. Although existing methods have explored these key properties, the prediction process of these methods is autoregressive. This means they can only predict future locations sequentially. In this paper, we present NAP, a non-autoregressive method for trajectory prediction. Our method comprises specifically designed feature encoders and a latent variable generator to handle the three properties above. It also has a time-agnostic context generator and a time-specific context generator for non-autoregressive prediction. Through extensive experiments that compare NAP against several recent methods, we show that NAP has state-of-the-art trajectory prediction performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,447
2002.09142
Learning Optimal Classification Trees: Strong Max-Flow Formulations
We consider the problem of learning optimal binary classification trees. Literature on the topic has burgeoned in recent years, motivated both by the empirical suboptimality of heuristic approaches and the tremendous improvements in mixed-integer programming (MIP) technology. Yet, existing approaches from the literature do not leverage the power of MIP to its full extent. Indeed, they rely on weak formulations, resulting in slow convergence and large optimality gaps. To fill this gap in the literature, we propose a flow-based MIP formulation for optimal binary classification trees that has a stronger linear programming relaxation. Our formulation presents an attractive decomposable structure. We exploit this structure and max-flow/min-cut duality to derive a Benders' decomposition method, which scales to larger instances. We conduct extensive computational experiments on standard benchmark datasets on which we show that our proposed approaches are 50 times faster than state-of-the-art MIP-based techniques and improve out of sample performance up to 13.8%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
164,987
1809.05724
Improving Natural Language Inference Using External Knowledge in the Science Questions Domain
Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications including semantic search and question answering. The NLI problem has gained significant attention thanks to the release of large scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge -- a central topic in artificial intelligence -- has not received much attention vis-a-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness knowledge graphs to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-to-graph based models, and discuss implications for the use of external knowledge in solving the NLI problem. Our model achieves the new state-of-the-art performance on the NLI problem over the SciTail science questions dataset.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
107,856
2208.12547
Deep Hypergraph Structure Learning
Learning on high-order correlation has shown superiority in data representation learning, where hypergraph has been widely used in recent decades. The performance of hypergraph-based representation learning methods, such as hypergraph neural networks, highly depends on the quality of the hypergraph structure. How to generate the hypergraph structure among data is still a challenging task. Missing and noisy data may lead to "bad connections" in the hypergraph structure and destroy the hypergraph-based representation learning process. Therefore, revealing the high-order structure, i.e., the hypergraph behind the observed data, becomes an urgent but important task. To address this issue, we design a general paradigm of deep hypergraph structure learning, namely DeepHGSL, to optimize the hypergraph structure for hypergraph-based representation learning. Concretely, inspired by the information bottleneck principle for the robustness issue, we first extend it to the hypergraph case, named the hypergraph information bottleneck (HIB) principle. Then, we apply this principle to guide the hypergraph structure learning, where the HIB is introduced to construct the loss function to minimize the noisy information in the hypergraph structure. The hypergraph structure can be optimized and this process can be regarded as enhancing the correct connections and weakening the wrong connections in the training phase. Therefore, the proposed method helps extract more robust representations even on a heavily noisy structure. Finally, we evaluate the model on four benchmark datasets for representation learning. The experimental results on both graph- and hypergraph-structured data demonstrate the effectiveness and robustness of our method compared with other state-of-the-art methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
314,767
2202.08232
Quantum Lazy Training
In the training of over-parameterized model functions via gradient descent, sometimes the parameters do not change significantly and remain close to their initial values. This phenomenon is called lazy training, and motivates consideration of the linear approximation of the model function around the initial parameters. In the lazy regime, this linear approximation imitates the behavior of the parameterized function whose associated kernel, called the tangent kernel, specifies the training performance of the model. Lazy training is known to occur in the case of (classical) neural networks with large widths. In this paper, we show that the training of geometrically local parameterized quantum circuits enters the lazy regime for large numbers of qubits. More precisely, we prove bounds on the rate of changes of the parameters of such a geometrically local parameterized quantum circuit in the training process, and on the precision of the linear approximation of the associated quantum model function; both of these bounds tend to zero as the number of qubits grows. We support our analytic results with numerical simulations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
280,808
2106.13401
Decomposed Mutual Information Estimation for Contrastive Representation Learning
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
243,063
1508.02606
InAR: Inverse Augmented Reality
Augmented reality is the art of seamlessly fusing virtual objects into real ones. In this short note, we address the opposite problem, inverse augmented reality: given a perfectly augmented reality scene where a human is unable to distinguish real objects from virtual ones, how the machine could help do the job. We show that by structure from motion (SFM), a simple 3D reconstruction technique from images in computer vision, the real and virtual objects can be easily separated in the reconstructed 3D scene.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
45,922
2001.00920
Estimation of the yield curve for Costa Rica using combinatorial optimization metaheuristics applied to nonlinear regression
The term structure of interest rates or yield curve is a function relating the interest rate with its own term. Nonlinear regression models of Nelson-Siegel and Svensson were used to estimate the yield curve using a sample of historical data supplied by the National Stock Exchange of Costa Rica. The optimization problem involved in the estimation process of model parameters is addressed by the use of four well known combinatorial optimization metaheuristics: Ant colony optimization, Genetic algorithm, Particle swarm optimization and Simulated annealing. The aim of the study is to improve the local minima obtained by a classical quasi-Newton optimization method using a descent direction. Good results with at least two metaheuristics are achieved, Particle swarm optimization and Simulated annealing. Keywords: Yield curve, nonlinear regression, Nelson-
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
159,353
2110.13494
Meta-Learning for Multi-Label Few-Shot Classification
Even with the luxury of having abundant data, multi-label classification is widely known to be a challenging task to address. This work targets the problem of multi-label meta-learning, where a model learns to predict multiple labels within a query (e.g., an image) by just observing a few supporting examples. In doing so, we first propose a benchmark for Few-Shot Learning (FSL) with multiple labels per sample. Next, we discuss and extend several solutions specifically designed to address the conventional and single-label FSL, to work in the multi-label regime. Lastly, we introduce a neural module to estimate the label count of a given sample by exploiting the relational inference. We will show empirically the benefit of the label count module, the label propagation algorithm, and the extensions of conventional FSL methods on three challenging datasets, namely MS-COCO, iMaterialist, and Open MIC. Overall, our thorough experiments suggest that the proposed label-propagation algorithm in conjunction with the neural label count module (NLC) shall be considered as the method of choice.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
263,204
1704.07355
Accelerated Nearest Neighbor Search with Quick ADC
Efficient Nearest Neighbor (NN) search in high-dimensional spaces is a foundation of many multimedia retrieval systems. Because it offers low responses times, Product Quantization (PQ) is a popular solution. PQ compresses high-dimensional vectors into short codes using several sub-quantizers, which enables in-RAM storage of large databases. This allows fast answers to NN queries, without accessing the SSD or HDD. The key feature of PQ is that it can compute distances between short codes and high-dimensional vectors using cache-resident lookup tables. The efficiency of this technique, named Asymmetric Distance Computation (ADC), remains limited because it performs many cache accesses. In this paper, we introduce Quick ADC, a novel technique that achieves a 3 to 6 times speedup over ADC by exploiting Single Instruction Multiple Data (SIMD) units available in current CPUs. Efficiently exploiting SIMD requires algorithmic changes to the ADC procedure. Namely, Quick ADC relies on two key modifications of ADC: (i) the use of 4-bit sub-quantizers instead of the standard 8-bit sub-quantizers and (ii) the quantization of floating-point distances. This allows Quick ADC to exceed the performance of state-of-the-art systems, e.g., it achieves a Recall@100 of 0.94 in 3.4 ms on 1 billion SIFT descriptors (128-bit codes).
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
true
72,342
1810.10702
Subgradient Descent Learns Orthogonal Dictionaries
This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can provably recover orthogonal dictionaries on a natural nonsmooth, nonconvex $\ell_1$ minimization formulation of the problem, under mild statistical assumptions on the data. This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among numerous other applications. Preliminary experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
111,346
2401.06071
GroundingGPT: Language Enhanced Multi-modal Grounding Model
Multi-modal large language models have demonstrated impressive performance across various tasks in different modalities. However, existing multi-modal models primarily emphasize capturing global information within each modality while neglecting the importance of perceiving local information across modalities. Consequently, these models lack the ability to effectively understand the fine-grained details of input data, limiting their performance in tasks that require a more nuanced understanding. To address this limitation, there is a compelling need to develop models that enable fine-grained understanding across multiple modalities, thereby enhancing their applicability to a wide range of tasks. In this paper, we propose GroundingGPT, a language enhanced multi-modal grounding model. Beyond capturing global information like other multi-modal models, our proposed model excels at tasks demanding a detailed understanding of local information within the input. It demonstrates precise identification and localization of specific regions in images or moments in videos. To achieve this objective, we design a diversified dataset construction pipeline, resulting in a multi-modal, multi-granularity dataset for model training. The code, dataset, and demo of our model can be found at https://github.com/lzw-lzw/GroundingGPT.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
421,004
1812.02646
RepeatNet: A Repeat Aware Neural Recommendation Machine for Session-based Recommendation
Recurrent neural networks for session-based recommendation have attracted a lot of attention recently because of their promising performance. Repeat consumption is a common phenomenon in many recommendation scenarios (e.g., e-commerce, music, and TV program recommendations), where the same item is re-consumed repeatedly over time. However, no previous studies have emphasized repeat consumption with neural networks. An effective neural approach is needed to decide when to perform repeat recommendation. In this paper, we incorporate a repeat-explore mechanism into neural networks and propose a new model, called RepeatNet, with an encoder-decoder structure. RepeatNet integrates a regular neural recommendation approach in the decoder with a new repeat recommendation mechanism that can choose items from a user's history and recommend them at the right time. We report on extensive experiments on three benchmark datasets. RepeatNet outperforms state-of-the-art baselines on all three datasets in terms of MRR and Recall. Furthermore, as the dataset size and the repeat ratio increase, the improvements of RepeatNet over the baselines also increase, which demonstrates its advantage in handling repeat recommendation scenarios.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
115,821
2110.10318
Improved Multilingual Language Model Pretraining for Social Media Text via Translation Pair Prediction
We evaluate a simple approach to improving zero-shot multilingual transfer of mBERT on a social media corpus by adding a pretraining task called translation pair prediction (TPP), which predicts whether a pair of cross-lingual texts are a valid translation. Our approach assumes access to translations (exact or approximate) between source-target language pairs, where we fine-tune a model on source language task data and evaluate the model in the target language. In particular, we focus on language pairs where transfer learning is difficult for mBERT: those where source and target languages are different in script, vocabulary, and linguistic typology. We show improvements from TPP pretraining over mBERT alone in zero-shot transfer from English to Hindi, Arabic, and Japanese on two social media tasks: NER (a 37% average relative improvement in F1 across target languages) and sentiment classification (12% relative improvement in F1) on social media text, while also benchmarking on a non-social media task of Universal Dependency POS tagging (6.7% relative improvement in accuracy). Our results are promising given the lack of social media bitext corpora. Our code can be found at: https://github.com/twitter-research/multilingual-alignment-tpp.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
262,102
2412.00132
Road User Classification from High-Frequency GNSS Data Using Distributed Edge Intelligence
Real-world traffic involves diverse road users, ranging from pedestrians to heavy trucks, necessitating effective road user classification for various applications within Intelligent Transport Systems (ITS). Traditional approaches often rely on intrusive and/or expensive external hardware sensors. These systems typically have limited spatial coverage. In response to these limitations, this work aims to investigate an unintrusive and cost-effective alternative for road user classification by using high-frequency (1-2 Hz) positional sequences. A cutting-edge solution could involve leveraging positioning data from 5G networks. However, this feature is currently only proposed in the 3GPP standard and has not yet been implemented for outdoor applications by 5G equipment vendors. Therefore, our approach relies on positional data that is recorded under real-world conditions using Global Navigation Satellite Systems (GNSS) and processed on distributed edge devices. As a starting point, four types of road users are distinguished: pedestrians, cyclists, motorcycles, and passenger cars. While earlier approaches used classical statistical methods, we propose Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) as the preferred classification method, as they represent the state of the art in processing sequential data. An RNN architecture for road user classification, based on selected motion characteristics derived from raw positional sequences, is presented and the influence of sequence length on classification quality is examined. The results of the work show that RNNs are capable of efficiently classifying road users on distributed devices, and can particularly differentiate between types of motorized vehicles, based on two- to four-minute sequences.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
512,521
2201.09884
AutoMC: Automated Model Compression based on Domain Knowledge and Progressive search strategy
Model compression methods can reduce model complexity on the premise of maintaining acceptable performance, and thus promote the application of deep neural networks in resource-constrained environments. Despite their great success, the selection of suitable compression methods and the design of the details of the compression scheme are difficult, requiring substantial domain knowledge as support, which is not friendly to non-expert users. To give more users easy access to the model compression scheme that best meets their needs, in this paper we propose AutoMC, an effective automatic tool for model compression. AutoMC builds domain knowledge on model compression to deeply understand the characteristics and advantages of each compression method under different settings. In addition, it presents a progressive search strategy to efficiently explore Pareto-optimal compression schemes according to the learned prior knowledge combined with the historical evaluation information. Extensive experimental results show that AutoMC can provide satisfying compression schemes within a short time, demonstrating the effectiveness of AutoMC.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
276,810
2307.00474
Moments, Random Walks, and Limits for Spectrum Approximation
We study lower bounds for the problem of approximating a one dimensional distribution given (noisy) measurements of its moments. We show that there are distributions on $[-1,1]$ that cannot be approximated to accuracy $\epsilon$ in Wasserstein-1 distance even if we know \emph{all} of their moments to multiplicative accuracy $(1\pm2^{-\Omega(1/\epsilon)})$; this result matches an upper bound of Kong and Valiant [Annals of Statistics, 2017]. To obtain our result, we provide a hard instance involving distributions induced by the eigenvalue spectra of carefully constructed graph adjacency matrices. Efficiently approximating such spectra in Wasserstein-1 distance is a well-studied algorithmic problem, and a recent result of Cohen-Steiner et al. [KDD 2018] gives a method based on accurately approximating spectral moments using $2^{O(1/\epsilon)}$ random walks initiated at uniformly random nodes in the graph. As a strengthening of our main result, we show that improving the dependence on $1/\epsilon$ in this result would require a new algorithmic approach. Specifically, no algorithm can compute an $\epsilon$-accurate approximation to the spectrum of a normalized graph adjacency matrix with constant probability, even when given the transcript of $2^{\Omega(1/\epsilon)}$ random walks of length $2^{\Omega(1/\epsilon)}$ started at random nodes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
377,018
cs/0404004
Dealing With Curious Players in Secure Networks
In secure communications networks there are a great number of user behavioural problems that need to be dealt with. Curious players pose a very real and serious threat to the integrity of such a network. By traversing a network, a Curious player could uncover secret information that the user has no need to know, by simply posing as a loyalty check. Loyalty checks are done simply to gauge the integrity of the network with respect to players who act in a malicious manner. We propose a method that can deal with Curious players trying to obtain "need to know" information, using a combined fault-tolerant, cryptographic, and game-theoretic approach.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
true
538,141
1710.04552
Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling
The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11\% to 5\% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
82,496
2109.13066
Prefix-to-SQL: Text-to-SQL Generation from Incomplete User Questions
Existing text-to-SQL research only considers complete questions as the input, but lay users might struggle to formulate a complete question. To build a smarter natural language interface to database systems (NLIDB) that also processes incomplete questions, we propose a new task, prefix-to-SQL, which takes a question prefix from users as the input and predicts the intended SQL. We construct a new benchmark called PAGSAS that contains 124K user question prefixes and the intended SQL for 5 sub-tasks: Advising, GeoQuery, Scholar, ATIS, and Spider. Additionally, we propose a new metric, SAVE, to measure how much effort can be saved by users. Experimental results show that PAGSAS is challenging even for strong baseline models such as T5. As we observe that the difficulty of prefix-to-SQL is related to the number of omitted tokens, we incorporate curriculum learning, feeding examples with an increasing number of omitted tokens. This improves scores on various sub-tasks by as much as 9% in recall on the sub-task GeoQuery in PAGSAS.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
true
false
257,510