Dataset schema (column name, type, value range):
- id: string (lengths 9 to 16)
- title: string (lengths 4 to 278)
- abstract: string (lengths 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
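Read as records, each row pairs an arXiv id, title, and abstract with 18 one-hot boolean category columns. A minimal sketch of how such a row might be handled in code — the `active_categories` helper and the hand-built example row are illustrative assumptions, not part of this dump:

```python
# Category columns in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_categories(row: dict) -> list[str]:
    """Collapse the one-hot boolean columns of a row into a list of labels."""
    return [col for col in CATEGORY_COLUMNS if row.get(col)]

# A hand-built example mirroring the first record of the dump:
# every flag False except cs.LG.
row = {"id": "1905.09870", "cs.LG": True}
row.update({col: False for col in CATEGORY_COLUMNS if col not in row})

print(active_categories(row))  # → ['cs.LG']
```

Rows with several `true` flags simply yield multi-label lists, in schema column order.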

id: 1905.09870
Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Recently, several studies have proven the global convergence and generalization abilities of the gradient descent method for two-layer ReLU networks. With few exceptions, these studies have focused on regression problems with the squared loss function, and the importance of the positivity of the neural tangent kernel has been pointed out. On the other hand, the performance of gradient descent on classification problems using the logistic loss function has not been well studied, and further investigation of this problem structure is possible. In this work, we demonstrate that the separability assumption using a neural tangent model is more reasonable than the positivity condition of the neural tangent kernel and provide a refined convergence analysis of gradient descent for two-layer networks with smooth activations. A remarkable point of our result is that our convergence and generalization bounds have much better dependence on the network width in comparison to related studies. Consequently, our theory provides a generalization guarantee for less over-parameterized two-layer networks, while most studies require much higher over-parameterization.
categories: cs.LG
__index_level_0__: 131,854

id: 2302.10146
Multi-generational labour markets: data-driven discovery of multi-perspective system parameters using machine learning
Economic issues, such as inflation, energy costs, taxes, and interest rates, are a constant presence in our daily lives and have been exacerbated by global events such as pandemics, environmental disasters, and wars. A sustained history of financial crises reveals significant weaknesses and vulnerabilities in the foundations of modern economies. Another significant current issue is people quitting their jobs in large numbers. Moreover, many organizations have a diverse workforce comprising multiple generations, posing new challenges. Transformative approaches in economics and labour markets are needed to protect our societies, economies, and planet. In this work, we use big data and machine learning methods to discover multi-perspective parameters for multi-generational labour markets. The parameters for the academic perspective are discovered using 35,000 article abstracts from the Web of Science for the period 1958-2022, and for the professionals' perspective using 57,000 LinkedIn posts from 2022. We discover a total of 28 parameters and categorise them into 5 macro-parameters: Learning & Skills, Employment Sectors, Consumer Industries, Learning & Employment Issues, and Generations-specific Issues. A complete machine learning software tool is developed for data-driven parameter discovery. A variety of quantitative and visualisation methods are applied and multiple taxonomies are extracted to explore multi-generational labour markets. A knowledge structure and literature review of multi-generational labour markets using over 100 research articles is provided. It is expected that this work will enhance the theory and practice of AI-based methods for knowledge discovery and system parameter discovery to develop autonomous capabilities and systems and promote novel approaches to labour economics and markets, leading to the development of sustainable societies and economies.
categories: cs.AI
__index_level_0__: 346,686

id: 2311.04551
Earth Observation based multi-scale analysis of crop diversity in the European Union: first insights for agro-environmental policies
To understand the resilience of farms and the agricultural sector, as well as the provision of ecosystem services, we need to characterize and quantify crop diversity. Using a 10m resolution satellite-derived product, we created datasets of crop diversity across spatial and administrative scales for 27 EU countries and the UK in 2018. We define local crop diversity, or $\alpha$-diversity, at a 1km scale, corresponding to large or clusters of small-to-medium-sized farms. $\alpha$ crop diversities range from 2.3 to 4.4, with higher levels in systems with many small farms (averaging less than 10 ha). $\gamma$-diversity, the number and area of crops grown independently of location, increases from 2.85 at 1km to 3.86 at 10km, and levels off at 4.27 at 100km. These levels are higher than those reported in the U.S., possibly due to differences in farm structure and practices. $\beta$-diversity, the ratio of $\gamma$ and $\alpha$ diversities, measures the difference between agroecosystems and ranges from 1.2 to 2.3 across EU countries. We classify countries' crop diversities into four groups based on the magnitude and change of $\gamma$-diversity across scales, with implications for regional to national agro-environmental policy recommendations. Continental Copernicus crop type maps will enable temporal comparisons, and exploring ecosystem co-variates will deepen our understanding of the link between crop diversity and agro-ecosystem services.
categories: cs.CE
__index_level_0__: 406,271

id: 2409.00140
Statistical Analysis of the Impact of Quaternion Components in Convolutional Neural Networks
In recent years, several models using Quaternion-Valued Convolutional Neural Networks (QCNNs) for different problems have been proposed. Although the definition of the quaternion convolution layer is the same, there are different adaptations of other atomic components to the quaternion domain, e.g., pooling layers, activation functions, fully connected layers, etc. However, the effect of selecting a specific type of these components and the way in which their interactions affect the performance of the model remain unclear. Understanding the impact of these choices on model performance is vital for effectively utilizing QCNNs. This paper presents a statistical analysis carried out on experimental data to compare the performance of existing components for the image classification problem. In addition, we introduce a novel Fully Quaternion ReLU activation function, which exploits the unique properties of quaternion algebra to improve model performance.
categories: cs.AI, cs.LG, cs.CV, cs.NE
__index_level_0__: 484,805

id: 2008.01552
A Reinforcement Learning Method For Power Suppliers' Strategic Bidding with Insufficient Information
Power suppliers can exercise market power to gain higher profit. However, this becomes difficult when external information is extremely rare. To achieve promising performance in an extremely incomplete-information market environment, a novel model-free reinforcement learning algorithm based on Learning Automata (LA) is proposed in this paper. In addition, this paper analyses the rationality and convergence of the algorithm in case studies based on the Cournot market model.
categories: cs.SY
__index_level_0__: 190,368

id: 1611.06612
RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation
Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.
categories: cs.CV
__index_level_0__: 64,220

id: 2204.05823
Adaptive Cross-Attention-Driven Spatial-Spectral Graph Convolutional Network for Hyperspectral Image Classification
Recently, graph convolutional networks (GCNs) have been developed to explore the spatial relationship between pixels, achieving better classification performance on hyperspectral images (HSIs). However, these methods fail to sufficiently leverage the relationship between spectral bands in HSI data. As such, we propose an adaptive cross-attention-driven spatial-spectral graph convolutional network (ACSS-GCN), which is composed of a spatial GCN (Sa-GCN) subnetwork, a spectral GCN (Se-GCN) subnetwork, and a graph cross-attention fusion module (GCAFM). Specifically, Sa-GCN and Se-GCN are proposed to extract the spatial and spectral features by modeling correlations between spatial pixels and between spectral bands, respectively. Then, by integrating an attention mechanism into the information aggregation of the graph, the GCAFM, comprising three parts, i.e., a spatial graph attention block, a spectral graph attention block, and a fusion block, is designed to fuse the spatial and spectral features and suppress noise interference in Sa-GCN and Se-GCN. Moreover, the idea of an adaptive graph is introduced to explore an optimal graph through back propagation during the training process. Experiments on two HSI data sets show that the proposed method achieves better performance than other classification methods.
categories: cs.AI, cs.CV
__index_level_0__: 291,155

id: 2405.02769
Linear Convergence of Independent Natural Policy Gradient in Games with Entropy Regularization
This work focuses on the entropy-regularized independent natural policy gradient (NPG) algorithm in multi-agent reinforcement learning. Agents are assumed to have access to an oracle with exact policy evaluation and seek to maximize their respective independent rewards. Each individual's reward is assumed to depend on the actions of all the agents in the multi-agent system, leading to a game between agents. We assume all agents make decisions under a policy with bounded rationality, which is enforced by the introduction of entropy regularization. In practice, a smaller regularization implies the agents are more rational and behave closer to Nash policies. On the other hand, agents with larger regularization act more randomly, which ensures more exploration. We show that, under sufficient entropy regularization, the dynamics of this system converge at a linear rate to the quantal response equilibrium (QRE). Although regularization assumptions prevent the QRE from approximating a Nash equilibrium, our findings apply to a wide range of games, including cooperative, potential, and two-player matrix games. We also provide extensive empirical results on multiple games (including Markov games) as a verification of our theoretical analysis.
categories: cs.LG, cs.MA
__index_level_0__: 451,906

id: 1803.05795
RUSSE'2018: A Shared Task on Word Sense Induction for the Russian Language
The paper describes the results of the first shared task on word sense induction (WSI) for the Russian language. While similar shared tasks were conducted in the past for some Romance and Germanic languages, we explore the performance of sense induction and disambiguation methods for a Slavic language that shares many features with other Slavic languages, such as rich morphology and virtually free word order. The participants were asked to group contexts of a given word in accordance with its senses, which were not provided beforehand. For instance, given the word "bank" and a set of contexts for this word, e.g. "bank is a financial institution that accepts deposits" and "river bank is a slope beside a body of water", a participant was asked to cluster such contexts into a number of clusters, unknown in advance, corresponding to, in this case, the "company" and the "area" senses of the word "bank". For the purpose of this evaluation campaign, we developed three new evaluation datasets based on sense inventories that have different sense granularity. The contexts in these datasets were sampled from texts of Wikipedia, the academic corpus of Russian, and an explanatory dictionary of Russian. Overall, 18 teams participated in the competition, submitting 383 models. Multiple teams managed to substantially outperform competitive state-of-the-art baselines from the previous years based on sense embeddings.
categories: cs.CL
__index_level_0__: 92,705

id: 1907.04380
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Natural Language Inference (NLI) datasets often contain hypothesis-only biases---artifacts that allow models to achieve non-trivial performance without learning whether a premise entails a hypothesis. We propose two probabilistic methods to build models that are more robust to such biases and better transfer across datasets. In contrast to standard approaches to NLI, our methods predict the probability of a premise given a hypothesis and NLI label, discouraging models from ignoring the premise. We evaluate our methods on synthetic and existing NLI datasets by training on datasets containing biases and testing on datasets containing no (or different) hypothesis-only biases. Our results indicate that these methods can make NLI models more robust to dataset-specific artifacts, transferring better than a baseline architecture in 9 out of 12 NLI datasets. Additionally, we provide an extensive analysis of the interplay of our methods with known biases in NLI datasets, as well as the effects of encouraging models to ignore biases and fine-tuning on target datasets.
categories: cs.CL
__index_level_0__: 138,093

id: 1904.04232
A Closer Look at Few-shot Classification
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the performance differences among methods on datasets with limited domain differences, 2) a modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
categories: cs.CV
__index_level_0__: 126,977

id: 2312.01260
Rethinking PGD Attack: Is Sign Function Necessary?
Neural networks have demonstrated success in various domains, yet their performance can be significantly degraded by even a small input perturbation. Consequently, the construction of such perturbations, known as adversarial attacks, has gained significant attention, many of which fall within "white-box" scenarios where we have full access to the neural network. Existing attack algorithms, such as projected gradient descent (PGD), commonly take the sign function on the raw gradient before updating adversarial inputs, thereby neglecting gradient magnitude information. In this paper, we present a theoretical analysis of how such a sign-based update algorithm influences step-wise attack performance, as well as its caveats. We also interpret why previous attempts at directly using raw gradients failed. Based on that, we further propose a new raw gradient descent (RGD) algorithm that eliminates the use of the sign function. Specifically, we convert the constrained optimization problem into an unconstrained one by introducing a new hidden variable of non-clipped perturbation that can move beyond the constraint. The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments, outperforming PGD and other competitors in various settings, without incurring any additional computational overhead. The code is available at https://github.com/JunjieYang97/RGD.
categories: cs.LG, cs.CR
__index_level_0__: 412,377

id: 2202.13996
Risk-Neutral Market Simulation
We develop a risk-neutral spot and equity option market simulator for a single underlying, under which the joint market process is a martingale. We leverage an efficient low-dimensional representation of the market which preserves the no-static-arbitrage property, and employ neural spline flows to simulate samples which are free from conditional drifts and are highly realistic in the sense that, among all possible risk-neutral simulators, the obtained risk-neutral simulator is the closest to the historical data with respect to the Kullback-Leibler divergence. Numerical experiments demonstrate the effectiveness of the approach and highlight both the drift removal and the fidelity of the calibrated simulator.
categories: cs.LG
__index_level_0__: 282,814

id: 2305.17593
Data Minimization at Inference Time
In domains with high stakes such as law, recruitment, and healthcare, learning models frequently rely on sensitive user data for inference, necessitating the complete set of features. This not only poses significant privacy risks for individuals but also demands substantial human effort from organizations to verify information accuracy. This paper asks whether it is necessary to use all input features for accurate predictions at inference time. The paper demonstrates that, in a personalized setting, individuals may only need to disclose a small subset of their features without compromising decision-making accuracy. The paper also provides an efficient sequential algorithm to determine the appropriate attributes for each individual to provide. Evaluations across various learning tasks show that individuals can potentially report as little as 10% of their information while maintaining the same accuracy level as a model that employs the full set of user information.
categories: cs.AI, cs.LG
__index_level_0__: 368,667

id: 2408.17168
EMHI: A Multimodal Egocentric Human Motion Dataset with HMD and Body-Worn IMUs
Egocentric human pose estimation (HPE) using wearable sensors is essential for VR/AR applications. Most methods rely solely on either egocentric-view images or sparse Inertial Measurement Unit (IMU) signals, leading to inaccuracies due to self-occlusion in images or the sparseness and drift of inertial sensors. Most importantly, the lack of real-world datasets containing both modalities is a major obstacle to progress in this field. To overcome the barrier, we propose EMHI, a multimodal Egocentric human Motion dataset with Head-Mounted Display (HMD) and body-worn IMUs, with all data collected under a real VR product suite. Specifically, EMHI provides synchronized stereo images from downward-sloping cameras on the headset and IMU data from body-worn sensors, along with pose annotations in SMPL format. This dataset consists of 885 sequences captured by 58 subjects performing 39 actions, totaling about 28.5 hours of recording. We evaluate the annotations by comparing them with optical marker-based SMPL fitting results. To substantiate the reliability of our dataset, we introduce MEPoser, a new baseline method for multimodal egocentric HPE, which employs a multimodal fusion encoder, temporal feature encoder, and MLP-based regression heads. The experiments on EMHI show that MEPoser outperforms existing single-modal methods and demonstrates the value of our dataset in solving the problem of egocentric HPE. We believe the release of EMHI and the method could advance the research of egocentric HPE and expedite the practical implementation of this technology in VR/AR products.
categories: cs.CV
__index_level_0__: 484,606

id: 1409.6022
Exact Analysis of k-Connectivity in Secure Sensor Networks with Unreliable Links
The Eschenauer--Gligor (EG) random key predistribution scheme has been widely recognized as a typical approach to secure communications in wireless sensor networks (WSNs). However, there is a lack of precise probability analysis on the reliable connectivity of WSNs under the EG scheme. To address this, we rigorously derive the asymptotically exact probability of $k$-connectivity in WSNs employing the EG scheme with unreliable links represented by independent on/off channels, where $k$-connectivity ensures that the network remains connected despite the failure of any $(k-1)$ sensors or links. Our analytical results are confirmed via numerical experiments, and they provide precise guidelines for the design of secure WSNs that exhibit a desired level of reliability against node and link failures.
categories: cs.SI, Other
__index_level_0__: 36,212

id: 2404.00068
A Data-Driven Predictive Analysis on Cyber Security Threats with Key Risk Factors
Cyber risk refers to the risk of reputational damage, monetary losses, or disruption to an organization or individuals, and it usually arises from the unconscious use of cyber systems. Cyber risk is steadily increasing day by day and is now a global threat. Developing countries like Bangladesh face major cyber risk challenges. The growing cyber threat worldwide focuses attention on the need for effective modeling to predict and manage the associated risk. This paper exhibits a Machine Learning (ML) based model for predicting individuals who may be victims of cyber attacks by analyzing socioeconomic factors. We collected the dataset from victims and non-victims of cyberattacks based on socio-demographic features. The study involved the development of a questionnaire to gather data, which was then used to measure the significance of features. Through data augmentation, the dataset was expanded to encompass 3286 entries, setting the stage for our investigation and modeling. Among several ML models with 19, 20, 21, and 26 features, we proposed a novel Pertinent Features Random Forest (RF) model, which achieved maximum accuracy with 20 features (95.95%) and also demonstrated the association among the selected features using the Apriori algorithm with Confidence (above 80%) according to the victim. We generated 10 important association rules and presented the framework that is rigorously evaluated on real-world datasets, demonstrating its potential to predict cyberattacks and associated risk factors effectively. Looking ahead, future efforts will be directed toward refining the predictive model's precision and delving into additional risk factors to fortify the proposed framework's efficacy in navigating the complex terrain of cybersecurity threats.
categories: cs.LG, cs.CR
__index_level_0__: 442,751

id: 2209.10179
Reconstructing Robot Operations via Radio-Frequency Side-Channel
Connected teleoperated robotic systems play a key role in ensuring operational workflows are carried out with high levels of accuracy and low margins of error. In recent years, a variety of attacks have been proposed that actively target the robot itself from the cyber domain. However, little attention has been paid to the capabilities of a passive attacker. In this work, we investigate whether an insider adversary can accurately fingerprint robot movements and operational warehousing workflows via the radio frequency side channel in a stealthy manner. Using an SVM for classification, we found that an adversary can fingerprint individual robot movements with at least 96% accuracy, increasing to near perfect accuracy when reconstructing entire warehousing workflows.
categories: cs.LG, cs.RO, cs.CR
__index_level_0__: 318,781

id: 2107.13362
Graph Constrained Data Representation Learning for Human Motion Segmentation
Recently, transfer subspace learning based approaches have been shown to be a valid alternative to unsupervised subspace clustering and temporal data clustering for human motion segmentation (HMS). These approaches leverage prior knowledge from a source domain to improve clustering performance on a target domain, and currently they represent the state of the art in HMS. Bucking this trend, in this paper, we propose a novel unsupervised model that learns a representation of the data and digs clustering information from the data itself. Our model is reminiscent of temporal subspace clustering, but presents two critical differences. First, we learn an auxiliary data matrix that can deviate from the initial data, hence conferring more degrees of freedom to the coding matrix. Second, we introduce a regularization term for this auxiliary data matrix that preserves the local geometrical structure present in the high-dimensional space. The proposed model is efficiently optimized by using an original Alternating Direction Method of Multipliers (ADMM) formulation, allowing us to jointly learn the auxiliary data representation, a nonnegative dictionary and a coding matrix. Experimental results on four benchmark datasets for HMS demonstrate that our approach achieves significantly better clustering performance than state-of-the-art methods, including both unsupervised and more recent semi-supervised transfer learning approaches.
categories: cs.CV
__index_level_0__: 248,177

id: 2401.03753
Color-$S^{4}L$: Self-supervised Semi-supervised Learning with Image Colorization
This work addresses the problem of semi-supervised image classification with the integration of several effective self-supervised pretext tasks. Different from the widely-used consistency regularization within semi-supervised learning, we explore a novel self-supervised semi-supervised learning framework (Color-$S^{4}L$), especially with an image colorization proxy task, and deeply evaluate the performance of various network architectures in this special pipeline. We also demonstrate its effectiveness and optimal performance on the CIFAR-10, SVHN and CIFAR-100 datasets in comparison to previous optimal supervised and semi-supervised methods.
categories: cs.CV
__index_level_0__: 420,217

id: 2206.05225
ClamNet: Using contrastive learning with variable depth Unets for medical image segmentation
Unets have become the standard method for semantic segmentation of medical images, along with fully convolutional networks (FCN). Unet++ was introduced as a variant of Unet, in order to solve some of the problems facing Unet and FCNs. Unet++ provided networks with an ensemble of variable depth Unets, hence eliminating the need for professionals estimating the best suitable depth for a task. While Unet and all its variants, including Unet++, aimed at providing networks that were able to train well without requiring large quantities of annotated data, none of them attempted to eliminate the need for pixel-wise annotated data altogether. Obtaining such data for each disease to be diagnosed comes at a high cost. Hence such data is scarce. In this paper we use contrastive learning to train Unet++ for semantic segmentation of medical images using medical images from various sources including magnetic resonance imaging (MRI) and computed tomography (CT), without the need for pixel-wise annotations. Here we describe the architecture of the proposed model and the training method used. This is still a work in progress and so we abstain from including results in this paper. The results and the trained model will be made available upon publication or in subsequent versions of this paper on arXiv.
categories: cs.CV
__index_level_0__: 301,928

id: 2212.04325
Lattice-Free Sequence Discriminative Training for Phoneme-Based Neural Transducers
Recently, RNN-Transducers have achieved remarkable results on various automatic speech recognition tasks. However, lattice-free sequence discriminative training methods, which obtain superior performance in hybrid models, are rarely investigated in RNN-Transducers. In this work, we propose three lattice-free training objectives, namely lattice-free maximum mutual information, lattice-free segment-level minimum Bayes risk, and lattice-free minimum Bayes risk, which are used for the final posterior output of the phoneme-based neural transducer with a limited context dependency. Compared to criteria using N-best lists, lattice-free methods eliminate the decoding step for hypotheses generation during training, which leads to more efficient training. Experimental results show that lattice-free methods gain up to 6.5% relative improvement in word error rate compared to a sequence-level cross-entropy trained model. Compared to the N-best-list based minimum Bayes risk objectives, lattice-free methods gain 40% - 70% relative training time speedup with a small degradation in performance.
categories: cs.SD, cs.AI, cs.LG, cs.CL
__index_level_0__: 335,407

id: 2307.06824
CLAIMED -- the open source framework for building coarse-grained operators for accelerated discovery in science
In modern data-driven science, reproducibility and reusability are key challenges. Scientists are well skilled in the process from data to publication. Although some publication channels require source code and data to be made accessible, rerunning and verifying experiments is usually hard due to a lack of standards. Therefore, reusing existing scientific data processing code from state-of-the-art research is hard as well. This is why we introduce CLAIMED, which has a proven track record in scientific research for addressing the repeatability and reusability issues in modern data-driven science. CLAIMED is a framework to build reusable operators and scalable scientific workflows by supporting the scientist to draw from previous work by re-composing workflows from existing libraries of coarse-grained scientific operators. Although various implementations exist, CLAIMED is programming language, scientific library, and execution environment agnostic.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
379,184
2105.12006
The incel lexicon: Deciphering the emergent cryptolect of a global misogynistic community
Evolving out of a gender-neutral framing of an involuntary celibate identity, the concept of `incels' has come to refer to an online community of men who bear antipathy towards themselves, women, and society-at-large for their perceived inability to find and maintain sexual relationships. By exploring incel language use on Reddit, a global online message board, we contextualize the incel community's online expressions of misogyny and real-world acts of violence perpetrated against women. After assembling around three million comments from incel-themed Reddit channels, we analyze the temporal dynamics of a data driven rank ordering of the glossary of phrases belonging to an emergent incel lexicon. Our study reveals the generation and normalization of an extensive coded misogynist vocabulary in service of the group's identity.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
236,886
1902.03361
Image Decomposition and Classification through a Generative Model
We demonstrate in this paper that a generative model can be designed to perform classification tasks under challenging settings, including adversarial attacks and input distribution shifts. Specifically, we propose a conditional variational autoencoder that learns both the decomposition of inputs and the distributions of the resulting components. During test, we jointly optimize the latent variables of the generator and the relaxed component labels to find the best match between the given input and the output of the generator. The model demonstrates promising performance at recognizing overlapping components from the multiMNIST dataset, and novel component combinations from a traffic sign dataset. Experiments also show that the proposed model achieves high robustness on MNIST and NORB datasets, in particular for high-strength gradient attacks and non-gradient attacks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
121,080
1905.04511
Unified Generator-Classifier for Efficient Zero-Shot Learning
Generative models have achieved state-of-the-art performance for the zero-shot learning problem, but they require re-training the classifier every time a new object category is encountered. The traditional semantic embedding approaches, though very elegant, usually do not perform at par with their generative counterparts. In this work, we propose a unified framework termed GenClass, which integrates the generator with the classifier for efficient zero-shot learning, thus combining the representative power of the generative approaches and the elegance of the embedding approaches. End-to-end training of the unified framework not only eliminates the requirement of an additional classifier for new object categories as in the generative approaches, but also facilitates the generation of more discriminative and useful features. Extensive evaluation on three standard zero-shot object classification datasets, namely AWA, CUB and SUN shows the effectiveness of the proposed approach. The approach without any modification, also gives state-of-the-art performance for zero-shot action classification, thus showing its generalizability to other domains.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
130,489
2108.06017
AGKD-BML: Defense Against Adversarial Attack by Attention Guided Knowledge Distillation and Bi-directional Metric Learning
While deep neural networks have shown impressive performance in many tasks, they are fragile to carefully designed adversarial attacks. We propose a novel adversarial training-based model by Attention Guided Knowledge Distillation and Bi-directional Metric Learning (AGKD-BML). The attention knowledge is obtained from a weight-fixed model trained on a clean dataset, referred to as a teacher model, and transferred to a model that is under training on adversarial examples (AEs), referred to as a student model. In this way, the student model is able to focus on the correct region, as well as correcting the intermediate features corrupted by AEs to eventually improve the model accuracy. Moreover, to efficiently regularize the representation in feature space, we propose a bidirectional metric learning. Specifically, given a clean image, it is first attacked to its most confusing class to get the forward AE. A clean image in the most confusing class is then randomly picked and attacked back to the original class to get the backward AE. A triplet loss is then used to shorten the representation distance between the original image and its AE, while enlarging that between the forward and backward AEs. We conduct extensive adversarial robustness experiments on two widely used datasets with different attacks. Our proposed AGKD-BML model consistently outperforms the state-of-the-art approaches. The code of AGKD-BML will be available at: https://github.com/hongw579/AGKD-BML.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
250,480
2502.10723
A Mathematics Framework of Artificial Shifted Population Risk and Its Further Understanding Related to Consistency Regularization
Data augmentation is an important technique in training deep neural networks as it enhances their ability to generalize and remain robust. While data augmentation is commonly used to expand the sample size and act as a consistency regularization term, there is a lack of research on the relationship between them. To address this gap, this paper introduces a more comprehensive mathematical framework for data augmentation. Through this framework, we establish that the expected risk of the shifted population is the sum of the original population risk and a gap term, which can be interpreted as a consistency regularization term. The paper also provides a theoretical understanding of this gap, highlighting its negative effects on the early stages of training. We also propose a method to mitigate these effects. To validate our approach, we conducted experiments using the same data augmentation techniques and computing resources under several scenarios, including standard training, out-of-distribution, and imbalanced classification. The results demonstrate that our methods surpass compared methods under all scenarios in terms of generalization ability and convergence stability. We provide our code implementation at the following link: https://github.com/ydlsfhll/ASPR.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
534,022
1904.08159
3D Object Recognition with Ensemble Learning --- A Study of Point Cloud-Based Deep Learning Models
In this study, we present an analysis of model-based ensemble learning for 3D point-cloud object classification and detection. An ensemble of multiple model instances is known to outperform a single model instance, but there is little study of the topic of ensemble learning for 3D point clouds. First, an ensemble of multiple model instances trained on the same part of the $\textit{ModelNet40}$ dataset was tested for seven deep learning, point cloud-based classification algorithms: $\textit{PointNet}$, $\textit{PointNet++}$, $\textit{SO-Net}$, $\textit{KCNet}$, $\textit{DeepSets}$, $\textit{DGCNN}$, and $\textit{PointCNN}$. Second, the ensemble of different architectures was tested. Results of our experiments show that the tested ensemble learning methods improve over state-of-the-art on the $\textit{ModelNet40}$ dataset, from $92.65\%$ to $93.64\%$ for the ensemble of single architecture instances, $94.03\%$ for two different architectures, and $94.15\%$ for five different architectures. We show that the ensemble of two models with different architectures can be as effective as the ensemble of 10 models with the same architecture. Third, a study on classic bagging (i.e., with different subsets used for training multiple model instances) was tested and sources of ensemble accuracy growth were investigated for the best-performing architecture, i.e. $\textit{SO-Net}$. We also investigate the ensemble learning of the $\textit{Frustum PointNet}$ approach in the task of 3D object detection, increasing the average precision of 3D box detection on the $\textit{KITTI}$ dataset from $63.1\%$ to $66.5\%$ using only three model instances. We measure the inference time of all 3D classification architectures on a $\textit{Nvidia Jetson TX2}$, a common embedded computer for mobile robots, to allude to the use of these models in real-life applications.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
127,994
2306.17193
Uncovering the Limits of Machine Learning for Automatic Vulnerability Detection
Recent results of machine learning for automatic vulnerability detection (ML4VD) have been very promising. Given only the source code of a function $f$, ML4VD techniques can decide if $f$ contains a security flaw with up to 70% accuracy. However, as evident in our own experiments, the same top-performing models are unable to distinguish between functions that contain a vulnerability and functions where the vulnerability is patched. So, how can we explain this contradiction and how can we improve the way we evaluate ML4VD techniques to get a better picture of their actual capabilities? In this paper, we identify overfitting to unrelated features and out-of-distribution generalization as two problems, which are not captured by the traditional approach of evaluating ML4VD techniques. As a remedy, we propose a novel benchmarking methodology to help researchers better evaluate the true capabilities and limits of ML4VD techniques. Specifically, we propose (i) to augment the training and validation dataset according to our cross-validation algorithm, where a semantic preserving transformation is applied during the augmentation of either the training set or the testing set, and (ii) to augment the testing set with code snippets where the vulnerabilities are patched. Using six ML4VD techniques and two datasets, we find (a) that state-of-the-art models severely overfit to unrelated features for predicting the vulnerabilities in the testing data, (b) that the performance gained by data augmentation does not generalize beyond the specific augmentations applied during training, and (c) that state-of-the-art ML4VD techniques are unable to distinguish vulnerable functions from their patches.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
376,630
1109.5560
Temporal effects in the growth of networks
We show that to explain the growth of the citation network by preferential attachment (PA), one has to accept that individual nodes exhibit heterogeneous fitness values that decay with time. While previous PA-based models assumed either heterogeneity or decay in isolation, we propose a simple analytically treatable model that combines these two factors. Depending on the input assumptions, the resulting degree distribution shows an exponential, log-normal or power-law decay, which makes the model an apt candidate for modeling a wide range of real systems.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
12,330
2502.07809
Analyzing the Resource Utilization of Lambda Functions on Mobile Devices: Case Studies on Kotlin and Swift
With billions of smartphones in use globally, the daily time spent on these devices contributes significantly to overall electricity consumption. Given this scale, even minor reductions in smartphone power use could result in substantial energy savings. This study explores the impact of Lambda functions on resource consumption in mobile programming. While Lambda functions are known for enhancing code readability and conciseness, their use does not add to the functional capabilities of a programming language. Our research investigates the implications of using Lambda functions in terms of battery utilization, memory usage, and execution time compared to equivalent code structures without Lambda functions. Our findings reveal that Lambda functions impose a considerable resource overhead on mobile devices without offering additional functionalities.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
532,766
2403.11304
Pioneering SE(2)-Equivariant Trajectory Planning for Automated Driving
Planning the trajectory of the controlled ego vehicle is a key challenge in automated driving. As for human drivers, predicting the motions of surrounding vehicles is important to plan the own actions. Recent motion prediction methods utilize equivariant neural networks to exploit geometric symmetries in the scene. However, no existing method combines motion prediction and trajectory planning in a joint step while guaranteeing equivariance under roto-translations of the input space. We address this gap by proposing a lightweight equivariant planning model that generates multi-modal joint predictions for all vehicles and selects one mode as the ego plan. The equivariant network design improves sample efficiency, guarantees output stability, and reduces model parameters. We further propose equivariant route attraction to guide the ego vehicle along a high-level route provided by an off-the-shelf GPS navigation system. This module creates a momentum from embedded vehicle positions toward the route in latent space while keeping the equivariance property. Route attraction enables goal-oriented behavior without forcing the vehicle to stick to the exact route. We conduct experiments on the challenging nuScenes dataset to investigate the capability of our planner. The results show that the planned trajectory is stable under roto-translations of the input scene which demonstrates the equivariance of our model. Despite using only a small split of the dataset for training, our method improves L2 distance at 3 s by 20.6 % and surpasses the state of the art.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
true
false
false
false
438,632
1011.1212
CplexA: a Mathematica package to study macromolecular-assembly control of gene expression
Summary: Macromolecular assembly coordinates essential cellular processes, such as gene regulation and signal transduction. A major challenge for conventional computational methods to study these processes is tackling the exponential increase of the number of configurational states with the number of components. CplexA is a Mathematica package that uses functional programming to efficiently compute probabilities and average properties over such exponentially large number of states from the energetics of the interactions. The package is particularly suited to study gene expression at complex promoters controlled by multiple, local and distal, DNA binding sites for transcription factors. Availability: CplexA is freely available together with documentation at http://sourceforge.net/projects/cplexa/.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
8,139
2404.06484
Public-private funding models in open source software development: A case study on scikit-learn
Governments are increasingly funding open source software (OSS) development to support software security, digital sovereignty, and national competitiveness in science and innovation, amongst others. However, little is known about how OSS developers evaluate the relative benefits and drawbacks of governmental funding for OSS. This study explores this question through a case study on scikit-learn, a Python library for machine learning, funded by public research grants, commercial sponsorship, micro-donations, and a 32 million euro grant announced in France's artificial intelligence strategy. Through 25 interviews with scikit-learn's maintainers and funders, this study makes two key contributions. First, it contributes empirical findings about the benefits and drawbacks of public and private funding in an impactful OSS project, and the governance protocols employed by the maintainers to balance the diverse interests of their community and funders. Second, it offers practical lessons on funding for OSS developers, governments, and companies based on the experience of scikit-learn. The paper concludes with key recommendations for practitioners and future research directions.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
true
445,486
1811.08790
Learning Quadratic Games on Networks
Individuals, or organizations, cooperate with or compete against one another in a wide range of practical situations. Such strategic interactions are often modeled as games played on networks, where an individual's payoff depends not only on her action but also on that of her neighbors. The current literature has largely focused on analyzing the characteristics of network games in the scenario where the structure of the network, which is represented by a graph, is known beforehand. It is often the case, however, that the actions of the players are readily observable while the underlying interaction network remains hidden. In this paper, we propose two novel frameworks for learning, from the observations on individual actions, network games with linear-quadratic payoffs, and in particular, the structure of the interaction network. Our frameworks are based on the Nash equilibrium of such games and involve solving a joint optimization problem for the graph structure and the individual marginal benefits. Both synthetic and real-world experiments demonstrate the effectiveness of the proposed frameworks, which have theoretical as well as practical implications for understanding strategic interactions in a network environment.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
true
114,126
2009.06680
SML: Semantic Meta-learning for Few-shot Semantic Segmentation
The significant amount of training data required for training Convolutional Neural Networks has become a bottleneck for applications like semantic segmentation. Few-shot semantic segmentation algorithms address this problem, with an aim to achieve good performance in the low-data regime, with few annotated training images. Recently, approaches based on class-prototypes computed from available training data have achieved immense success for this task. In this work, we propose a novel meta-learning framework, Semantic Meta-Learning (SML) which incorporates class level semantic descriptions in the generated prototypes for this problem. In addition, we propose to use the well established technique, ridge regression, to not only bring in the class-level semantic information, but also to effectively utilise the information available from multiple images present in the training data for prototype computation. This has a simple closed-form solution, and thus can be implemented easily and efficiently. Extensive experiments on the benchmark PASCAL-5i dataset under different experimental settings show the effectiveness of the proposed framework.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
195,711
1805.12238
High-Quality Disjoint and Overlapping Community Structure in Large-Scale Complex Networks
In this paper, we propose an improved version of an agglomerative hierarchical clustering algorithm that performs disjoint community detection in large-scale complex networks. The improved algorithm is achieved after replacing the local structural similarity used in the original algorithm, with the recently proposed Dynamic Structural Similarity. Additionally, the improved algorithm is extended to detect fuzzy and crisp overlapping community structure. The extended algorithm leverages the disjoint community structure generated by itself and the dynamic structural similarity measures, to compute a proposed membership probability function that defines the fuzzy communities. Moreover, an experimental evaluation is performed on reference benchmark graphs in order to compare the proposed algorithms with the state-of-the-art.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
99,121
2010.05350
Google Landmark Recognition 2020 Competition Third Place Solution
We present our third place solution to the Google Landmark Recognition 2020 competition. It is an ensemble of global features only Sub-center ArcFace models. We introduce dynamic margins for ArcFace loss, a family of tune-able margin functions of class size, designed to deal with the extreme imbalance in GLDv2 dataset. Progressive finetuning and careful postprocessing are also key to the solution. Our two submissions scored 0.6344 and 0.6289 on private leaderboard, both ranking third place out of 736 teams.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
200,089
2502.12476
CoCo-CoLa: Evaluating Language Adherence in Multilingual LLMs
Multilingual Large Language Models (LLMs) develop cross-lingual abilities despite being trained on limited parallel data. However, they often struggle to generate responses in the intended language, favoring high-resource languages such as English. In this work, we introduce CoCo-CoLa (Correct Concept - Correct Language), a novel metric to evaluate language adherence in multilingual LLMs. Using fine-tuning experiments on a closed-book QA task across seven languages, we analyze how training in one language affects others' performance. Our findings reveal that multilingual models share task knowledge across languages but exhibit biases in the selection of output language. We identify language-specific layers, showing that final layers play a crucial role in determining output language. Accordingly, we propose a partial training strategy that selectively fine-tunes key layers, improving language adherence while significantly reducing computational cost. Our method achieves comparable or superior performance to full fine-tuning, particularly for low-resource languages, offering a more efficient multilingual adaptation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
534,884
2111.10824
A Blockchain-Based Approach for Collaborative Formalization of Mathematics and Programs
Formalization of mathematics is the process of digitizing mathematical knowledge, which allows for formal proof verification as well as efficient semantic searches. Given the large and ever-increasing gap between the set of formalized and unformalized mathematical knowledge, there is a clear need to encourage more computer scientists and mathematicians to solve and formalize mathematical problems together. With blockchain technology, we are able to decentralize this process, provide time-stamped verification of authorship and encourage collaboration through implementation of incentive mechanisms via smart contracts. Currently, the formalization of mathematics is done through the use of proof assistants, which can be used to verify programs and protocols as well. Furthermore, with the advancement in artificial intelligence (AI), particularly machine learning, we can apply automated AI reasoning tools in these proof assistants and (at least partially) automate the process of synthesizing proofs. In our paper, we demonstrate a blockchain-based system for collaborative formalization of mathematics and programs incorporating both human labour as well as automated AI tools. We explain how Token-Curated Registries (TCR) and smart contracts are used to ensure appropriate documents are recorded and encourage collaboration through implementation of incentive mechanisms respectively. Using an illustrative example, we show how formalized proofs of different sorting algorithms can be produced collaboratively in our proposed blockchain system.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
267,459
1906.10489
Keep soft robots soft -- a data-driven based trade-off between feed-forward and feedback control
Tracking control for soft robots is challenging due to uncertainties in the system model and environment. Using high feedback gains to overcome this issue results in an increasing stiffness that clearly destroys the inherent safety property of soft robots. However, accurate models for feed-forward control are often difficult to obtain. In this article, we employ Gaussian Process regression to obtain a data-driven model that is used for the feed-forward compensation of unknown dynamics. The model fidelity is used to adapt the feed-forward and feedback part allowing low feedback gains in regions of high model confidence.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
136,445
2409.09715
Generative Semantic Communication via Textual Prompts: Latency Performance Tradeoffs
This paper develops an edge-device collaborative Generative Semantic Communications (Gen SemCom) framework leveraging pre-trained Multi-modal/Vision Language Models (M/VLMs) for ultra-low-rate semantic communication via textual prompts. The proposed framework optimizes the use of M/VLMs on the wireless edge/device to generate high-fidelity textual prompts through visual captioning/question answering, which are then transmitted over a wireless channel for SemCom. Specifically, we develop a multi-user Gen SemCom framework using pre-trained M/VLMs, and formulate a joint optimization problem of prompt generation offloading, communication and computation resource allocation to minimize the latency and maximize the resulting semantic quality. Due to the nonconvex nature of the problem with highly coupled discrete and continuous variables, we decompose it as a two-level problem and propose a low-complexity swap/leaving/joining (SLJ)-based matching algorithm. Simulation results demonstrate significant performance improvements over the conventional semantic-unaware/non-collaborative offloading benchmarks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
488,439
1805.06158
Investigating the Agility Bias in DNS Graph Mining
The concept of agile domain name system (DNS) refers to dynamic and rapidly changing mappings between domain names and their Internet protocol (IP) addresses. This empirical paper evaluates the bias from this kind of agility for DNS-based graph theoretical data mining applications. By building on two conventional metrics for observing malicious DNS agility, the agility bias is observed by comparing bipartite DNS graphs to different subgraphs from which vertices and edges are removed according to two criteria. According to an empirical experiment with two longitudinal DNS datasets, irrespective of the criterion, the agility bias is observed to be severe particularly regarding the effect of outlying domains hosted and delivered via content delivery networks and cloud computing services. With these observations, the paper contributes to the research domains of cyber security and DNS mining. In a larger context of applied graph mining, the paper further elaborates the practical concerns related to the learning of large and dynamic bipartite graphs.
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
true
97,549
2105.01289
Representation Learning for Clustering via Building Consensus
In this paper, we focus on unsupervised representation learning for clustering of images. Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must be close in the representation space (exemplar consistency), and/or similar images must have similar cluster assignments (population consistency). We define an additional notion of consistency, consensus consistency, which ensures that representations are learned to induce similar partitions for variations in the representation space, different clustering algorithms or different initializations of a single clustering algorithm. We define a clustering loss by executing variations in the representation space and seamlessly integrate all three consistencies (consensus, exemplar and population) into an end-to-end learning framework. The proposed algorithm, consensus clustering using unsupervised representation learning (ConCURL), improves upon the clustering performance of state-of-the-art methods on four out of five image datasets. Furthermore, we extend the evaluation procedure for clustering to reflect the challenges encountered in real-world clustering tasks, such as maintaining clustering performance in cases with distribution shifts. We also perform a detailed ablation study for a deeper understanding of the proposed algorithm. The code and the trained models are available at https://github.com/JayanthRR/ConCURL_NCE.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
233,484
2305.00221
Sensor Equivariance by LiDAR Projection Images
In this work, we propose an extension of conventional image data by an additional channel in which the associated projection properties are encoded. This addresses the issue of sensor-dependent object representation in projection-based sensors, such as LiDAR, which can lead to distorted physical and geometric properties due to variations in sensor resolution and field of view. To that end, we propose an architecture for processing this data in an instance segmentation framework. We focus specifically on LiDAR as a key sensor modality for machine vision tasks and highly automated driving (HAD). Through an experimental setup in a controlled synthetic environment, we identify a bias on sensor resolution and field of view and demonstrate that our proposed method can reduce said bias for the task of LiDAR instance segmentation. Furthermore, we define our method such that it can be applied to other projection-based sensors, such as cameras. To promote transparency, we make our code and dataset publicly available. This method shows the potential to improve performance and robustness in various machine vision tasks that utilize projection-based sensors.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
361,258
2408.12598
ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction
Neural implicit reconstruction via volume rendering has demonstrated its effectiveness in recovering dense 3D surfaces. However, it is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics. To address this issue, previous methods typically employ geometric priors, which are often constrained by the performance of the prior models. In this paper, we propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal. Unlike previous methods that uniformly apply geometric priors on all samples, introducing significant bias in accuracy, our proposed normal deflection field dynamically learns and adapts the utilization of samples based on their specific characteristics, thereby improving both the accuracy and effectiveness of the model. Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures. In addition, we introduce a novel ray sampling strategy based on the deflection angle to facilitate the unbiased rendering process, which significantly improves the quality and accuracy of intricate surfaces, especially on thin structures. Consistent improvements on various challenging datasets demonstrate the superiority of our method.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
482,797
2106.13109
Machine learning to tame divergent density functional approximations: a new path to consensus materials design principles
Computational virtual high-throughput screening (VHTS) with density functional theory (DFT) and machine-learning (ML)-acceleration is essential in rapid materials discovery. By necessity, efficient DFT-based workflows are carried out with a single density functional approximation (DFA). Nevertheless, properties evaluated with different DFAs can be expected to disagree for the cases with challenging electronic structure (e.g., open shell transition metal complexes, TMCs) for which rapid screening is most needed and accurate benchmarks are often unavailable. To quantify the effect of DFA bias, we introduce an approach to rapidly obtain property predictions from 23 representative DFAs spanning multiple families and "rungs" (e.g., semi-local to double hybrid) and basis sets on over 2,000 TMCs. Although computed properties (e.g., spin-state ordering and frontier orbital gap) naturally differ by DFA, high linear correlations persist across all DFAs. We train independent ML models for each DFA and observe convergent trends in feature importance; these features thus provide DFA-invariant, universal design rules. We devise a strategy to train ML models informed by all 23 DFAs and use them to predict properties (e.g., spin-splitting energy) of over 182k TMCs. By requiring consensus of the ANN-predicted DFA properties, we improve correspondence of these computational lead compounds with literature-mined, experimental compounds over the single-DFA approach typically employed. Both feature analysis and consensus-based ML provide efficient, alternative paths to overcome accuracy limitations of practical DFT.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
242,970
1803.01500
Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks
We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past generated samples by generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GAN training can be largely alleviated by a learnable memory network to which both generators and discriminators can access. Generators can effectively learn representation of training samples to understand underlying cluster distributions of data, which ease the structure discontinuity problem. At the same time, discriminators can better memorize clusters of previously generated samples, which mitigate the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which involves a memory network that is unsupervisedly trainable and integrable to many existing GAN models. With evaluations on multiple datasets such as Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable, and generates realistic image samples of high visual fidelity. The memoryGAN also achieves the state-of-the-art inception scores over unsupervised GAN models on the CIFAR10 dataset, without any optimization tricks and weaker divergences.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
91,894
cs/0207093
Preference Queries
The handling of user preferences is becoming an increasingly important issue in present-day information systems. Among others, preferences are used for information filtering and extraction to reduce the volume of data presented to the user. They are also used to keep track of user profiles and formulate policies to improve and automate decision making. We propose here a simple, logical framework for formulating preferences as preference formulas. The framework does not impose any restrictions on the preference relations and allows arbitrary operation and predicate signatures in preference formulas. It also makes the composition of preference relations straightforward. We propose a simple, natural embedding of preference formulas into relational algebra (and SQL) through a single winnow operator parameterized by a preference formula. The embedding makes possible the formulation of complex preference queries, e.g., involving aggregation, by piggybacking on existing SQL constructs. It also leads in a natural way to the definition of further, preference-related concepts like ranking. Finally, we present general algebraic laws governing the winnow operator and its interaction with other relational algebra operators. The preconditions on the applicability of the laws are captured by logical formulas. The laws provide a formal foundation for the algebraic optimization of preference queries. We demonstrate the usefulness of our approach through numerous examples.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
537,670
2209.05227
DUET: A Tuning-Free Device-Cloud Collaborative Parameters Generation Framework for Efficient Device Model Generalization
Device Model Generalization (DMG) is a practical yet under-investigated research topic for on-device machine learning applications. It aims to improve the generalization ability of pre-trained models when deployed on resource-constrained devices, such as improving the performance of pre-trained cloud models on smart mobiles. While quite a lot of works have investigated the data distribution shift across clouds and devices, most of them focus on model fine-tuning on personalized data for individual devices to facilitate DMG. Despite their promise, these approaches require on-device re-training, which is practically infeasible due to the overfitting problem and high time delay when performing gradient calculation on real-time data. In this paper, we argue that the computational cost brought by fine-tuning can be rather unnecessary. We consequently present a novel perspective to improving DMG without increasing computational cost, i.e., device-specific parameter generation which directly maps data distribution to parameters. Specifically, we propose an efficient Device-cloUd collaborative parametErs generaTion framework DUET. DUET is deployed on a powerful cloud server that only requires the low cost of forwarding propagation and low time delay of data transmission between the device and the cloud. By doing so, DUET can rehearse the device-specific model weight realizations conditioned on the personalized real-time data for an individual device. Importantly, our DUET elegantly connects the cloud and device as a 'duet' collaboration, frees the DMG from fine-tuning, and enables a faster and more accurate DMG paradigm. We conduct an extensive experimental study of DUET on three public datasets, and the experimental results confirm our framework's effectiveness and generalisability for different DMG tasks.
false
false
false
false
true
true
false
false
false
false
false
true
false
false
false
false
false
true
317,030
2211.14864
A Faster, Lighter and Stronger Deep Learning-Based Approach for Place Recognition
Visual Place Recognition is an essential component of systems for camera localization and loop closure detection, and it has attracted widespread interest in multiple domains such as computer vision, robotics and AR/VR. In this work, we propose a faster, lighter and stronger approach that can generate models with fewer parameters and can spend less time in the inference stage. We designed RepVGG-lite as the backbone network in our architecture; it is more discriminative than other general networks in the Place Recognition task. RepVGG-lite has more speed advantages while achieving higher performance. We extract only one scale of patch-level descriptors from global descriptors in the feature extraction stage. Then we design a trainable feature matcher to exploit both spatial relationships of the features and their visual appearance, which is based on the attention mechanism. Comprehensive experiments on challenging benchmark datasets demonstrate that the proposed method outperforms other recent state-of-the-art learned approaches while achieving even higher inference speed. Our system has 14 times fewer params than Patch-NetVLAD, 6.8 times lower theoretical FLOPs, and runs 21 and 33 times faster in feature extraction and feature matching. Moreover, the performance of our approach is 0.5\% better than Patch-NetVLAD in Recall@1. We used subsets of the Mapillary Street Level Sequences dataset to conduct experiments for all other challenging conditions.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
333,026
2210.01820
MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models
This paper presents MOAT, a family of neural networks that build on top of MObile convolution (i.e., inverted residual blocks) and ATtention. Unlike the current works that stack separate mobile convolution and transformer blocks, we effectively merge them into a MOAT block. Starting with a standard Transformer block, we replace its multi-layer perceptron with a mobile convolution block, and further reorder it before the self-attention operation. The mobile convolution block not only enhances the network representation capacity, but also produces better downsampled features. Our conceptually simple MOAT networks are surprisingly effective, achieving 89.1% / 81.5% top-1 accuracy on ImageNet-1K / ImageNet-1K-V2 with ImageNet22K pretraining. Additionally, MOAT can be seamlessly applied to downstream tasks that require large resolution inputs by simply converting the global attention to window attention. Thanks to the mobile convolution that effectively exchanges local information between pixels (and thus cross-windows), MOAT does not need the extra window-shifting mechanism. As a result, on COCO object detection, MOAT achieves 59.2% box AP with 227M model parameters (single-scale inference, and hard NMS), and on ADE20K semantic segmentation, MOAT attains 57.6% mIoU with 496M model parameters (single-scale inference). Finally, the tiny-MOAT family, obtained by simply reducing the channel sizes, also surprisingly outperforms several mobile-specific transformer-based models on ImageNet. The tiny-MOAT family is also benchmarked on downstream tasks, serving as a baseline for the community. We hope our simple yet effective MOAT will inspire more seamless integration of convolution and self-attention. Code is publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
321,416
2309.02401
Prototype-based Dataset Comparison
Dataset summarisation is a fruitful approach to dataset inspection. However, when applied to a single dataset the discovery of visual concepts is restricted to those most prominent. We argue that a comparative approach can expand upon this paradigm to enable richer forms of dataset inspection that go beyond the most prominent concepts. To enable dataset comparison we present a module that learns concept-level prototypes across datasets. We leverage self-supervised learning to discover these prototypes without supervision, and we demonstrate the benefits of our approach in two case-studies. Our findings show that dataset comparison extends dataset inspection and we hope to encourage more works in this direction. Code and usage instructions available at https://github.com/Nanne/ProtoSim
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
390,026
2307.13332
The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Theoretical guarantees in reinforcement learning (RL) are known to suffer multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet, the nature of such \emph{approximation factors} -- especially their optimal form in a given learning problem -- is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor in a broad spectrum of settings, such as with the weighted $L_2$-norm (where the weighting is the offline state distribution), the $L_\infty$ norm, the presence vs. absence of state aliasing, and full vs. partial coverage of the state space. We establish the optimal asymptotic approximation factors (up to constants) for all of these settings. In particular, our bounds identify two instance-dependent factors for the $L_2(\mu)$ norm and only one for the $L_\infty$ norm, which are shown to dictate the hardness of off-policy evaluation under misspecification.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
381,550
2309.05689
Large Language Model for Science: A Study on P vs. NP
In this work, we use large language models (LLMs) to augment and accelerate research on the P versus NP problem, one of the most important open problems in theoretical computer science and mathematics. Specifically, we propose Socratic reasoning, a general framework that promotes in-depth thinking with LLMs for complex problem-solving. Socratic reasoning encourages LLMs to recursively discover, solve, and integrate problems while facilitating self-evaluation and refinement. Our pilot study on the P vs. NP problem shows that GPT-4 successfully produces a proof schema and engages in rigorous reasoning throughout 97 dialogue turns, concluding "P $\neq$ NP", which is in alignment with (Xu and Zhou, 2023). The investigation uncovers novel insights within the extensive solution space of LLMs, shedding light on LLM for Science.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
391,174
2311.00424
Tracking capelin spawning migration -- Integrating environmental data and Individual-based modeling
This paper presents a modeling framework for tracking the spawning migration of the capelin, which is a fish species in the Barents Sea. The framework combines an individual-based model (IBM) with artificial neural networks (ANNs). The ANNs determine the direction of the fish's movement based on local environmental information, while a genetic algorithm and fitness function assess the suitability of the proposed directions. The framework's efficacy is demonstrated by comparing the spatial distributions of modeled and empirical potential spawners. The proposed model successfully replicates the southeastward movement of capelin during their spawning migration, accurately capturing the distribution of spawning fish over historical spawning sites along the eastern coast of northern Norway. Furthermore, the paper compares three migration models: passive swimmers, taxis movement based on temperature gradients, and restricted-area search, along with our proposed approach. The results reveal that our approach outperforms the other models in mimicking the migration pattern. Most spawning stocks managed to reach the spawning sites, unlike the other models where water currents played a significant role in pushing the fish away from the coast. The temperature gradient detection model and restricted-area search model are found to be inadequate for accurately simulating capelin spawning migration in the Barents Sea due to complex oceanographic conditions.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
404,645
2211.15644
Efficient Mirror Detection via Multi-level Heterogeneous Learning
We present HetNet (Multi-level \textbf{Het}erogeneous \textbf{Net}work), a highly efficient mirror detection network. Current mirror detection methods focus more on performance than efficiency, limiting real-time applications (such as drones). Their lack of efficiency arises from the common design of adopting homogeneous modules at different levels, which ignores the difference between different levels of features. In contrast, HetNet detects potential mirror regions initially through low-level understandings (\textit{e.g.}, intensity contrasts) and then combines with high-level understandings (contextual discontinuity for instance) to finalize the predictions. To perform accurate yet efficient mirror detection, HetNet follows an effective architecture that obtains specific information at different stages to detect mirrors. We further propose a multi-orientation intensity-based contrasted module (MIC) and a reflection semantic logical module (RSL), equipped on HetNet, to predict potential mirror regions by low-level understandings and analyze semantic logic in scenarios by high-level understandings, respectively. Compared to the state-of-the-art method, HetNet runs 664$\%$ faster and draws an average performance gain of 8.9$\%$ on MAE, 3.1$\%$ on IoU, and 2.0$\%$ on F-measure on two mirror detection benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
333,343
1702.07634
Thermal Transients in District Heating Systems
Heat fluxes in a district heating pipeline systems need to be controlled on the scale from minutes to an hour to adjust to evolving demand. There are two principal ways to control the heat flux - keep temperature fixed but adjust velocity of the carrier (typically water) or keep the velocity flow steady but then adjust temperature at the heat producing source (heat plant). We study the latter scenario, commonly used for operations in Russia and Nordic countries, and analyze dynamics of the heat front as it propagates through the system. Steady velocity flows in the district heating pipelines are typically turbulent and incompressible. Changes in the heat, on either consumption or production sides, lead to slow transients which last from tens of minutes to hours. We classify relevant physical phenomena in a single pipe, e.g. turbulent spread of the turbulent front. We then explain how to describe dynamics of temperature and heat flux evolution over a network efficiently and illustrate the network solution on a simple example involving one producer and one consumer of heat connected by "hot" and "cold" pipes. We conclude the manuscript motivating future research directions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
68,814
1809.00773
Sequence-to-Action: End-to-End Semantic Graph Generation for Semantic Parsing
This paper proposes a neural semantic parsing approach -- Sequence-to-Action, which models semantic parsing as an end-to-end semantic graph generation process. Our method simultaneously leverages the advantages from two recent promising directions of semantic parsing. Firstly, our model uses a semantic graph to represent the meaning of a sentence, which has a tight-coupling with knowledge bases. Secondly, by leveraging the powerful representation learning and prediction ability of neural network models, we propose a RNN model which can effectively map sentences to action sequences for semantic graph generation. Experiments show that our method achieves state-of-the-art performance on OVERNIGHT dataset and gets competitive performance on GEO and ATIS datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
106,648
1709.01602
Dynamic Multiscale Tree Learning Using Ensemble Strong Classifiers for Multi-label Segmentation of Medical Images with Lesions
We introduce a dynamic multiscale tree (DMT) architecture that learns how to leverage the strengths of different existing classifiers for supervised multi-label image segmentation. Unlike previous works that simply aggregate or cascade classifiers for addressing image segmentation and labeling tasks, we propose to embed strong classifiers into a tree structure that allows bi-directional flow of information between its classifier nodes to gradually improve their performances. Our DMT is a generic classification model that inherently embeds different cascades of classifiers while enhancing learning transfer between them to boost up their classification accuracies. Specifically, each node in our DMT can nest a Structured Random Forest (SRF) classifier or a Bayesian Network (BN) classifier. The proposed SRF-BN DMT architecture has several appealing properties. First, while SRF operates at a patch-level (regular image region), BN operates at the super-pixel level (irregular image region), thereby enabling the DMT to integrate multi-level image knowledge in the learning process. Second, although BN is powerful in modeling dependencies between image elements (superpixels, edges) and their features, the learning of its structure and parameters is challenging. On the other hand, SRF may fail to accurately detect very irregular object boundaries. The proposed DMT robustly overcomes these limitations for both classifiers through the ascending and descending flow of contextual information between each parent node and its children nodes. Third, we train DMT using different scales, where we progressively decrease the patch and superpixel sizes as we go deeper along the tree edges nearing its leaf nodes. Last, DMT demonstrates its outperformance in comparison to several state-of-the-art segmentation methods for multi-labeling of brain images with gliomas.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
80,111
2412.03268
RFSR: Improving ISR Diffusion Models via Reward Feedback Learning
Generative diffusion models (DM) have been extensively utilized in image super-resolution (ISR). Most of the existing methods adopt the denoising loss from DDPMs for model optimization. We posit that introducing reward feedback learning to finetune the existing models can further improve the quality of the generated images. In this paper, we propose a timestep-aware training strategy with reward feedback learning. Specifically, in the initial denoising stages of ISR diffusion, we apply low-frequency constraints to super-resolution (SR) images to maintain structural stability. In the later denoising stages, we use reward feedback learning to improve the perceptual and aesthetic quality of the SR images. In addition, we incorporate Gram-KL regularization to alleviate stylization caused by reward hacking. Our method can be integrated into any diffusion-based ISR model in a plug-and-play manner. Experiments show that ISR diffusion models, when fine-tuned with our method, significantly improve the perceptual and aesthetic quality of SR images, achieving excellent subjective results. Code: https://github.com/sxpro/RFSR
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,893
2211.08864
PrivacyProber: Assessment and Detection of Soft-Biometric Privacy-Enhancing Techniques
Soft-biometric privacy-enhancing techniques represent machine learning methods that aim to: (i) mitigate privacy concerns associated with face recognition technology by suppressing selected soft-biometric attributes in facial images (e.g., gender, age, ethnicity) and (ii) make unsolicited extraction of sensitive personal information infeasible. Because such techniques are increasingly used in real-world applications, it is imperative to understand to what extent the privacy enhancement can be inverted and how much attribute information can be recovered from privacy-enhanced images. While these aspects are critical, they have not been investigated in the literature. We, therefore, study the robustness of several state-of-the-art soft-biometric privacy-enhancing techniques to attribute recovery attempts. We propose PrivacyProber, a high-level framework for restoring soft-biometric information from privacy-enhanced facial images, and apply it for attribute recovery in comprehensive experiments on three public face datasets, i.e., LFW, MUCT and Adience. Our experiments show that the proposed framework is able to restore a considerable amount of suppressed information, regardless of the privacy-enhancing technique used, but also that there are significant differences between the considered privacy models. These results point to the need for novel mechanisms that can improve the robustness of existing privacy-enhancing techniques and secure them against potential adversaries trying to restore suppressed information.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
330,801
2402.04599
Meet JEANIE: a Similarity Measure for 3D Skeleton Sequences via Temporal-Viewpoint Alignment
Video sequences exhibit significant nuisance variations (undesired effects) of speed of actions, temporal locations, and subjects' poses, leading to temporal-viewpoint misalignment when comparing two sets of frames or evaluating the similarity of two sequences. Thus, we propose Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE) for sequence pairs. In particular, we focus on 3D skeleton sequences whose camera and subjects' poses can be easily manipulated in 3D. We evaluate JEANIE on skeletal Few-shot Action Recognition (FSAR), where matching well temporal blocks (temporal chunks that make up a sequence) of support-query sequence pairs (by factoring out nuisance variations) is essential due to limited samples of novel classes. Given a query sequence, we create its several views by simulating several camera locations. For a support sequence, we match it with view-simulated query sequences, as in the popular Dynamic Time Warping (DTW). Specifically, each support temporal block can be matched to the query temporal block with the same or adjacent (next) temporal index, and adjacent camera views to achieve joint local temporal-viewpoint warping. JEANIE selects the smallest distance among matching paths with different temporal-viewpoint warping patterns, an advantage over DTW which only performs temporal alignment. We also propose an unsupervised FSAR akin to clustering of sequences with JEANIE as a distance measure. JEANIE achieves state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II on supervised and unsupervised FSAR, and their meta-learning inspired fusion.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
427,520
2310.07355
IMITATE: Clinical Prior Guided Hierarchical Vision-Language Pre-training
In the field of medical Vision-Language Pre-training (VLP), significant efforts have been devoted to deriving text and image features from both clinical reports and associated medical images. However, most existing methods may have overlooked the opportunity in leveraging the inherent hierarchical structure of clinical reports, which are generally split into `findings' for descriptive content and `impressions' for conclusive observation. Instead of utilizing this rich, structured format, current medical VLP approaches often simplify the report into either a unified entity or fragmented tokens. In this work, we propose a novel clinical prior guided VLP framework named IMITATE to learn the structure information from medical reports with hierarchical vision-language alignment. The framework derives multi-level visual features from the chest X-ray (CXR) images and separately aligns these features with the descriptive and the conclusive text encoded in the hierarchical medical report. Furthermore, a new clinical-informed contrastive loss is introduced for cross-modal learning, which accounts for clinical prior knowledge in formulating sample correlations in contrastive learning. The proposed model, IMITATE, outperforms baseline VLP methods across six different datasets, spanning five medical imaging downstream tasks. Comprehensive experimental results highlight the advantages of integrating the hierarchical structure of medical reports for vision-language alignment. The code related to this paper is available at https://github.com/cheliu-computation/IMITATE-TMI2024.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
398,942
2307.04973
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image
Recently, Segmenting Anything has taken an important step towards general artificial intelligence. At the same time, its reliability and fairness have also attracted great attention, especially in the field of health care. In this study, we propose multi-box prompts triggered uncertainty estimation for SAM cues to demonstrate the reliability of segmented lesions or tissues. We estimate the distribution of SAM predictions via Monte Carlo with prior distribution parameters, which employs different prompts as formulation of test-time augmentation. Our experimental results found that multi-box prompts augmentation improve the SAM performance, and endowed each pixel with uncertainty. This provides the first paradigm for a reliable SAM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
378,569
1907.12133
An Empirical Study on Leveraging Scene Graphs for Visual Question Answering
Visual question answering (Visual QA) has attracted significant attention these years. While a variety of algorithms have been proposed, most of them are built upon different combinations of image and language features as well as multi-modal attention and fusion. In this paper, we investigate an alternative approach inspired by conventional QA systems that operate on knowledge graphs. Specifically, we investigate the use of scene graphs derived from images for Visual QA: an image is abstractly represented by a graph with nodes corresponding to object entities and edges to object relationships. We adapt the recently proposed graph network (GN) to encode the scene graph and perform structured reasoning according to the input question. Our empirical studies demonstrate that scene graphs can already capture essential information of images and graph networks have the potential to outperform state-of-the-art Visual QA algorithms but with a much cleaner architecture. By analyzing the features generated by GNs we can further interpret the reasoning process, suggesting a promising direction towards explainable Visual QA.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
140,041
2107.11856
Graph Representation Learning on Tissue-Specific Multi-Omics
Combining different modalities of data from human tissues has been critical in advancing biomedical research and personalised medical care. In this study, we leverage a graph embedding model (i.e VGAE) to perform link prediction on tissue-specific Gene-Gene Interaction (GGI) networks. Through ablation experiments, we prove that the combination of multiple biological modalities (i.e multi-omics) leads to powerful embeddings and better link prediction performances. Our evaluation shows that the integration of gene methylation profiles and RNA-sequencing data significantly improves the link prediction performance. Overall, the combination of RNA-sequencing and gene methylation data leads to a link prediction accuracy of 71% on GGI networks. By harnessing graph representation learning on multi-omics data, our work brings novel insights to the current literature on multi-omics integration in bioinformatics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,722
1812.10869
Hypergraph Clustering: A Modularity Maximization Approach
Clustering on hypergraphs has been garnering increased attention with potential applications in network analysis, VLSI design and computer vision, among others. In this work, we generalize the framework of modularity maximization for clustering on hypergraphs. To this end, we introduce a hypergraph null model, analogous to the configuration model on undirected graphs, and a node-degree preserving reduction to work with this model. This is used to define a modularity function that can be maximized using the popular and fast Louvain algorithm. We additionally propose a refinement over this clustering, by reweighting cut hyperedges in an iterative fashion. The efficacy and efficiency of our methods are demonstrated on several real-world datasets.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
117,454
1901.07132
Universal Rules for Fooling Deep Neural Networks based Text Classification
Recently, deep learning based natural language processing techniques are being extensively used to deal with spam mail, censorship evaluation in social networks, among others. However, there is only a couple of works evaluating the vulnerabilities of such deep neural networks. Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and therefore could turn any text sample in an adversarial one. In fact, the universal rules do not use any information from the method itself (no information from the method, gradient information or training dataset information is used), making them black-box universal attacks. In other words, the universal rules are sample and method agnostic. By proposing a coevolutionary optimization algorithm we show that it is possible to create universal rules that can automatically craft imperceptible adversarial samples (only less than five perturbations which are close to misspelling are inserted in the text sample). A comparison with a random search algorithm further justifies the strength of the method. Thus, universal rules for fooling networks are here shown to exist. Hopefully, the results from this work will impact the development of yet more sample and model agnostic attacks as well as their defenses, culminating in perhaps a new age for artificial intelligence.
false
false
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
119,154
2206.12848
Analysis of Stochastic Processes through Replay Buffers
Replay buffers are a key component in many reinforcement learning schemes. Yet, their theoretical properties are not fully understood. In this paper we analyze a system where a stochastic process X is pushed into a replay buffer and then randomly sampled to generate a stochastic process Y from the replay buffer. We provide an analysis of the properties of the sampled process such as stationarity, Markovity and autocorrelation in terms of the properties of the original process. Our theoretical analysis sheds light on why replay buffer may be a good de-correlator. Our analysis provides theoretical tools for proving the convergence of replay buffer based algorithms which are prevalent in reinforcement learning schemes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
304,754
2501.13950
DEFEND: A Large-scale 1M Dataset and Foundation Model for Tobacco Addiction Prevention
While tobacco advertising innovates at unprecedented speed, traditional surveillance methods remain frozen in time, especially in the context of social media. The lack of large-scale, comprehensive datasets and sophisticated monitoring systems has created a widening gap between industry advancement and public health oversight. This paper addresses this critical challenge by introducing Tobacco-1M, a comprehensive dataset of one million tobacco product images with hierarchical labels spanning 75 product categories, and DEFEND, a novel foundation model for tobacco product understanding. Our approach integrates a Feature Enhancement Module for rich multimodal representation learning, a Local-Global Visual Coherence mechanism for detailed feature discrimination, and an Enhanced Image-Text Alignment strategy for precise product characterization. Experimental results demonstrate DEFEND's superior performance, achieving 83.1% accuracy in product classification and 73.8% in visual question-answering tasks, outperforming existing methods by significant margins. Moreover, the model exhibits robust zero-shot learning capabilities with 45.6% accuracy on novel product categories. This work provides regulatory bodies and public health researchers with powerful tools for monitoring emerging tobacco products and marketing strategies, potentially revolutionizing approaches to tobacco control and public health surveillance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
526,896
1209.4022
Game Theoretic Formation of a Centrality Based Network
We model the formation of networks as a game where players aspire to maximize their own centrality by increasing the number of other players to which they are path-wise connected, while simultaneously incurring a cost for each added adjacent edge. We simulate the interactions between players using an algorithm that factors in rational strategic behavior based on a common objective function. The resulting networks exhibit pairwise stability, from which we derive necessary stable conditions for specific graph topologies. We then expand the model to simulate non-trivial games with large numbers of players. We show that using conditions necessary for the stability of star topologies we can induce the formation of hub players that positively impact the total welfare of the network.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
18,615
1903.01024
Mixed-Triggered Reliable Control for Singular Networked Cascade Control Systems with Randomly Occurring Cyber Attack
In this paper, the issue of mixed-triggered reliable dissipative control is investigated for singular networked cascade control systems (NCCSs) with actuator saturation and randomly occurring cyber attacks. In order to utilize the limited communication resources effectively, a more general mixed-triggered scheme is established which includes both schemes namely time-triggered and event-triggered in a single framework. In particular, two main factors are incorporated to the proposed singular NCCS model namely, actuator saturation and randomly occurring cyber attack, which is an important role to damage the overall network security. By employing Lyapunov-Krasovskii stability theory, a new set of sufficient conditions in terms of linear matrix inequalities (LMIs) is derived to guarantee the singular NCCSs to be admissible and strictly (Q,S,R)-dissipative. Subsequently, a power plant boiler-turbine system based on a numerical example is provided to demonstrate the effectiveness of the proposed control scheme.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
123,170
1803.03916
Deep reinforcement learning for time series: playing idealized trading games
Deep Q-learning is investigated as an end-to-end solution to estimate the optimal strategies for acting on time series input. Experiments are conducted on two idealized trading games. 1) Univariate: the only input is a wave-like price time series, and 2) Bivariate: the input includes a random stepwise price time series and a noisy signal time series, which is positively correlated with future price changes. The Univariate game tests whether the agent can capture the underlying dynamics, and the Bivariate game tests whether the agent can utilize the hidden relation among the inputs. Stacked Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM) units, Convolutional Neural Network (CNN), and multi-layer perceptron (MLP) are used to model Q values. For both games, all agents successfully find a profitable strategy. The GRU-based agents show best overall performance in the Univariate game, while the MLP-based agents outperform others in the Bivariate game.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
92,355
2007.05353
List Viterbi Decoding of PAC Codes
Polarization-adjusted convolutional (PAC) codes are special concatenated codes in which we employ a one-to-one convolutional transform as a pre-coding step before the polar transform. In this scheme, the polar transform (as a mapper) and the successive cancellation process (as a demapper) present a synthetic vector channel to the convolutional transformation. The numerical results show that this concatenation improves the Hamming distance properties of polar codes. In this work, we implement the parallel list Viterbi algorithm (LVA) and show how the error correction performance moves from the poor performance of the Viterbi algorithm (VA) to the superior performance of list decoding by changing the constraint length, list size, and the sorting strategy (local sorting and global sorting) in the LVA. Also, we analyze the latency of the local sorting of the paths in LVA relative to the global sorting in the list decoding and the trade-off between the sorting latency and the error correction performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
186,648
2307.03748
Incentive-Theoretic Bayesian Inference for Collaborative Science
Contemporary scientific research is a distributed, collaborative endeavor, carried out by teams of researchers, regulatory institutions, funding agencies, commercial partners, and scientific bodies, all interacting with each other and facing different incentives. To maintain scientific rigor, statistical methods should acknowledge this state of affairs. To this end, we study hypothesis testing when there is an agent (e.g., a researcher or a pharmaceutical company) with a private prior about an unknown parameter and a principal (e.g., a policymaker or regulator) who wishes to make decisions based on the parameter value. The agent chooses whether to run a statistical trial based on their private prior and then the result of the trial is used by the principal to reach a decision. We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior -- their choice to run a trial or not. In particular, we show how the principal can design a policy to elucidate partial information about the agent's private prior beliefs and use this to control the posterior probability of the null. One implication is a simple guideline for the choice of significance threshold in clinical trials: the type-I error level should be set to be strictly less than the cost of the trial divided by the firm's profit if the trial is successful.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
378,136
1912.12795
Quantifying the Performance of Federated Transfer Learning
The scarcity of data and isolated data islands encourage different organizations to share data with each other to train machine learning models. However, there are increasing concerns on the problems of data privacy and security, which urges people to seek a solution like Federated Transfer Learning (FTL) to share training data without violating data privacy. FTL leverages transfer learning techniques to utilize data from different sources for training, while achieving data privacy protection without significant accuracy loss. However, the benefits come with a cost of extra computation and communication consumption, resulting in efficiency problems. In order to efficiently deploy and scale up FTL solutions in practice, we need a deep understanding on how the infrastructure affects the efficiency of FTL. Our paper tries to answer this question by quantitatively measuring a real-world FTL implementation FATE on Google Cloud. According to the results of carefully designed experiments, we verified that the following bottlenecks can be further optimized: 1) Inter-process communication is the major bottleneck; 2) Data encryption adds considerable computation overhead; 3) The Internet networking condition affects the performance a lot when the model is large.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
158,930
1906.05828
Overcoming Mean-Field Approximations in Recurrent Gaussian Process Models
We identify a new variational inference scheme for dynamical systems whose transition function is modelled by a Gaussian process. Inference in this setting has either employed computationally intensive MCMC methods, or relied on factorisations of the variational posterior. As we demonstrate in our experiments, the factorisation between latent system states and transition function can lead to a miscalibrated posterior and to learning unnecessarily large noise terms. We eliminate this factorisation by explicitly modelling the dependence between state trajectories and the Gaussian process posterior. Samples of the latent states can then be tractably generated by conditioning on this representation. The method we obtain (VCDT: variationally coupled dynamics and trajectories) gives better predictive performance and more calibrated estimates of the transition function, yet maintains the same time and space complexities as mean-field methods. Code is available at: github.com/ialong/GPt.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
135,133
2201.09786
Aerial Energy Provisioning for Massive Energy-Constrained IoT by UAVs
Autonomy of devices is a major challenge in many Internet of Things (IoT) applications, in particular when the nodes are deployed remotely or difficult to assess places. In this paper we present an approach to provide energy to these devices by Unmanned Aerial Vehicles (UAVs). Therefore, the two major challenges, finding and charging the node are presented. We propose a model to give the energy constrained node an unlimited autonomy by taken the Wireless Power Transfer (WPT) link and battery capacity into account. Selecting the most suitable battery technology allows a reduction in battery capacity and waste. Moreover, an upgrade of existing IoT nodes is feasible with a limited impact on the design and form factor.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
276,782
2403.02496
Choose Your Own Adventure: Interactive E-Books to Improve Word Knowledge and Comprehension Skills
The purpose of this feasibility study was to examine the potential impact of reading digital interactive e-books on essential skills that support reading comprehension with third-fifth grade students. Students read two e-Books that taught word learning and comprehension monitoring strategies in the service of learning difficult vocabulary and targeted science concepts about hurricanes. We investigated whether specific comprehension strategies including word learning and strategies that supported general reading comprehension, summarization, and question generation, show promise of effectiveness in building vocabulary knowledge and comprehension skills in the e-Books. Students were assigned to read one of three versions of each of the e-Books, each version implemented one strategy. The books employed a choose-your-adventure format with embedded comprehension questions that provided students with immediate feedback on their responses. Paired samples t-tests were run to examine pre-to-post differences in learning the targeted vocabulary and science concepts taught in both e-Books. For both e-Books, students demonstrated significant gains in word learning and on the targeted hurricane concepts. Additionally, Hierarchical Linear Modeling (HLM) revealed that no one strategy was more associated with larger gains than the other. Performance on the embedded questions in the books was also associated with greater posttest outcomes for both e-Books. This work discusses important considerations for implementation and future development of e-books that can enhance student engagement and improve reading comprehension.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
434,820
2103.09414
Toward Neural-Network-Guided Program Synthesis and Verification
We propose a novel framework of program and invariant synthesis called neural network-guided synthesis. We first show that, by suitably designing and training neural networks, we can extract logical formulas over integers from the weights and biases of the trained neural networks. Based on the idea, we have implemented a tool to synthesize formulas from positive/negative examples and implication constraints, and obtained promising experimental results. We also discuss two applications of our synthesis method. One is the use of our tool for qualifier discovery in the framework of ICE-learning-based CHC solving, which can in turn be applied to program verification and inductive invariant synthesis. Another application is to a new program development framework called oracle-based programming, which is a neural-network-guided variation of Solar-Lezama's program synthesis by sketching.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
225,157
2106.12561
Fine-Grained Data Selection for Improved Energy Efficiency of Federated Edge Learning
In Federated edge learning (FEEL), energy-constrained devices at the network edge consume significant energy when training and uploading their local machine learning models, leading to a decrease in their lifetime. This work proposes novel solutions for energy-efficient FEEL by jointly considering local training data, available computation, and communications resources, and deadline constraints of FEEL rounds to reduce energy consumption. This paper considers a system model where the edge server is equipped with multiple antennas employing beamforming techniques to communicate with the local users through orthogonal channels. Specifically, we consider a problem that aims to find the optimal user's resources, including the fine-grained selection of relevant training samples, bandwidth, transmission power, beamforming weights, and processing speed with the goal of minimizing the total energy consumption given a deadline constraint on the communication rounds of FEEL. Then, we devise tractable solutions by first proposing a novel fine-grained training algorithm that excludes less relevant training samples and effectively chooses only the samples that improve the model's performance. After that, we derive closed-form solutions, followed by a Golden-Section-based iterative algorithm to find the optimal computation and communication resources that minimize energy consumption. Experiments using MNIST and CIFAR-10 datasets demonstrate that our proposed algorithms considerably outperform the state-of-the-art solutions as energy consumption decreases by 79% for MNIST and 73% for CIFAR-10 datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
242,764
2007.11709
Adversarial Attacks against Face Recognition: A Comprehensive Study
Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). In an advanced FR system with deep learning-based architecture, however, promoting the recognition efficiency alone is not sufficient, and the system should also withstand potential kinds of attacks designed to target its proficiency. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible or perceptible but natural-looking adversarial input images that drive the model to incorrect output predictions. In this article, we present a comprehensive survey on adversarial attacks against FR systems and elaborate on the competence of new countermeasures against them. Further, we propose a taxonomy of existing attack and defense methods based on different criteria. We compare attack methods on the orientation and attributes and defense approaches on the category. Finally, we explore the challenges and potential research direction.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
188,609
2412.06299
4D Gaussian Splatting with Scale-aware Residual Field and Adaptive Optimization for Real-time Rendering of Temporally Complex Dynamic Scenes
Reconstructing dynamic scenes from video sequences is a highly promising task in the multimedia domain. While previous methods have made progress, they often struggle with slow rendering and managing temporal complexities such as significant motion and object appearance/disappearance. In this paper, we propose SaRO-GS as a novel dynamic scene representation capable of achieving real-time rendering while effectively handling temporal complexities in dynamic scenes. To address the issue of slow rendering speed, we adopt a Gaussian primitive-based representation and optimize the Gaussians in 4D space, which facilitates real-time rendering with the assistance of 3D Gaussian Splatting. Additionally, to handle temporally complex dynamic scenes, we introduce a Scale-aware Residual Field. This field considers the size information of each Gaussian primitive while encoding its residual feature and aligns with the self-splitting behavior of Gaussian primitives. Furthermore, we propose an Adaptive Optimization Schedule, which assigns different optimization strategies to Gaussian primitives based on their distinct temporal properties, thereby expediting the reconstruction of dynamic regions. Through evaluations on monocular and multi-view datasets, our method has demonstrated state-of-the-art performance. Please see our project page at https://yjb6.github.io/SaRO-GS.github.io.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
515,196
2301.05797
FedSSC: Shared Supervised-Contrastive Federated Learning
Federated learning is widely used to perform decentralized training of a global model on multiple devices while preserving the data privacy of each device. However, it suffers from heterogeneous local data on each training device which increases the difficulty to reach the same level of accuracy as the centralized training. Supervised Contrastive Learning which outperform cross-entropy tries to minimizes the difference between feature space of points belongs to the same class and pushes away points from different classes. We propose Supervised Contrastive Federated Learning in which devices can share the learned class-wise feature spaces with each other and add the supervised-contrastive learning loss as a regularization term to foster the feature space learning. The loss tries to minimize the cosine similarity distance between the feature map and the averaged feature map from another device in the same class and maximizes the distance between the feature map and that in a different class. This new regularization term when added on top of the moon regularization term is found to outperform the other state-of-the-art regularization terms in solving the heterogeneous data distribution problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
340,447
2404.01618
Multi-Robot Collaborative Navigation with Formation Adaptation
Multi-robot collaborative navigation is an essential ability where teamwork and synchronization are keys. In complex and uncertain environments, adaptive formation is vital, as rigid formations prove to be inadequate. The ability of robots to dynamically adjust their formation enables navigation through unpredictable spaces, maintaining cohesion, and effectively responding to environmental challenges. In this paper, we introduce a novel approach that uses bi-level learning framework. Specifically, we use graph learning at a high level for group coordination and reinforcement learning for individual navigation. We innovate by integrating a spring-damper model within the reinforcement learning reward mechanism, addressing the rigidity of traditional formation control methods. During execution, our approach enables a team of robots to successfully navigate challenging environments, maintain a desired formation shape, and dynamically adjust their formation scale based on environmental information. We conduct extensive experiments to evaluate our approach across three distinct formation scenarios in multi-robot navigation: circle, line, and wedge. Experimental results show that our approach achieves promising results and scalability on multi-robot navigation with formation adaptation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
443,498
2305.02780
Interpretable Regional Descriptors: Hyperbox-Based Local Explanations
This work introduces interpretable regional descriptors, or IRDs, for local, model-agnostic interpretations. IRDs are hyperboxes that describe how an observation's feature values can be changed without affecting its prediction. They justify a prediction by providing a set of "even if" arguments (semi-factual explanations), and they indicate which features affect a prediction and whether pointwise biases or implausibilities exist. A concrete use case shows that this is valuable for both machine learning modelers and persons subject to a decision. We formalize the search for IRDs as an optimization problem and introduce a unifying framework for computing IRDs that covers desiderata, initialization techniques, and a post-processing method. We show how existing hyperbox methods can be adapted to fit into this unified framework. A benchmark study compares the methods based on several quality measures and identifies two strategies to improve IRDs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
362,171
2403.07949
Algorithmic Bayesian Epistemology
One aspect of the algorithmic lens in theoretical computer science is a view on other scientific disciplines that focuses on satisfactory solutions that adhere to real-world constraints, as opposed to solutions that would be optimal ignoring such constraints. The algorithmic lens has provided a unique and important perspective on many academic fields, including molecular biology, ecology, neuroscience, quantum physics, economics, and social science. This thesis applies the algorithmic lens to Bayesian epistemology. Traditional Bayesian epistemology provides a comprehensive framework for how an individual's beliefs should evolve upon receiving new information. However, these methods typically assume an exhaustive model of such information, including the correlation structure between different pieces of evidence. In reality, individuals might lack such an exhaustive model, while still needing to form beliefs. Beyond such informational constraints, an individual may be bounded by limited computation, or by limited communication with agents that have access to information, or by the strategic behavior of such agents. Even when these restrictions prevent the formation of a *perfectly* accurate belief, arriving at a *reasonably* accurate belief remains crucial. In this thesis, we establish fundamental possibility and impossibility results about belief formation under a variety of restrictions, and lay the groundwork for further exploration.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
437,112
1902.06130
Atlas-based automated detection of swim bladder in Medaka embryo
Fish embryo models are increasingly being used both for the assessment of chemicals efficacy and potential toxicity. This article proposes a methodology to automatically detect the swim bladder on 2D images of Medaka fish embryos seen either in dorsal view or in lateral view. After embryo segmentation and for each studied orientation, the method builds an atlas of a healthy embryo. This atlas is then used to define the region of interest and to guide the swim bladder segmentation with a discrete globally optimal active contour. Descriptors are subsequently designed from this segmentation. An automated random forest classifier is built from these descriptors in order to classify embryos with and without a swim bladder. The proposed method is assessed on a dataset of 261 images, containing 202 embryos with a swim bladder (where 196 are in dorsal view and 6 are in lateral view) and 59 without (where 43 are in dorsal view and 16 are in lateral view). We obtain an average precision rate of 95% in the total dataset following 5-fold cross-validation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,688
2301.03236
Optimistic Meta-Gradients
We study the connection between gradient-based meta-learning and convex optimisation. We observe that gradient descent with momentum is a special case of meta-gradients, and building on recent results in optimisation, we prove convergence rates for meta-learning in the single task setting. While a meta-learned update rule can yield faster convergence up to constant factor, it is not sufficient for acceleration. Instead, some form of optimism is required. We show that optimism in meta-learning can be captured through Bootstrapped Meta-Gradients (Flennerhag et al., 2022), providing deeper insight into its underlying mechanics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
339,741
2212.07860
Multi-Level Association Rule Mining for Wireless Network Time Series Data
Key performance indicators(KPIs) are of great significance in the monitoring of wireless network service quality. The network service quality can be improved by adjusting relevant configuration parameters(CPs) of the base station. However, there are numerous CPs and different cells may affect each other, which bring great challenges to the association analysis of wireless network data. In this paper, we propose an adjustable multi-level association rule mining framework, which can quantitatively mine association rules at each level with environmental information, including engineering parameters and performance management(PMs), and it has interpretability at each level. Specifically, We first cluster similar cells, then quantify KPIs and CPs, and integrate expert knowledge into the association rule mining model, which improve the robustness of the model. The experimental results in real world dataset prove the effectiveness of our method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
336,550
1912.13192
PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection
We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantages of efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computations and also to encode representative scene features. Given the high-quality 3D proposals generated by the voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction with multiple receptive fields. Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins by using only point clouds. Code is available at https://github.com/open-mmlab/OpenPCDet.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
159,029
2105.07593
Differentiable SLAM-net: Learning Particle SLAM for Visual Navigation
Simultaneous localization and mapping (SLAM) remains challenging for a number of downstream applications, such as visual robot navigation, because of rapid turns, featureless walls, and poor camera quality. We introduce the Differentiable SLAM Network (SLAM-net) along with a navigation architecture to enable planar robot navigation in previously unseen indoor environments. SLAM-net encodes a particle filter based SLAM algorithm in a differentiable computation graph, and learns task-oriented neural network components by backpropagating through the SLAM algorithm. Because it can optimize all model components jointly for the end-objective, SLAM-net learns to be robust in challenging conditions. We run experiments in the Habitat platform with different real-world RGB and RGB-D datasets. SLAM-net significantly outperforms the widely adopted ORB-SLAM in noisy conditions. Our navigation architecture with SLAM-net improves the state-of-the-art for the Habitat Challenge 2020 PointNav task by a large margin (37% to 64% success). Project website: http://sites.google.com/view/slamnet
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
235,483
2302.03202
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Recently, Language Models (LMs) instruction-tuned on multiple tasks, also known as multitask-prompted fine-tuning (MT), have shown the capability to generalize to unseen tasks. Previous work has shown that scaling the number of training tasks is the key component in making stronger MT LMs. In this work, we report an unexpected finding that an expert LM fine-tuned on just a single task can outperform an MT LM trained with 300+ different tasks on 11 different unseen datasets and on 13 datasets of the BIG-bench benchmark by a mean accuracy of 3.20% and 1.29%, respectively. This finding casts doubt on the previously held belief that simply scaling the number of tasks makes stronger MT LMs. Leveraging this finding, we further show that this distributed approach of training a separate expert LM per training task instead of a single MT LM for zero-shot inference possesses many benefits including (1) avoiding negative task transfer that often occurs during instruction tuning, (2) being able to continually learn new tasks without having to re-train on previous tasks to avoid catastrophic forgetting, and (3) showing compositional capabilities when merging individual experts together. The code is available at https://github.com/joeljang/ELM.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
344,254
2202.00602
Meta-Learning Hypothesis Spaces for Sequential Decision-making
Obtaining reliable, adaptive confidence sets for prediction functions (hypotheses) is a central challenge in sequential decision-making tasks, such as bandits and model-based reinforcement learning. These confidence sets typically rely on prior assumptions on the hypothesis space, e.g., the known kernel of a Reproducing Kernel Hilbert Space (RKHS). Hand-designing such kernels is error prone, and misspecification may lead to poor or unsafe performance. In this work, we propose to meta-learn a kernel from offline data (Meta-KeL). For the case where the unknown kernel is a combination of known base kernels, we develop an estimator based on structured sparsity. Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets that, with increasing amounts of offline data, become as tight as those given the true unknown kernel. We demonstrate our approach on the kernelized bandit problem (a.k.a. Bayesian optimization), where we establish regret bounds competitive with those given the true kernel. We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
278,201
2407.04733
Accurate Passive Radar via an Uncertainty-Aware Fusion of Wi-Fi Sensing Data
Wi-Fi devices can effectively be used as passive radar systems that sense what happens in the surroundings and can even discern human activity. We propose, for the first time, a principled architecture which employs Variational Auto-Encoders for estimating a latent distribution responsible for generating the data, and Evidential Deep Learning for its ability to sense out-of-distribution activities. We verify that the fused data processed by different antennas of the same Wi-Fi receiver results in increased accuracy of human activity recognition compared with the most recent benchmarks, while still being informative when facing out-of-distribution samples and enabling semantic interpretation of latent variables in terms of physical phenomena. The results of this paper are a first contribution toward the ultimate goal of providing a flexible, semantic characterisation of black-swan events, i.e., events for which we have limited to no training data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
470,681
2207.06333
6D Camera Relocalization in Visually Ambiguous Extreme Environments
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains. Data acquired under these challenging conditions are corrupted by textureless surfaces, image degradation, and presence of repetitive and highly ambiguous structures. When naively deployed, the state-of-the-art methods can fail in those scenarios as confirmed by our empirical analysis. In this paper, we attempt to make camera relocalization work in these extreme situations. To this end, we propose: (i) a hierarchical localization system, where we leverage temporal information and (ii) a novel environment-aware image enhancement method to boost the robustness and accuracy. Our extensive experimental results demonstrate superior performance in favor of our method under two extreme settings: localizing an autonomous underwater vehicle and localizing a planetary rover in a Mars-like desert. In addition, our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
307,846
2210.10114
Transferable Unlearnable Examples
With more people publishing their personal data online, unauthorized data usage has become a serious concern. Unlearnable strategies have been introduced to prevent third parties from training on the data without permission. They add perturbations to the users' data before publishing, aiming to invalidate models trained on the perturbed published dataset. These perturbations are generated for a specific training setting and a target dataset. However, their unlearnable effects significantly decrease when used in other training settings and datasets. To tackle this issue, we propose a novel unlearnable strategy based on Classwise Separability Discriminant (CSD), which aims to better transfer the unlearnable effects to other training settings and datasets by enhancing the linear separability. Extensive experiments demonstrate the transferability of the proposed unlearnable examples across training settings and datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
324,790
1312.3913
Blowfish Privacy: Tuning Privacy-Utility Trade-offs using Policies
Privacy definitions provide ways for trading-off the privacy of individuals in a statistical database for the utility of downstream analysis of the data. In this paper, we present Blowfish, a class of privacy definitions inspired by the Pufferfish framework, that provides a rich interface for this trade-off. In particular, we allow data publishers to extend differential privacy using a policy, which specifies (a) secrets, or information that must be kept secret, and (b) constraints that may be known about the data. While the secret specification allows increased utility by lessening protection for certain individual properties, the constraint specification provides added protection against an adversary who knows correlations in the data (arising from constraints). We formalize policies and present novel algorithms that can handle general specifications of sensitive information and certain count constraints. We show that there are reasonable policies under which our privacy mechanisms for k-means clustering, histograms and range queries introduce significantly lesser noise than their differentially private counterparts. We quantify the privacy-utility trade-offs for various policies analytically and empirically on real datasets.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
29,080