Dataset schema:
id: string (lengths 9 to 16)
title: string (lengths 4 to 278)
abstract: string (lengths 3 to 4.08k)
Label columns, each bool with 2 classes: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
__index_level_0__: int64 (0 to 541k)

Each sample record below lists the paper id, title, and abstract, followed by the label columns set to true ("Labels:") and its __index_level_0__ value.
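A minimal Python sketch of how a record with this schema could be reduced to its active label set, assuming each record is available as a plain dict keyed by the column names above (the loading step is not specified here); the example values are taken from the first record below.

# Boolean label columns, in the order given by the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(record):
    """Return the names of the label columns whose flag is True for this record."""
    return [name for name in LABEL_COLUMNS if record.get(name, False)]

# Example record (values taken from the first sample below; abstract shortened,
# label flags that are absent from the dict are treated as False).
example = {
    "id": "2204.13154",
    "title": "Attention Mechanism in Neural Networks: Where it Comes and Where it Goes",
    "abstract": "A long time ago in the machine learning literature, ...",
    "cs.LG": True,
    "__index_level_0__": 293713,
}

print(active_labels(example))  # -> ['cs.LG']
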
2204.13154
Attention Mechanism in Neural Networks: Where it Comes and Where it Goes
A long time ago in the machine learning literature, the idea of incorporating a mechanism inspired by the human visual system into neural networks was introduced. This idea is named the attention mechanism, and it has gone through a long development period. Today, many works have been devoted to this idea in a variety of tasks, and remarkable performance has recently been demonstrated. The goal of this paper is to provide an overview from the early work on searching for ways to implement the attention idea with neural networks up to the recent trends. This review emphasizes the important milestones of this progress across different tasks. In this way, this study aims to provide a road map for researchers to explore the current developments and get inspired toward novel approaches beyond attention.
Labels: cs.LG
__index_level_0__: 293,713

1408.4102
Estimation of Monotone Treatment Effects in Network Experiments
Randomized experiments on social networks pose statistical challenges, due to the possibility of interference between units. We propose new methods for estimating attributable treatment effects in such settings. The methods do not require partial interference, but instead require an identifying assumption that is similar to requiring nonnegative treatment effects. Network or spatial information can be used to customize the test statistic; in principle, this can increase power without making assumptions on the data generating process.
Labels: cs.SI
__index_level_0__: 35,440

2202.01627
Non-Vacuous Generalisation Bounds for Shallow Neural Networks
We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $L_2$-normalised data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian Error Linear Unit (GELU) activation. For these networks, we derive new generalisation bounds through the PAC-Bayesian theory; unlike most existing such bounds they apply to neural networks with deterministic rather than randomised parameters. Our bounds are empirically non-vacuous when the network is trained with vanilla stochastic gradient descent on MNIST and Fashion-MNIST.
Labels: cs.LG
__index_level_0__: 278,533

1110.4414
(1+eps)-approximate Sparse Recovery
The problem central to sparse recovery and compressive sensing is that of stable sparse recovery: we want a distribution of matrices A in R^{m\times n} such that, for any x \in R^n and with probability at least 2/3 over A, there is an algorithm to recover x* from Ax with ||x* - x||_p <= C min_{k-sparse x'} ||x - x'||_p for some constant C > 1 and norm p. The measurement complexity of this problem is well understood for constant C > 1. However, in a variety of applications it is important to obtain C = 1 + eps for a small eps > 0, and this complexity is not well understood. We resolve the dependence on eps in the number of measurements required of a k-sparse recovery algorithm, up to polylogarithmic factors for the central cases of p = 1 and p = 2. Namely, we give new algorithms and lower bounds that show the number of measurements required is (1/eps^{p/2})k polylog(n). For p = 2, our bound of (1/eps) k log(n/k) is tight up to constant factors. We also give matching bounds when the output is required to be k-sparse, in which case we achieve (1/eps^p) k polylog(n). This shows the distinction between the complexity of sparse and non-sparse outputs is fundamental.
Labels: cs.IT, Other
__index_level_0__: 12,720

2107.10599
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks
In this paper, we study the adversarial examples existence and adversarial training from the standpoint of convergence and provide evidence that pointwise convergence in ANNs can explain these observations. The main contribution of our proposal is that it relates the objective of the evasion attacks and adversarial training with concepts already defined in learning theory. Also, we extend and unify some of the other proposals in the literature and provide alternative explanations on the observations made in those proposals. Through different experiments, we demonstrate that the framework is valuable in the study of the phenomenon and is applicable to real-world problems.
Labels: cs.LG, cs.CR
__index_level_0__: 247,341

2208.14706
Transfering Low-Frequency Features for Domain Adaptation
Previous unsupervised domain adaptation methods did not handle the cross-domain problem from the perspective of frequency for computer vision. The images or feature maps of different domains can be decomposed into the low-frequency component and high-frequency component. This paper proposes the assumption that low-frequency information is more domain-invariant while the high-frequency information contains domain-related information. Hence, we introduce an approach, named low-frequency module (LFM), to extract domain-invariant feature representations. The LFM is constructed with the digital Gaussian low-pass filter. Our method is easy to implement and introduces no extra hyperparameter. We design two effective ways to utilize the LFM for domain adaptation, and our method is complementary to other existing methods and formulated as a plug-and-play unit that can be combined with these methods. Experimental results demonstrate that our LFM outperforms state-of-the-art methods for various computer vision tasks, including image classification and object detection.
Labels: cs.CV
__index_level_0__: 315,402

2208.05040
Economics of Semantic Communication System: An Auction Approach
Semantic communication technologies enable wireless edge devices to communicate effectively by transmitting the semantic meaning of data. Edge components, such as vehicles in next-generation intelligent transport systems, use well-trained semantic models to encode and decode semantic information extracted from raw and sensor data. However, the limitation in computing resources makes it difficult to support the training process of accurate semantic models on edge devices. As such, edge devices can buy the pretrained semantic models from semantic model providers, which is called "semantic model trading". Upon collecting semantic information with the semantic models, the edge devices can then sell the extracted semantic information, e.g., information about urban road conditions or traffic signs, to the interested buyers for profit, which is called "semantic information trading". To facilitate both types of trades, effective incentive mechanisms should be designed. Thus, in this paper, we propose a hierarchical trading system to support both semantic model trading and semantic information trading jointly. The proposed incentive mechanism helps to maximize the revenue of semantic model providers in the semantic model trading, and effectively incentivizes model providers to participate in the development of semantic communication systems. For semantic information trading, our designed auction approach can support the trading between multiple semantic information sellers and buyers, while ensuring individual rationality, incentive compatibility, and budget balance, and moreover, allowing them to achieve higher utilities than the baseline method.
Labels: cs.AI, Other
__index_level_0__: 312,296

1703.04727
Tracking Gaze and Visual Focus of Attention of People Involved in Social Interaction
The visual focus of attention (VFOA) has been recognized as a prominent conversational cue. We are interested in estimating and tracking the VFOAs associated with multi-party social interactions. We note that in this type of situations the participants either look at each other or at an object of interest; therefore their eyes are not always visible. Consequently both gaze and VFOA estimation cannot be based on eye detection and tracking. We propose a method that exploits the correlation between eye gaze and head movements. Both VFOA and gaze are modeled as latent variables in a Bayesian switching state-space model. The proposed formulation leads to a tractable learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. The method is tested and benchmarked using two publicly available datasets that contain typical multi-party human-robot and human-human interactions.
Labels: cs.CV
__index_level_0__: 69,940

2501.00199
GPT-4 on Clinic Depression Assessment: An LLM-Based Pilot Study
Depression has impacted millions of people worldwide and has become one of the most prevalent mental disorders. Early mental disorder detection can lead to cost savings for public health agencies and avoid the onset of other major comorbidities. Additionally, the shortage of specialized personnel is a critical issue because clinical depression diagnosis is highly dependent on expert professionals and is time consuming. In this study, we explore the use of GPT-4 for clinical depression assessment based on transcript analysis. We examine the model's ability to classify patient interviews into binary categories: depressed and not depressed. A comparative analysis is conducted considering prompt complexity (e.g., using both simple and complex prompts) as well as varied temperature settings to assess the impact of prompt complexity and randomness on the model's performance. Results indicate that GPT-4 exhibits considerable variability in accuracy and F1-Score across configurations, with optimal performance observed at lower temperature values (0.0-0.2) for complex prompts. However, beyond a certain threshold (temperature >= 0.3), the relationship between randomness and performance becomes unpredictable, diminishing the gains from prompt complexity. These findings suggest that, while GPT-4 shows promise for clinical assessment, the configuration of the prompts and model parameters requires careful calibration to ensure consistent results. This preliminary study contributes to understanding the dynamics between prompt engineering and large language models, offering insights for future development of AI-powered tools in clinical settings.
Labels: cs.AI, cs.CL
__index_level_0__: 521,570

2407.20139
Emission Reduction in Urban Environments by Replacing Conventional City Buses with Electric Bus Technology: A Case Study of Pakistan
The global transportation industry has become one of the main contributors to air pollution. Consequently, electric buses and green transportation are gaining popularity as crucial steps to reduce emission concerns. Many developed countries have already adopted the concept of Battery Electric Buses (BEBs), while the developing ones are just starting with it. However, BEB fleets have advantages, such as lower fuel, higher efficiency, lower maintenance, and energy security. Yet, several obstacles must be overcome to support the mass deployment of BEBs. These incorporate forthright expense charges, arranging loads, BEB reach, and newness to BEB innovation. Stakeholders like policymakers, private company owners, and government leaders have a lot to consider before introducing BEBs at any level in Pakistan. As a result, to operate an electric bus system profitably, it is crucial to develop a proper electric bus network and fleet, especially for bus operators who need to buy enough electric buses at the appropriate time. As a result, this paper aims to investigate if operating an electric bus could be an alternative to regular bus operations. The proposed methodology develops modeling software to cater to various scenarios to determine a proper-designed electric bus operating system in terms of the electric bus route, service frequency, and quantity. This research work simulates and financially analyses an operating Public Transport Infrastructure with a proposed Green Solution. The results show that regardless of the high upfront costs of BEB infrastructure, it becomes profitable in 6-7 years, resulting in a decreased Total Cost of Ownership (TCO) of approximately 30% of its counterpart. The study also provides a clear policy pathway to help stakeholders make informed decisions related to the electrification of public transport in Pakistan.
Labels: cs.SY
__index_level_0__: 477,057

2409.18408
Query matching for spatio-temporal action detection with query-based object detector
In this paper, we propose a method that extends the query-based object detection model, DETR, to spatio-temporal action detection, which requires maintaining temporal consistency in videos. Our proposed method applies DETR to each frame and uses feature shift to incorporate temporal information. However, DETR's object queries in each frame may correspond to different objects, making a simple feature shift ineffective. To overcome this issue, we propose query matching across different frames, ensuring that queries for the same object are matched and used for the feature shift. Experimental results show that performance on the JHMDB21 dataset improves significantly when query features are shifted using the proposed query matching.
Labels: cs.CV
__index_level_0__: 492,240

2107.07397
Level generation and style enhancement -- deep learning for game development overview
We present practical approaches of using deep learning to create and enhance level maps and textures for video games -- desktop, mobile, and web. We aim to present new possibilities for game developers and level artists. The task of designing levels and filling them with details is challenging. It is both time-consuming and takes effort to make levels rich, complex, and with a feeling of being natural. Fortunately, recent progress in deep learning provides new tools to accompany level designers and visual artists. Moreover, they offer a way to generate infinite worlds for game replayability and adjust educational games to players' needs. We present seven approaches to create level maps, each using statistical methods, machine learning, or deep learning. In particular, we include: - Generative Adversarial Networks for creating new images from existing examples (e.g. ProGAN). - Super-resolution techniques for upscaling images while preserving crisp detail (e.g. ESRGAN). - Neural style transfer for changing visual themes. - Image translation - turning semantic maps into images (e.g. GauGAN). - Semantic segmentation for turning images into semantic masks (e.g. U-Net). - Unsupervised semantic segmentation for extracting semantic features (e.g. Tile2Vec). - Texture synthesis - creating large patterns based on a smaller sample (e.g. InGAN).
Labels: cs.CV
__index_level_0__: 246,405

2202.05843
Fast Model-based Policy Search for Universal Policy Networks
Adapting an agent's behaviour to new environments has been one of the primary focus areas of physics based reinforcement learning. Although recent approaches such as universal policy networks partially address this issue by enabling the storage of multiple policies trained in simulation on a wide range of dynamic/latent factors, efficiently identifying the most appropriate policy for a given environment remains a challenge. In this work, we propose a Gaussian Process-based prior learned in simulation, that captures the likely performance of a policy when transferred to a previously unseen environment. We integrate this prior with a Bayesian Optimisation-based policy search process to improve the efficiency of identifying the most appropriate policy from the universal policy network. We empirically evaluate our approach in a range of continuous and discrete control environments, and show that it outperforms other competing baselines.
Labels: cs.AI, cs.LG, cs.SY
__index_level_0__: 280,011

2105.01238
Supervised multi-specialist topic model with applications on large-scale electronic health record data
Motivation: Electronic health record (EHR) data provides a new venue to elucidate disease comorbidities and latent phenotypes for precision medicine. To fully exploit its potential, a realistic data generative process of the EHR data needs to be modelled. We present MixEHR-S to jointly infer specialist-disease topics from the EHR data. As the key contribution, we model the specialist assignments and ICD-coded diagnoses as the latent topics based on the patient's underlying disease topic mixture in a novel unified supervised hierarchical Bayesian topic model. For efficient inference, we developed a closed-form collapsed variational inference algorithm to learn the model distributions of MixEHR-S. We applied MixEHR-S to two independent large-scale EHR databases in Quebec with three targeted applications: (1) Congenital Heart Disease (CHD) diagnostic prediction among 154,775 patients; (2) Chronic obstructive pulmonary disease (COPD) diagnostic prediction among 73,791 patients; (3) future insulin treatment prediction among 78,712 patients diagnosed with diabetes as a means to assess disease exacerbation. In all three applications, MixEHR-S conferred clinically meaningful latent topics among the most predictive latent topics and achieved superior target prediction accuracy compared to the existing methods, providing opportunities for prioritizing high-risk patients for healthcare services. MixEHR-S source code and scripts of the experiments are freely available at https://github.com/li-lab-mcgill/mixehrS
Labels: cs.LG
__index_level_0__: 233,465

2101.09986
Multi-view Integration Learning for Irregularly-sampled Clinical Time Series
Electronic health record (EHR) data is sparse and irregular as it is recorded at irregular time intervals, and different clinical variables are measured at each observation point. In this work, we propose a multi-view features integration learning from irregular multivariate time series data by self-attention mechanism in an imputation-free manner. Specifically, we devise a novel multi-integration attention module (MIAM) to extract complex information inherent in irregular time series data. In particular, we explicitly learn the relationships among the observed values, missing indicators, and time interval between the consecutive observations, simultaneously. The rationale behind our approach is the use of human knowledge such as what to measure and when to measure in different situations, which are indirectly represented in the data. In addition, we build an attention-based decoder as a missing value imputer that helps empower the representation learning of the inter-relations among multi-view observations for the prediction task, which operates at the training phase only. We validated the effectiveness of our method over the public MIMIC-III and PhysioNet challenge 2012 datasets by comparing with and outperforming the state-of-the-art methods for in-hospital mortality prediction.
Labels: cs.AI, cs.LG
__index_level_0__: 216,781

2403.16170
Voltage Regulation in Polymer Electrolyte Fuel Cell Systems Using Gaussian Process Model Predictive Control
This study introduces a novel approach utilizing Gaussian process model predictive control (MPC) to stabilize the output voltage of a polymer electrolyte fuel cell (PEFC) system by simultaneously regulating hydrogen and airflow rates. Two Gaussian process models are developed to capture PEFC dynamics, taking into account constraints including hydrogen pressure and input change rates, thereby aiding in mitigating errors inherent to PEFC predictive control. The dynamic performance of the physical model and Gaussian process MPC in constraint handling and system inputs is compared and analyzed. Simulation outcomes demonstrate that the proposed Gaussian process MPC effectively maintains the voltage at the target 48 V while adhering to safety constraints, even amidst workload disturbances ranging from 110-120 A. In comparison to traditional MPC using detailed system models, Gaussian process MPC exhibits a 43\% higher overshoot and 25\% slower response time. Nonetheless, it offers the advantage of not requiring the underlying true system model and needing less system information.
Labels: cs.SY
__index_level_0__: 440,908

1512.07980
Diversity Enhancement for Micro-Differential Evolution
The differential evolution (DE) algorithm suffers from high computational time due to the slow nature of evaluation. In contrast, micro-DE (MDE) algorithms employ a very small population size, which can converge faster to a reasonable solution. However, these algorithms are vulnerable to premature convergence as well as to a high risk of stagnation. In this paper, an MDE algorithm with a vectorized random mutation factor (MDEVM) is proposed, which utilizes the benefit of a small population size while empowering the exploration ability of the mutation factor by randomizing it at the decision variable level. The idea is supported by analyzing the mutation factor using Monte-Carlo based simulations. To facilitate the usage of MDE algorithms with very small population sizes, new mutation schemes for population sizes less than four are also proposed. Furthermore, comprehensive comparative simulations and analysis of the performance of the MDE algorithms over various mutation schemes, population sizes, problem types (i.e. uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges are conducted by considering population diversity analysis for stagnation and trapping in local optimum situations. The studies are conducted on 28 benchmark functions provided for the IEEE CEC-2013 competition. Experimental results demonstrate the high performance and convergence speed of the proposed MDEVM algorithm.
Labels: cs.NE
__index_level_0__: 50,473

2402.04812
Aspect-Based Sentiment Analysis for Open-Ended HR Survey Responses
Understanding preferences, opinions, and sentiment of the workforce is paramount for effective employee lifecycle management. Open-ended survey responses serve as a valuable source of information. This paper proposes a machine learning approach for aspect-based sentiment analysis (ABSA) of Dutch open-ended responses in employee satisfaction surveys. Our approach aims to overcome the inherent noise and variability in these responses, enabling a comprehensive analysis of sentiments that can support employee lifecycle management. Through response clustering we identify six key aspects (salary, schedule, contact, communication, personal attention, agreements), which we validate by domain experts. We compile a dataset of 1,458 Dutch survey responses, revealing label imbalance in aspects and sentiments. We propose few-shot approaches for ABSA based on Dutch BERT models, and compare them against bag-of-words and zero-shot baselines. Our work significantly contributes to the field of ABSA by demonstrating the first successful application of Dutch pre-trained language models to aspect-based sentiment analysis in the domain of human resources (HR).
Labels: cs.CL
__index_level_0__: 427,595

2401.13796
Don't Push the Button! Exploring Data Leakage Risks in Machine Learning and Transfer Learning
Machine Learning (ML) has revolutionized various domains, offering predictive capabilities in several areas. However, with the increasing accessibility of ML tools, many practitioners, lacking deep ML expertise, adopt a "push the button" approach, utilizing user-friendly interfaces without a thorough understanding of underlying algorithms. While this approach provides convenience, it raises concerns about the reliability of outcomes, leading to challenges such as incorrect performance evaluation. This paper addresses a critical issue in ML, known as data leakage, where unintended information contaminates the training data, impacting model performance evaluation. Users, due to a lack of understanding, may inadvertently overlook crucial steps, leading to optimistic performance estimates that may not hold in real-world scenarios. The discrepancy between evaluated and actual performance on new data is a significant concern. In particular, this paper categorizes data leakage in ML, discussing how certain conditions can propagate through the ML workflow. Furthermore, it explores the connection between data leakage and the specific task being addressed, investigates its occurrence in Transfer Learning, and compares standard inductive ML with transductive ML frameworks. The conclusion summarizes key findings, emphasizing the importance of addressing data leakage for robust and reliable ML applications.
Labels: cs.AI, cs.LG
__index_level_0__: 423,854

2302.14406
Instruction Clarification Requests in Multimodal Collaborative Dialogue Games: Tasks, and an Analysis of the CoDraw Dataset
In visual instruction-following dialogue games, players can engage in repair mechanisms in face of an ambiguous or underspecified instruction that cannot be fully mapped to actions in the world. In this work, we annotate Instruction Clarification Requests (iCRs) in CoDraw, an existing dataset of interactions in a multimodal collaborative dialogue game. We show that it contains lexically and semantically diverse iCRs being produced self-motivatedly by players deciding to clarify in order to solve the task successfully. With 8.8k iCRs found in 9.9k dialogues, CoDraw-iCR (v1) is a large spontaneous iCR corpus, making it a valuable resource for data-driven research on clarification in dialogue. We then formalise and provide baseline models for two tasks: Determining when to make an iCR and how to recognise them, in order to investigate to what extent these tasks are learnable from data.
Labels: cs.CL
__index_level_0__: 348,282

1810.02866
Artificial Intelligence Assisted Power Grid Hardening in Response to Extreme Weather Events
In this paper, an artificial intelligence based grid hardening model is proposed with the objective of improving power grid resilience in response to extreme weather events. At first, a machine learning model is proposed to predict the component states (either operational or outage) in response to the extreme event. Then, these predictions are fed into a hardening model, which determines strategic locations for placement of distributed generation (DG) units. In contrast to existing literature in hardening and resilience enhancement, this paper co-optimizes grid economic and resilience objectives by considering the intricate dependencies of the two. The numerical simulations on the standard IEEE 118-bus test system illustrate the merits and applicability of the proposed hardening model. The results indicate that the proposed hardening model through decentralized and distributed local energy resources can produce a more robust solution that can protect the system significantly against multiple component outages due to an extreme event.
Labels: cs.LG, cs.SY
__index_level_0__: 109,676

2205.13163
Cost-efficient Gaussian Tensor Network Embeddings for Tensor-structured Inputs
This work discusses tensor network embeddings, which are random matrices ($S$) with tensor network structure. These embeddings have been used to perform dimensionality reduction of tensor network structured inputs $x$ and accelerate applications such as tensor decomposition and kernel regression. Existing works have designed embeddings for inputs $x$ with specific structures, such that the computational cost for calculating $Sx$ is efficient. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors, such that for inputs with more general tensor network structures, both the sketch size (row size of $S$) and the sketching computational cost are low. We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition to quantify the accuracy of such embeddings and derive sketching asymptotic cost lower bounds using embeddings that satisfy this condition and have a sketch size lower than any input dimension. We then provide an algorithm to efficiently sketch input data using such embeddings. The sketch size of the embedding used in the algorithm has a linear dependence on the number of sketching dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, this algorithm achieves asymptotic cost within a factor of $O(\sqrt{m})$ of our cost lower bound, where $m$ is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, this algorithm yields the optimal sketching asymptotic cost. We apply our sketching analysis to inexact tensor decomposition optimization algorithms. We provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and show optimality of an existing algorithm for tensor train rounding.
Labels: cs.LG, Other
__index_level_0__: 298,833

1601.05353
On the complexity of bounded time and precision reachability for piecewise affine systems
Reachability for piecewise affine systems is known to be undecidable, starting from dimension $2$. In this paper we investigate the exact complexity of several decidable variants of reachability and control questions for piecewise affine systems. We show in particular that the region-to-region bounded-time versions lead to $NP$-complete or co-$NP$-complete problems, starting from dimension $2$. We also prove that a bounded precision version leads to $PSPACE$-complete problems.
Labels: cs.SY, Other
__index_level_0__: 51,116

2006.02708
Auto-Rectify Network for Unsupervised Indoor Depth Estimation
Single-View depth estimation using the CNNs trained from unlabelled videos has shown significant promise. However, excellent results have mostly been obtained in street-scene driving scenarios, and such methods often fail in other settings, particularly indoor videos taken by handheld devices. In this work, we establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth. Our fundamental analysis suggests that the rotation behaves as noise during training, as opposed to the translation (baseline) which provides supervision signals. To address the challenge, we propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning. The significantly improved performance validates our motivation. Towards end-to-end learning without requiring pre-processing, we propose an Auto-Rectify Network with novel loss functions, which can automatically learn to rectify images during training. Consequently, our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset. We also demonstrate the generalization of our trained model in ScanNet and Make3D, and the universality of our proposed learning method on 7-Scenes and KITTI datasets.
Labels: cs.CV
__index_level_0__: 180,121

1403.7948
Structure of conflict graphs in constraint alignment problems and algorithms
We consider the constrained graph alignment problem which has applications in biological network analysis. Given two input graphs $G_1=(V_1,E_1), G_2=(V_2,E_2)$, a pair of vertex mappings induces an {\it edge conservation} if the vertex pairs are adjacent in their respective graphs. The goal is to provide a one-to-one mapping between the vertices of the input graphs in order to maximize edge conservation. However, the allowed mappings are restricted since each vertex from $V_1$ (resp. $V_2$) is allowed to be mapped to at most $m_1$ (resp. $m_2$) specified vertices in $V_2$ (resp. $V_1$). Most of the results in this paper deal with the case $m_2=1$, which has attracted the most attention in the related literature. We formulate the problem as a maximum independent set problem in a related {\em conflict graph} and investigate structural properties of this graph in terms of forbidden subgraphs. We are interested, in particular, in excluding certain wheels, fans, cliques or claws (all terms are defined in the paper), which corresponds to excluding certain cycles, paths, cliques or independent sets in the neighborhood of each vertex. Then, we investigate algorithmic consequences of some of these properties, which illustrates the potential of this approach and raises new horizons for further work. In particular this approach allows us to reinterpret a known polynomial case in terms of the conflict graph and to improve known approximation and fixed-parameter tractability results through efficiently solving the maximum independent set problem in conflict graphs. Some of our new approximation results involve approximation ratios that are a function of the optimal value, in particular its square root; this kind of result cannot be achieved for maximum independent set in general graphs.
Labels: cs.CE, Other
__index_level_0__: 31,954

1907.06817
Energy-efficient Alternating Iterative Secure Structure of Maximizing Secrecy Rate for Directional Modulation Networks
In a directional modulation (DM) network, the issues of security and privacy have taken on an increasingly important role. Since the power allocation of the confidential message and artificial noise has a constructive effect on the system performance, it is important to jointly consider the relationship between the beamforming vectors and the power allocation (PA) factors. To maximize the secrecy rate (SR), an alternating iterative structure (AIS) between the beamforming and PA is proposed. With only two or three iterations, it can rapidly converge to its rate ceiling. Simulation results indicate that the SR performance of the proposed AIS is much better than the null-space projection (NSP) based PA strategy in the medium and large signal-to-noise ratio (SNR) regions, especially when the number of antennas at the DM transmitter is small.
Labels: cs.IT
__index_level_0__: 138,710

2408.01928
A Semi-supervised Multi-channel Graph Convolutional Network for Query Classification in E-commerce
Query intent classification is an essential module for customers to find desired products on the e-commerce application quickly. Most existing query intent classification methods rely on the users' click behavior as a supervised signal to construct training samples. However, these methods based entirely on posterior labels may lead to serious category imbalance problems because of the Matthew effect in click samples. Compared with popular categories, it is difficult for products under long-tail categories to obtain traffic and user clicks, which makes the models unable to detect users' intent for products under long-tail categories. This in turn aggravates the problem that long-tail categories cannot obtain traffic, forming a vicious circle. In addition, due to the randomness of the user's click, the posterior label is unstable for the query with similar semantics, which makes the model very sensitive to the input, leading to an unstable and incomplete recall of categories. In this paper, we propose a novel Semi-supervised Multi-channel Graph Convolutional Network (SMGCN) to address the above problems from the perspective of label association and semi-supervised learning. SMGCN extends category information and enhances the posterior label by utilizing the similarity score between the query and categories. Furthermore, it leverages the co-occurrence and semantic similarity graph of categories to strengthen the relations among labels and weaken the influence of posterior label instability. We conduct extensive offline and online A/B experiments, and the experimental results show that SMGCN significantly outperforms the strong baselines, which shows its effectiveness and practicality.
Labels: cs.AI, cs.IR, cs.CL
__index_level_0__: 478,422

1711.01345
Computationally efficient cardiac views projection using 3D Convolutional Neural Networks
4D Flow is an MRI sequence which allows acquisition of 3D images of the heart. The data is typically acquired volumetrically, so it must be reformatted to generate cardiac long axis and short axis views for diagnostic interpretation. These views may be generated by placing 6 landmarks: the left and right ventricle apex, and the aortic, mitral, pulmonary, and tricuspid valves. In this paper, we propose an automatic method to localize landmarks in order to compute the cardiac views. Our approach consists of first calculating a bounding box that tightly crops the heart, followed by a landmark localization step within this bounded region. Both steps are based on a 3D extension of the recently introduced ENet. We demonstrate that the long and short axis projections computed with our automated method are of equivalent quality to projections created with landmarks placed by an experienced cardiac radiologist, based on a blinded test administered to a different cardiac radiologist.
Labels: cs.CV
__index_level_0__: 83,864

2206.09032
Conjunctive Queries with Free Access Patterns under Updates
We study the problem of answering conjunctive queries with free access patterns (CQAPs) under updates. A free access pattern is a partition of the free variables of the query into input and output. The query returns tuples over the output variables given a tuple of values over the input variables. We introduce a fully dynamic evaluation approach that works for all CQAPs and is optimal for two classes of CQAPs. This approach recovers prior work on the dynamic evaluation of conjunctive queries without access patterns. We first give a syntactic characterisation of all CQAPs that admit constant time per single-tuple update and whose output tuples can be enumerated with constant delay given a tuple of values over the input variables. We further chart the complexity trade-off between the preprocessing time, update time and enumeration delay for a class of CQAPs. For some of these CQAPs, our approach achieves optimal, albeit non-constant, update time and delay. This optimality is predicated on the Online Matrix-Vector Multiplication conjecture. We finally adapt our approach to the dynamic evaluation of tractable CQAPs over probabilistic databases under updates.
Labels: cs.DB
__index_level_0__: 303,402

2412.06777
Driv3R: Learning Dense 4D Reconstruction for Autonomous Driving
Realtime 4D reconstruction for dynamic scenes remains a crucial challenge for autonomous driving perception. Most existing methods rely on depth estimation through self-supervision or multi-modality sensor fusion. In this paper, we propose Driv3R, a DUSt3R-based framework that directly regresses per-frame point maps from multi-view image sequences. To achieve streaming dense reconstruction, we maintain a memory pool to reason both spatial relationships across sensors and dynamic temporal contexts to enhance multi-view 3D consistency and temporal integration. Furthermore, we employ a 4D flow predictor to identify moving objects within the scene to direct our network focus more on reconstructing these dynamic regions. Finally, we align all per-frame pointmaps consistently to the world coordinate system in an optimization-free manner. We conduct extensive experiments on the large-scale nuScenes dataset to evaluate the effectiveness of our method. Driv3R outperforms previous frameworks in 4D dynamic scene reconstruction, achieving 15x faster inference speed compared to methods requiring global alignment. Code: https://github.com/Barrybarry-Smith/Driv3R.
Labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 515,370

1908.03142
The Hitchhiker's Guide to LDA
The Latent Dirichlet Allocation (LDA) model is a well-known model in the topic modeling field; it has been studied for years due to its extensive application value in industry and academia. However, the mathematical derivation of the LDA model is challenging, which makes it difficult for beginners to learn. To help beginners learn LDA, this book analyzes the mathematical derivation of LDA in detail, and it also introduces all the background knowledge to make it easy for beginners to understand. Thus, this book contains the author's unique insights. It should be noted that this book is written in Chinese.
Labels: cs.IR, cs.LG, cs.CL
__index_level_0__: 141,169

2310.17168
Learning an Inventory Control Policy with General Inventory Arrival Dynamics
In this paper we address the problem of learning and backtesting inventory control policies in the presence of general arrival dynamics -- which we term as a quantity-over-time arrivals model (QOT). We also allow for order quantities to be modified as a post-processing step to meet vendor constraints such as order minimum and batch size constraints -- a common practice in real supply chains. To the best of our knowledge this is the first work to handle either arbitrary arrival dynamics or an arbitrary downstream post-processing of order quantities. Building upon recent work (Madeka et al., 2022) we similarly formulate the periodic review inventory control problem as an exogenous decision process, where most of the state is outside the control of the agent. Madeka et al., 2022 show how to construct a simulator that replays historic data to solve this class of problem. In our case, we incorporate a deep generative model for the arrivals process as part of the history replay. By formulating the problem as an exogenous decision process, we can apply results from Madeka et al., 2022 to obtain a reduction to supervised learning. Via simulation studies we show that this approach yields statistically significant improvements in profitability over production baselines. Using data from a real-world A/B test, we show that Gen-QOT generalizes well to off-policy data and that the resulting buying policy outperforms traditional inventory management systems in real world settings.
Labels: cs.LG
__index_level_0__: 403,026

2410.04690
SegINR: Segment-wise Implicit Neural Representation for Sequence Alignment in Neural Text-to-Speech
We present SegINR, a novel approach to neural Text-to-Speech (TTS) that addresses sequence alignment without relying on an auxiliary duration predictor and complex autoregressive (AR) or non-autoregressive (NAR) frame-level sequence modeling. SegINR simplifies the process by converting text sequences directly into frame-level features. It leverages an optimal text encoder to extract embeddings, transforming each into a segment of frame-level features using a conditional implicit neural representation (INR). This method, named segment-wise INR (SegINR), models temporal dynamics within each segment and autonomously defines segment boundaries, reducing computational costs. We integrate SegINR into a two-stage TTS framework, using it for semantic token prediction. Our experiments in zero-shot adaptive TTS scenarios demonstrate that SegINR outperforms conventional methods in speech quality with computational efficiency.
Labels: cs.LG
__index_level_0__: 495,398

2103.14215
The Complete Affine Automorphism Group of Polar Codes
Recently, a permutation-based successive cancellation (PSC) decoding framework for polar codes has attracted much attention. It decodes several permuted codewords with independent successive cancellation (SC) decoders. Its latency can thus be reduced to that of SC decoding. However, the PSC framework is ineffective for permutations falling into the lower-triangular affine (LTA) automorphism group, as they are invariant under SC decoding. As such, a larger block lower-triangular affine (BLTA) group that contains SC-variant permutations was discovered for decreasing polar codes. But it was unknown whether BLTA equals the complete automorphism group. In this paper, we prove that BLTA equals the complete automorphism group of decreasing polar codes that can be formulated as affine transformations.
Labels: cs.IT
__index_level_0__: 226,770

2104.10116
Detection of Audio-Video Synchronization Errors Via Event Detection
We present a new method and a large-scale database to detect audio-video synchronization (A/V sync) errors in tennis videos. A deep network is trained to detect the visual signature of the tennis ball being hit by the racquet in the video stream. Another deep network is trained to detect the auditory signature of the same event in the audio stream. During evaluation, the audio stream is searched by the audio network for the audio event of the ball being hit. If the event is found in audio, the neighboring interval in video is searched for the corresponding visual signature. If the event is not found in the video stream but is found in the audio stream, an A/V sync error is flagged. We developed a large-scale database of 504,300 frames from 6 hours of videos of tennis events, simulated A/V sync errors, and found our method achieves high accuracy on the task.
Labels: cs.SD, cs.CV, Other
__index_level_0__: 231,469

1106.4064
Algorithmic Programming Language Identification
Motivated by the amount of code that goes unidentified on the web, we introduce a practical method for algorithmically identifying the programming language of source code. Our work is based on supervised learning and intelligent statistical features. We also explored, but abandoned, a grammatical approach. In testing, our implementation greatly outperforms that of an existing tool that relies on a Bayesian classifier. Code is written in Python and available under an MIT license.
Labels: cs.LG
__index_level_0__: 10,925

1909.04802
Variable Rate Deep Image Compression With a Conditional Autoencoder
In this paper, we propose a novel variable-rate learned image compression framework with a conditional autoencoder. Previous learning-based image compression methods mostly require training separate networks for different compression rates so they can yield compressed images of varying quality. In contrast, we train and deploy only one variable-rate image compression network implemented with a conditional autoencoder. We provide two rate control parameters, i.e., the Lagrange multiplier and the quantization bin size, which are given as conditioning variables to the network. Coarse rate adaptation to a target is performed by changing the Lagrange multiplier, while the rate can be further fine-tuned by adjusting the bin size used in quantizing the encoded representation. Our experimental results show that the proposed scheme provides a better rate-distortion trade-off than the traditional variable-rate image compression codecs such as JPEG2000 and BPG. Our model also shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.
Labels: cs.CV
__index_level_0__: 144,901

2408.17344
rerankers: A Lightweight Python Library to Unify Ranking Methods
This paper presents rerankers, a Python library which provides an easy-to-use interface to the most commonly used re-ranking approaches. Re-ranking is an integral component of many retrieval pipelines; however, there exist numerous approaches to it, relying on different implementation methods. rerankers unifies these methods into a single user-friendly interface, allowing practitioners and researchers alike to explore different methods while only changing a single line of Python code. Moreover, rerankers ensures that its implementations are done with the fewest dependencies possible, and re-uses the original implementation whenever possible, guaranteeing that our simplified interface results in no performance degradation compared to more complex ones. The full source code and list of supported models are updated regularly and available at https://github.com/answerdotai/rerankers.
Labels: cs.AI, cs.IR
__index_level_0__: 484,669

2203.11075
Dense Siamese Network for Dense Unsupervised Learning
This paper presents Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks. It learns visual representations by maximizing the similarity between two views of one image with two types of consistency, i.e., pixel consistency and region consistency. Concretely, DenseSiam first maximizes the pixel level spatial consistency according to the exact location correspondence in the overlapped area. It also extracts a batch of region embeddings that correspond to some sub-regions in the overlapped area to be contrasted for region consistency. In contrast to previous methods that require negative pixel pairs, momentum encoders or heuristic masks, DenseSiam benefits from the simple Siamese network and optimizes the consistency of different granularities. It also proves that the simple location correspondence and interacted region embeddings are effective enough to learn the similarity. We apply DenseSiam on ImageNet and obtain competitive improvements on various downstream tasks. We also show that only with some extra task-specific losses, the simple framework can directly conduct dense prediction tasks. On an existing unsupervised semantic segmentation benchmark, it surpasses state-of-the-art segmentation methods by 2.1 mIoU with 28% training costs. Code and models are released at https://github.com/ZwwWayne/DenseSiam.
Labels: cs.AI, cs.CV
__index_level_0__: 286,787

2106.12894
InFlow: Robust outlier detection utilizing Normalizing Flows
Normalizing flows are prominent deep generative models that provide tractable probability distributions and efficient density estimation. However, they are well known to fail while detecting Out-of-Distribution (OOD) inputs as they directly encode the local features of the input representations in their latent space. In this paper, we solve this overconfidence issue of normalizing flows by demonstrating that flows, if extended by an attention mechanism, can reliably detect outliers including adversarial attacks. Our approach does not require outlier data for training and we showcase the efficiency of our method for OOD detection by reporting state-of-the-art performance in diverse experimental settings. Code available at https://github.com/ComputationalRadiationPhysics/InFlow .
Labels: cs.AI, cs.LG, cs.CR
__index_level_0__: 242,876

2402.01093
Need a Small Specialized Language Model? Plan Early!
Large language models are versatile tools but are not suitable for small inference budgets. Small models have more efficient inference, but their lower capacity means that their performance can be good only if one limits their scope to a specialized domain. This paper explores how to get good specialized small language models using a large, generic, pretraining set and a limited amount of specialized data. We consider two scenarios, depending on whether (i) one can afford pretraining a model for each specialization task, or (ii) one wants to cheaply adapt a single pretrained model for each task. In the first scenario, we propose an effective solution based on importance sampling: we resample the pretraining set to imitate the specialization data and train a small model on it. In the second scenario, we propose a novel architecture, projected networks (PN). PN is a large network whose parameters can be linearly projected into a small network for specialization. For both scenarios, we demonstrate the empirical effectiveness of our solutions across various domains, training set sizes, and training budgets.
Labels: cs.LG, cs.CL
__index_level_0__: 425,853

2108.00373
SPEAR : Semi-supervised Data Programming in Python
We present SPEAR, an open-source python library for data programming with semi supervision. The package implements several recent data programming approaches including facility to programmatically label and build training data. SPEAR facilitates weak supervision in the form of heuristics (or rules) and association of noisy labels to the training dataset. These noisy labels are aggregated to assign labels to the unlabeled data for downstream tasks. We have implemented several label aggregation approaches that aggregate the noisy labels and then train using the noisily labeled set in a cascaded manner. Our implementation also includes other approaches that jointly aggregate and train the model for text classification tasks. Thus, in our python package, we integrate several cascade and joint data-programming approaches while also providing the facility of data programming by letting the user define labeling functions or rules. The code and tutorial notebooks are available at https://github.com/decile-team/spear. Further, extensive documentation can be found at https://spear-decile.readthedocs.io/. Video tutorials demonstrating the usage of our package are available here. We also present some real-world use cases of SPEAR.
Labels: cs.LG
__index_level_0__: 248,693

2302.02338
Electromechanical phase-field fracture modelling of piezoresistive CNT-based composites
We present a novel computational framework to simulate the electromechanical response of self-sensing carbon nanotube (CNT)-based composites experiencing fracture. The computational framework combines electrical-deformation-fracture finite element modelling with a mixed micromechanics formulation. The latter is used to estimate the constitutive properties of CNT-based composites, including the elastic tensor, fracture energy, electrical conductivity, and linear piezoresistive coefficients. These properties are inputted into a coupled electro-structural finite element model, which simulates the evolution of cracks based upon phase-field fracture. The coupled physical problem is solved in a monolithic manner, exploiting the robustness and efficiency of a quasi-Newton algorithm. 2D and 3D boundary value problems are simulated to illustrate the potential of the modelling framework in assessing the influence of defects on the electromechanical response of meso- and macro-scale smart structures. Case studies aim at shedding light into the interplay between fracture and the electromechanical material response and include parametric analyses, validation against experiments and the simulation of complex cracking conditions (multiple defects, crack merging). The presented numerical results showcase the efficiency and robustness of the computational framework, as well as its ability to model a large variety of structural configurations and damage patterns. The deformation-electrical-fracture finite element code developed is made freely available to download.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
343,966
2308.00263
Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation
Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a state-of-the-art algorithm known for its efficiency and high scalability. However, it has a high communication cost, which has not been examined with quantized communications. To tackle this problem, we present a new algorithm (QAFeL), with a quantization scheme that establishes a shared "hidden" state between the server and clients to avoid the error propagation caused by direct quantization. This approach allows for high precision while significantly reducing the data transmitted during client-server interactions. We provide theoretical convergence guarantees for QAFeL and corroborate our analysis with experiments on a standard benchmark.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
382,878
2406.06592
Improve Mathematical Reasoning in Language Models by Automated Process Supervision
Complex multi-step reasoning tasks, such as solving mathematical problems or generating code, remain a significant hurdle for even the most advanced large language models (LLMs). Verifying LLM outputs with an Outcome Reward Model (ORM) is a standard inference-time technique aimed at enhancing the reasoning performance of LLMs. However, this still proves insufficient for reasoning tasks with a lengthy or multi-hop reasoning chain, where the intermediate outcomes are neither properly rewarded nor penalized. Process supervision addresses this limitation by assigning intermediate rewards during the reasoning process. To date, the methods used to collect process supervision data have relied on either human annotation or per-step Monte Carlo estimation, both prohibitively expensive to scale, thus hindering the broad application of this technique. In response to this challenge, we propose a novel divide-and-conquer style Monte Carlo Tree Search (MCTS) algorithm named \textit{OmegaPRM} for the efficient collection of high-quality process supervision data. This algorithm swiftly identifies the first error in the Chain of Thought (CoT) with binary search and balances the positive and negative examples, thereby ensuring both efficiency and quality. As a result, we are able to collect over 1.5 million process supervision annotations to train Process Reward Models (PRMs). This fully automated process supervision alongside the weighted self-consistency algorithm is able to enhance LLMs' math reasoning performances. We improved the success rates of the instruction-tuned Gemini Pro model from 51\% to 69.4\% on MATH500 and from 86.4\% to 93.6\% on GSM8K. Similarly, we boosted the success rates of Gemma2 27B from 42.3\% to 58.2\% on MATH500 and from 74.0\% to 92.2\% on GSM8K. The entire process operates without any human intervention or supervision, making our method both financially and ...
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
462,675
2303.09941
Leaping Into Memories: Space-Time Deep Feature Synthesis
The success of deep learning models has led to their adaptation and adoption by prominent video understanding methods. The majority of these approaches encode features in a joint space-time modality for which the inner workings and learned representations are difficult to visually interpret. We propose LEArned Preconscious Synthesis (LEAPS), an architecture-independent method for synthesizing videos from the internal spatiotemporal representations of models. Using a stimulus video and a target class, we prime a fixed space-time model and iteratively optimize a video initialized with random noise. Additional regularizers are used to improve the feature diversity of the synthesized videos alongside the cross-frame temporal coherence of motions. We quantitatively and qualitatively evaluate the applicability of LEAPS by inverting a range of spatiotemporal convolutional and attention-based architectures trained on Kinetics-400, which to the best of our knowledge has not been previously accomplished.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
352,248
2012.13604
DNS Typo-squatting Domain Detection: A Data Analytics & Machine Learning Based Approach
Domain Name System (DNS) is a crucial component of current IP-based networks as it is the standard mechanism for name to IP resolution. However, due to its lack of data integrity and origin authentication processes, it is vulnerable to a variety of attacks. One such attack is Typosquatting. Detecting this attack is particularly important as it can be a threat to corporate secrets and can be used to steal information or commit fraud. In this paper, a machine learning-based approach is proposed to tackle the typosquatting vulnerability. To that end, exploratory data analytics is first used to better understand the trends observed in eight domain name-based extracted features. Furthermore, a majority voting-based ensemble learning classifier built using five classification algorithms is proposed that can detect suspicious domains with high accuracy. Moreover, the observed trends are validated by studying the same features in an unlabeled dataset using K-means clustering algorithm and through applying the developed ensemble learning classifier. Results show that legitimate domains have a smaller domain name length and fewer unique characters. Moreover, the developed ensemble learning classifier performs better in terms of accuracy, precision, and F-score. Furthermore, it is shown that similar trends are observed when clustering is used. However, the number of domains identified as potentially suspicious is high. Hence, the ensemble learning classifier is applied with results showing that the number of domains identified as potentially suspicious is reduced by almost a factor of five while still maintaining the same trends in terms of features' statistics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
213,267
2007.02684
On the Influence of Ageing on Face Morph Attacks: Vulnerability and Detection
Face morphing attacks have raised critical concerns as they demonstrate a new vulnerability of Face Recognition Systems (FRS), which are widely deployed in border control applications. The face morphing process uses the images from multiple data subjects and performs an image blending operation to generate a morphed image of high quality. The generated morphed image exhibits similar visual characteristics corresponding to the biometric characteristics of the data subjects that contributed to the composite image and thus making it difficult for both humans and FRS, to detect such attacks. In this paper, we report a systematic investigation on the vulnerability of the Commercial-Off-The-Shelf (COTS) FRS when morphed images under the influence of ageing are presented. To this extent, we have introduced a new morphed face dataset with ageing derived from the publicly available MORPH II face dataset, which we refer to as MorphAge dataset. The dataset has two bins based on age intervals, the first bin - MorphAge-I dataset has 1002 unique data subjects with the age variation of 1 year to 2 years while the MorphAge-II dataset consists of 516 data subjects whose age intervals are from 2 years to 5 years. To effectively evaluate the vulnerability for morphing attacks, we also introduce a new evaluation metric, namely the Fully Mated Morphed Presentation Match Rate (FMMPMR), to quantify the vulnerability effectively in a realistic scenario. Extensive experiments are carried out by using two different COTS FRS (COTS I - Cognitec and COTS II - Neurotechnology) to quantify the vulnerability with ageing. Further, we also evaluate five different Morph Attack Detection (MAD) techniques to benchmark their detection performance with ageing.
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
185,820
1709.07598
Demography-based Facial Retouching Detection using Subclass Supervised Sparse Autoencoder
Digital retouching of face images is becoming more widespread due to the introduction of software packages that automate the task. Several researchers have introduced algorithms to detect whether a face image is original or retouched. However, previous work on this topic has not considered whether or how accuracy of retouching detection varies with the demography of face images. In this paper, we introduce a new Multi-Demographic Retouched Faces (MDRF) dataset, which contains images belonging to two genders, male and female, and three ethnicities, Indian, Chinese, and Caucasian. Further, retouched images are created using two different retouching software packages. The second major contribution of this research is a novel semi-supervised autoencoder incorporating "subclass" information to improve classification. The proposed approach outperforms existing state-of-the-art detection algorithms for the task of generalized retouching detection. Experiments conducted with multiple combinations of ethnicities show that accuracy of retouching detection can vary greatly based on the demographics of the training and testing images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
81,310
1807.11878
FADE: Fast and Asymptotically efficient Distributed Estimator for dynamic networks
Consider a set of agents that wish to estimate a vector of parameters of their mutual interest. For this estimation goal, agents can sense and communicate. When sensing, an agent measures (in additive Gaussian noise) linear combinations of the unknown vector of parameters. When communicating, an agent can broadcast information to a few other agents, by using the channels that happen to be randomly at its disposal at the time. To coordinate the agents towards their estimation goal, we propose a novel algorithm called FADE (Fast and Asymptotically efficient Distributed Estimator), in which agents collaborate at discrete time-steps; at each time-step, agents sense and communicate just once, while also updating their own estimate of the unknown vector of parameters. FADE enjoys five attractive features: first, it is an intuitive estimator, simple to derive; second, it withstands dynamic networks, that is, networks whose communication channels change randomly over time; third, it is strongly consistent in that, as time-steps play out, each agent's local estimate converges (almost surely) to the true vector of parameters; fourth, it is both asymptotically unbiased and efficient, which means that, across time, each agent's estimate becomes unbiased and the mean-square error (MSE) of each agent's estimate vanishes to zero at the same rate of the MSE of the optimal estimator at an almighty central node; fifth, and most importantly, when compared with a state-of-the-art consensus+innovation (CI) algorithm, it yields estimates with outstandingly lower mean-square errors, for the same number of communications -- for example, in a sparsely connected network model with 50 agents, we find through numerical simulations that the reduction can be dramatic, reaching several orders of magnitude.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
104,266
2109.08290
Generating Explainable Rule Sets from Tree-Ensemble Learning Methods by Answer Set Programming
We propose a method for generating explainable rule sets from tree-ensemble learners using Answer Set Programming (ASP). To this end, we adopt a decompositional approach where the split structures of the base decision trees are exploited in the construction of rules, which in turn are assessed using pattern mining methods encoded in ASP to extract interesting rules. We show how user-defined constraints and preferences can be represented declaratively in ASP to allow for transparent and flexible rule set generation, and how rules can be used as explanations to help the user better understand the models. Experimental evaluation with real-world datasets and popular tree-ensemble algorithms demonstrates that our approach is applicable to a wide range of classification tasks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
255,845
2407.18060
Cross-Vendor Reproducibility of Radiomics-based Machine Learning Models for Computer-aided Diagnosis
Background: The reproducibility of machine-learning models in prostate cancer detection across different MRI vendors remains a significant challenge. Methods: This study investigates Support Vector Machines (SVM) and Random Forest (RF) models trained on radiomic features extracted from T2-weighted MRI images using Pyradiomics and MRCradiomics libraries. Feature selection was performed using the maximum relevance minimum redundancy (MRMR) technique. We aimed to enhance clinical decision support through multimodal learning and feature fusion. Results: Our SVM model, utilizing combined features from Pyradiomics and MRCradiomics, achieved an AUC of 0.74 on the Multi-Improd dataset (Siemens scanner) but decreased to 0.60 on the Philips test set. The RF model showed similar trends, with notable robustness for models using Pyradiomics features alone (AUC of 0.78 on Philips). Conclusions: These findings demonstrate the potential of multimodal feature integration to improve the robustness and generalizability of machine-learning models for clinical decision support in prostate cancer detection. This study marks a significant step towards developing reliable AI-driven diagnostic tools that maintain efficacy across various imaging platforms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
476,231
2408.14348
Deep learning-based ecological analysis of camera trap images is impacted by training data quality and size
Large wildlife image collections from camera traps are crucial for biodiversity monitoring, offering insights into species richness, occupancy, and activity patterns. However, manual processing of these data is time-consuming, hindering analytical processes. To address this, deep neural networks have been widely adopted to automate image analysis. Despite their growing use, the impact of model training decisions on downstream ecological metrics remains unclear. Here, we analyse camera trap data from an African savannah and an Asian sub-tropical dry forest to compare key ecological metrics derived from expert-generated species identifications with those generated from deep neural networks. We assess the impact of model architecture, training data noise, and dataset size on ecological metrics, including species richness, occupancy, and activity patterns. Our results show that while model architecture has minimal impact, large amounts of noise and reduced dataset size significantly affect these metrics. Nonetheless, estimated ecological metrics are resilient to considerable noise, tolerating up to 10% error in species labels and a 50% reduction in training set size without changing significantly. We also highlight that conventional metrics like classification error may not always be representative of a model's ability to accurately measure ecological metrics. We conclude that ecological metrics derived from deep neural network predictions closely match those calculated from expert labels and remain robust to variations in the factors explored. However, training decisions for deep neural networks can impact downstream ecological analysis. Therefore, practitioners should prioritize creating large, clean training sets and evaluate deep neural network solutions based on their ability to measure the ecological metrics of interest.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
483,502
2211.13529
3D Dual-Fusion: Dual-Domain Dual-Query Camera-LiDAR Fusion for 3D Object Detection
Fusing data from cameras and LiDAR sensors is an essential technique to achieve robust 3D object detection. One key challenge in camera-LiDAR fusion involves mitigating the large domain gap between the two sensors in terms of coordinates and data distribution when fusing their features. In this paper, we propose a novel camera-LiDAR fusion architecture called 3D Dual-Fusion, which is designed to mitigate the gap between the feature representations of camera and LiDAR data. The proposed method fuses the features of the camera-view and 3D voxel-view domain and models their interactions through deformable attention. We redesign the transformer fusion encoder to aggregate the information from the two domains. Two major changes include 1) dual query-based deformable attention to fuse the dual-domain features interactively and 2) 3D local self-attention to encode the voxel-domain queries prior to dual-query decoding. The results of an experimental evaluation show that the proposed camera-LiDAR fusion architecture achieved competitive performance on the KITTI and nuScenes datasets, with state-of-the-art performances in some 3D object detection benchmark categories.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
332,509
2308.03239
Asynchronous Decentralized Q-Learning: Two Timescale Analysis By Persistence
Non-stationarity is a fundamental challenge in multi-agent reinforcement learning (MARL), where agents update their behaviour as they learn. Many theoretical advances in MARL avoid the challenge of non-stationarity by coordinating the policy updates of agents in various ways, including synchronizing times at which agents are allowed to revise their policies. Synchronization enables analysis of many MARL algorithms via multi-timescale methods, but such synchrony is infeasible in many decentralized applications. In this paper, we study an asynchronous variant of the decentralized Q-learning algorithm, a recent MARL algorithm for stochastic games. We provide sufficient conditions under which the asynchronous algorithm drives play to equilibrium with high probability. Our solution utilizes constant learning rates in the Q-factor update, which we show to be critical for relaxing the synchrony assumptions of earlier work. Our analysis also applies to asynchronous generalizations of a number of other algorithms from the regret testing tradition, whose performance is analyzed by multi-timescale methods that study Markov chains obtained via policy update dynamics. This work extends the applicability of the decentralized Q-learning algorithm and its relatives to settings in which parameters are selected in an independent manner, and tames non-stationarity without imposing the coordination assumptions of prior work.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
true
383,950
2006.00025
Environmental regulation using Plasticoding for the evolution of robots
Evolutionary robot systems are usually affected by the properties of the environment indirectly through selection. In this paper, we present and investigate a system where the environment also has a direct effect: through regulation. We propose a novel robot encoding method where a genotype encodes multiple possible phenotypes, and the incarnation of a robot depends on the environmental conditions taking place at a given moment of its life. This means that the morphology, controller, and behavior of a robot can change according to the environment. Importantly, this process of development can happen at any moment of a robot's lifetime, according to its experienced environmental stimuli. We provide an empirical proof-of-concept, and the analysis of the experimental results shows that Plasticoding improves adaptation (task performance) while leading to different evolved morphologies, controllers, and behaviour.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
179,342
1708.00549
Improved Representation Learning for Predicting Commonsense Ontologies
Recent work in learning ontologies (hierarchical and partially-ordered structures) has leveraged the intrinsic geometry of spaces of learned representations to make predictions that automatically obey complex structural constraints. We explore two extensions of one such model, the order-embedding model for hierarchical relation learning, with an aim towards improved performance on text data for commonsense knowledge representation. Our first model jointly learns ordering relations and non-hierarchical knowledge in the form of raw text. Our second extension exploits the partial order structure of the training data to find long-distance triplet constraints among embeddings which are poorly enforced by the pairwise training procedure. We find that both incorporating free text and augmented training constraints improve over the original order-embedding model and other strong baselines.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
78,224
1710.10400
Total-Text: A Comprehensive Dataset for Scene Text Detection and Recognition
Text in curve orientation, despite being one of the common text orientations in real-world environments, has close to zero existence in well-received scene text datasets such as ICDAR2013 and MSRA-TD500. The main motivation of Total-Text is to fill this gap and facilitate a new research direction for the scene text community. On top of the conventional horizontal and multi-oriented texts, it features curved-oriented text. Total-Text is highly diversified in orientations; more than half of its images have a combination of more than two orientations. Recently, a new breed of solutions that cast text detection as a segmentation problem has demonstrated its effectiveness against multi-oriented text. In order to evaluate its robustness against curved text, we fine-tuned DeconvNet and benchmarked it on Total-Text. Total-Text with its annotation is available at https://github.com/cs-chan/Total-Text-Dataset
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
83,376
2408.10819
GS-KGC: A Generative Subgraph-based Framework for Knowledge Graph Completion with Large Language Models
Knowledge graph completion (KGC) focuses on identifying missing triples in a knowledge graph (KG), which is crucial for many downstream applications. Given the rapid development of large language models (LLMs), some LLM-based methods are proposed for the KGC task. However, most of them focus on prompt engineering while overlooking the fact that finer-grained subgraph information can aid LLMs in generating more accurate answers. In this paper, we propose a novel completion framework called \textbf{G}enerative \textbf{S}ubgraph-based KGC (GS-KGC), which utilizes subgraph information as contextual reasoning and employs a QA approach to achieve the KGC task. This framework primarily includes a subgraph partitioning algorithm designed to generate negatives and neighbors. Specifically, negatives can encourage LLMs to generate a broader range of answers, while neighbors provide additional contextual insights for LLM reasoning. Furthermore, we found that GS-KGC can discover potential triples within the KGs and new facts beyond the KGs. Experiments conducted on four common KGC datasets highlight the advantages of the proposed GS-KGC, e.g., it shows a 5.6\% increase in Hits@3 compared to the LLM-based model CP-KGC on the FB15k-237N, and a 9.3\% increase over the LLM-based model TECHS on the ICEWS14.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
482,036
2009.10692
TSV Extrusion Morphology Classification Using Deep Convolutional Neural Networks
In this paper, we utilize deep convolutional neural networks (CNNs) to classify the morphology of through-silicon via (TSV) extrusion in three dimensional (3D) integrated circuits (ICs). TSV extrusion is a crucial reliability concern which can deform and crack interconnect layers in 3D ICs and cause device failures. Herein, the white light interferometry (WLI) technique is used to obtain the surface profile of the extruded TSVs. We have developed a program that uses raw data obtained from WLI to create a TSV extrusion morphology dataset, including TSV images with 54x54 pixels that are labeled and categorized into three morphology classes. Four CNN architectures with different network complexities are implemented and trained for TSV extrusion morphology classification application. Data augmentation and dropout approaches are utilized to realize a balance between overfitting and underfitting in the CNN models. Results obtained show that the CNN model with optimized complexity, dropout, and data augmentation can achieve a classification accuracy comparable to that of a human expert.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
196,964
2407.06817
AstroSpy: On detecting Fake Images in Astronomy via Joint Image-Spectral Representations
The prevalence of AI-generated imagery has raised concerns about the authenticity of astronomical images, especially with advanced text-to-image models like Stable Diffusion producing highly realistic synthetic samples. Existing detection methods, primarily based on convolutional neural networks (CNNs) or spectral analysis, have limitations when used independently. We present AstroSpy, a hybrid model that integrates both spectral and image features to distinguish real from synthetic astronomical images. Trained on a unique dataset of real NASA images and AI-generated fakes (approximately 18k samples), AstroSpy utilizes a dual-pathway architecture to fuse spatial and spectral information. This approach enables AstroSpy to achieve superior performance in identifying authentic astronomical images. Extensive evaluations demonstrate AstroSpy's effectiveness and robustness, significantly outperforming baseline models in both in-domain and cross-domain tasks, highlighting its potential to combat misinformation in astronomy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
471,546
2310.02877
Stationarity without mean reversion in improper Gaussian processes
The behavior of a GP regression depends on the choice of covariance function. Stationary covariance functions are preferred in machine learning applications. However, (non-periodic) stationary covariance functions are always mean reverting and can therefore exhibit pathological behavior when applied to data that does not relax to a fixed global mean value. In this paper we show that it is possible to use improper GP priors with infinite variance to define processes that are stationary but not mean reverting. To this aim, we make use of non-positive kernels that can only be defined in this limit regime. The resulting posterior distributions can be computed analytically and involve a simple correction of the usual formulas. The main contribution of the paper is the introduction of a large family of smooth non-reverting covariance functions that closely resemble the kernels commonly used in the GP literature (e.g. squared exponential and Mat\'ern class). By analyzing both synthetic and real data, we demonstrate that these non-positive kernels solve some known pathologies of mean reverting GP regression while retaining most of the favorable properties of ordinary smooth stationary kernels.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
397,026
1807.08476
Human peripheral blur is optimal for object recognition
Our vision is sharpest at the center of our gaze and becomes progressively blurry into the periphery. It is widely believed that this high foveal resolution evolved at the expense of peripheral acuity. But what if this sampling scheme is actually optimal for object recognition? To test this hypothesis, we trained deep neural networks on 'foveated' images with high resolution near objects and increasingly sparse sampling into the periphery. Neural networks trained using a blur profile matching the human eye yielded the best performance compared to shallower and steeper blur profiles. Even in humans, categorization accuracy deteriorated only for steeper blur profiles. Thus, our blurry peripheral vision may have evolved to optimize object recognition rather than merely due to wiring constraints.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
103,550
2409.12973
The Era of Foundation Models in Medical Imaging is Approaching : A Scoping Review of the Clinical Value of Large-Scale Generative AI Applications in Radiology
Social problems stemming from the shortage of radiologists are intensifying, and artificial intelligence is being highlighted as a potential solution. Recently emerging large-scale generative AI has expanded from large language models (LLMs) to multi-modal models, showing potential to revolutionize the entire process of medical imaging. However, comprehensive reviews on their development status and future challenges are currently lacking. This scoping review systematically organizes existing literature on the clinical value of large-scale generative AI applications by following PCC guidelines. A systematic search was conducted across four databases: PubMed, EMbase, IEEE-Xplore, and Google Scholar, and 15 studies meeting the inclusion/exclusion criteria set by the researchers were reviewed. Most of these studies focused on improving the efficiency of report generation in specific parts of the interpretation process or on translating reports to aid patient understanding, with the latest studies extending to AI applications performing direct interpretations. All studies were quantitatively evaluated by clinicians, with most utilizing LLMs and only three employing multi-modal models. Both LLMs and multi-modal models showed excellent results in specific areas, but none yet outperformed radiologists in diagnostic performance. Most studies utilized GPT, with few using models specialized for the medical imaging domain. This study provides insights into the current state and limitations of large-scale generative AI-based applications in the medical imaging field, offering foundational data and suggesting that the era of medical imaging foundation models is on the horizon, which may fundamentally transform clinical practice in the near future.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
489,799
2404.04904
Cross-Domain Audio Deepfake Detection: Dataset and Analysis
Audio deepfake detection (ADD) is essential for preventing the misuse of synthetic voices that may infringe on personal rights and privacy. Recent zero-shot text-to-speech (TTS) models pose higher risks as they can clone voices with a single utterance. However, the existing ADD datasets are outdated, leading to suboptimal generalization of detection models. In this paper, we construct a new cross-domain ADD dataset comprising over 300 hours of speech data that is generated by five advanced zero-shot TTS models. To simulate real-world scenarios, we employ diverse attack methods and audio prompts from different datasets. Experiments show that, through novel attack-augmented training, the Wav2Vec2-large and Whisper-medium models achieve equal error rates of 4.1\% and 6.5\% respectively. Additionally, we demonstrate our models' outstanding few-shot ADD ability by fine-tuning with just one minute of target-domain data. Nonetheless, neural codec compressors greatly affect the detection accuracy, necessitating further research.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
444,853
2009.03091
Iterative Correction of Sensor Degradation and a Bayesian Multi-Sensor Data Fusion Method
We present a novel method for inferring ground-truth signal from multiple degraded signals, affected by different amounts of sensor exposure. The algorithm learns a multiplicative degradation effect by performing iterative corrections of two signals solely from the ratio between them. The degradation function d should be continuous, satisfy monotonicity, and d(0) = 1. We use a smoothed monotonic regression method, where we easily incorporate the aforementioned criteria into the fitting part. We include theoretical analysis and prove convergence to the ground-truth signal for the noiseless measurement model. Lastly, we present an approach to fuse the noisy corrected signals using Gaussian processes. We use sparse Gaussian processes that can be utilized for a large number of measurements together with a specialized kernel that enables the estimation of noise values of all sensors. The data fusion framework naturally handles data gaps and provides a simple and powerful method for observing the signal trends on multiple timescales (long-term and short-term signal properties). The viability of the correction method is evaluated on a synthetic dataset with known ground-truth signal.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
194,741
2004.01302
Distributed Inference with Sparse and Quantized Communication
We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state, and aim to uniquely identify this state from a finite set of hypotheses. We focus on scenarios where communication between agents is costly, and takes place over channels with finite bandwidth. To reduce the frequency of communication, we develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis. Building on this principle, we design a trigger condition under which an agent broadcasts only those components of its belief vector that have adequate innovation, to only those neighbors that require such information. We prove that our rule guarantees convergence to the true state exponentially fast almost surely despite sparse communication, and that it has the potential to significantly reduce information flow from uninformative agents to informative agents. Next, to deal with finite-precision communication channels, we propose a distributed learning rule that leverages the idea of adaptive quantization. We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis. For both our proposed algorithms, we rigorously characterize the trade-offs between communication-efficiency and the learning rate.
false
false
false
false
false
false
true
false
false
true
true
false
false
false
false
false
false
true
170,875
2410.08551
Context-Aware Full Body Anonymization using Text-to-Image Diffusion Models
Anonymization plays a key role in protecting sensitive information of individuals in real-world datasets. Self-driving cars for example need high resolution facial features to track people and their viewing direction to predict future behaviour and react accordingly. In order to protect people's privacy whilst keeping important features in the dataset, it is important to replace the full body of a person with a highly detailed anonymized one. In contrast to doing face anonymization, full body replacement decreases the ability of recognizing people by their hairstyle or clothes. In this paper, we propose a workflow for full body person anonymization utilizing Stable Diffusion as a generative backend. Text-to-image diffusion models, like Stable Diffusion, OpenAI's DALL-E or Midjourney, have become very popular in recent times, being able to create photorealistic images from a single text prompt. We show that our method outperforms state-of-the-art anonymization pipelines with respect to image quality, resolution, Inception Score (IS) and Frechet Inception Distance (FID). Additionally, our method is invariant with respect to the image generator and thus able to be used with the latest models available.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
497,164
2108.11859
Binary sequences with length n and nonlinear complexity not less than n/2
In this paper, the construction of finite-length binary sequences whose nonlinear complexity is not less than half of the length is investigated. By characterizing the structure of the sequences, an algorithm is proposed to generate all binary sequences with length $n$ and nonlinear complexity $c_{n}\geq n/2$, where $n$ is an integer larger than $2$. Furthermore, a formula is established to calculate the exact number of these sequences. The distribution of nonlinear complexity for these sequences is thus completely determined.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
252,316
2112.13626
Generation of Synthetic Rat Brain MRI scans with a 3D Enhanced Alpha-GAN
Translational brain research using Magnetic Resonance Imaging (MRI) is becoming increasingly popular as animal models are an essential part of scientific studies and more ultra-high-field scanners are becoming available. Some disadvantages of MRI are the availability of MRI scanners and the time required for a full scanning session (it usually takes over 30 minutes). Privacy laws and the 3Rs ethics rule also make it difficult to create large datasets for training deep learning models. Generative Adversarial Networks (GANs) can perform data augmentation with higher quality than other techniques. In this work, the alpha-GAN architecture is used to test its ability to produce realistic 3D MRI scans of the rat brain. As far as the authors are aware, this is the first time that a GAN-based approach has been used for data augmentation in preclinical data. The generated scans are evaluated using various qualitative and quantitative metrics. A Turing test conducted by 4 experts has shown that the generated scans can trick almost any expert. The generated scans were also used to evaluate their impact on the performance of an existing deep learning model developed for segmenting the rat brain into white matter, grey matter and cerebrospinal fluid. The models were compared using the Dice score. The best results for whole brain and white matter segmentation were obtained when 174 real scans and 348 synthetic scans were used, with improvements of 0.0172 and 0.0129, respectively. Using 174 real scans and 87 synthetic scans resulted in improvements of 0.0038 and 0.0764 for grey matter and CSF segmentation, respectively. Thus, by using the proposed new normalisation layer and loss functions, it was possible to improve the realism of the generated rat MRI scans and it was shown that using the generated data improved the segmentation model more than using the conventional data augmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
273,310
1710.10329
Lower Bounds for Higher-Order Convex Optimization
State-of-the-art methods in convex and non-convex optimization employ higher-order derivative information, either implicitly or explicitly. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. As a special case, we show Nesterov's accelerated cubic regularization method to be nearly tight.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
83,350
1505.05232
Multi-scale recognition with DAG-CNNs
We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multiscale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even "off-the-shelf" multiscale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
43,279
1603.06541
A Comparison Study of Nonlinear Kernels
In this paper, we compare 5 different nonlinear kernels: min-max, RBF, fRBF (folded RBF), acos, and acos-$\chi^2$, on a wide range of publicly available datasets. The proposed fRBF kernel performs very similarly to the RBF kernel. Both RBF and fRBF kernels require an important tuning parameter ($\gamma$). Interestingly, for a significant portion of the datasets, the min-max kernel outperforms the best-tuned RBF/fRBF kernels. The acos kernel and acos-$\chi^2$ kernel also perform well in general and in some datasets achieve the best accuracies. One crucial issue with the use of nonlinear kernels is the excessive computational and memory cost. These days, one increasingly popular strategy is to linearize the kernels through various randomization algorithms. In our study, the randomization method for the min-max kernel demonstrates excellent performance compared to the randomization methods for other types of nonlinear kernels, measured in terms of the number of nonzero terms in the transformed dataset. Our study provides evidence for supporting the use of the min-max kernel and the corresponding randomized linearization method (i.e., the so-called "0-bit CWS"). Furthermore, the results motivate at least two directions for future research: (i) To develop new (and linearizable) nonlinear kernels for better accuracies; and (ii) To develop better linearization algorithms for improving the current linearization methods for the RBF kernel, the acos kernel, and the acos-$\chi^2$ kernel. One attempt is to combine the min-max kernel with the acos kernel or the acos-$\chi^2$ kernel. The advantages of these two new and tuning-free nonlinear kernels are demonstrated via our extensive experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
53,510
2404.04517
Latent-based Diffusion Model for Long-tailed Recognition
Long-tailed imbalance distribution is a common issue in practical computer vision applications. Previous works proposed methods to address this problem, which can be categorized into several classes: re-sampling, re-weighting, transfer learning, and feature augmentation. In recent years, diffusion models have shown an impressive generation ability in many sub-problems of deep computer vision. However, its powerful generation has not been explored in long-tailed problems. We propose a new approach, the Latent-based Diffusion Model for Long-tailed Recognition (LDMLR), as a feature augmentation method to tackle the issue. First, we encode the imbalanced dataset into features using the baseline model. Then, we train a Denoising Diffusion Implicit Model (DDIM) using these encoded features to generate pseudo-features. Finally, we train the classifier using the encoded and pseudo-features from the previous two steps. The model's accuracy shows an improvement on the CIFAR-LT and ImageNet-LT datasets by using the proposed method.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
444,678
2207.05705
Conservative SPDEs as fluctuating mean field limits of stochastic gradient descent
The convergence of stochastic interacting particle systems in the mean-field limit to solutions of conservative stochastic partial differential equations is established, with optimal rate of convergence. As a second main result, a quantitative central limit theorem for such SPDEs is derived, again, with optimal rate of convergence. The results apply, in particular, to the convergence in the mean-field scaling of stochastic gradient descent dynamics in overparametrized, shallow neural networks to solutions of SPDEs. It is shown that the inclusion of fluctuations in the limiting SPDE improves the rate of convergence, and retains information about the fluctuations of stochastic gradient descent in the continuum limit.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
307,635
1906.01102
Do place cells dream of conditional probabilities? Learning Neural Nystr\"om representations
We posit that hippocampal place cells encode information about future locations under a transition distribution observed as an agent explores a given (physical or conceptual) space. The encoding of information about the current location, usually associated with place cells, then emerges as a necessary step to achieve this broader goal. We formally derive a biologically-inspired neural network from Nystr\"om kernel approximations and empirically demonstrate that the network successfully approximates transition distributions. The proposed network yields representations that, just like place cells, soft-tile the input space with highly sparse and localized receptive fields. Additionally, we show that the proposed computational motif can be extended to handle supervised problems, creating class-specific place cells while exhibiting low sample complexity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
133,590
1608.03008
Network Topology Inference from Spectral Templates
We address the problem of identifying a graph structure from the observation of signals defined on its nodes. Fundamentally, the unknown graph encodes direct relationships between signal elements, which we aim to recover from observable indirect relationships generated by a diffusion process on the graph. The fresh look advocated here permeates benefits from convex optimization and stationarity of graph signals, in order to identify the graph shift operator (a matrix representation of the graph) given only its eigenvectors. These spectral templates can be obtained, e.g., from the sample covariance of independent graph signals diffused on the sought network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network with certain desired properties such as sparsity. To that end we develop efficient inference algorithms stemming from provably-tight convex relaxations of natural nonconvex criteria, particularizing the results for two shifts: the adjacency matrix and the normalized Laplacian. Algorithms and theoretical recovery conditions are developed not only when the templates are perfectly known, but also when the eigenvectors are noisy or when only a subset of them are given. Numerical tests showcase the effectiveness of the proposed algorithms in recovering social, brain, and amino-acid networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
59,624
2407.00886
Efficient Automated Circuit Discovery in Transformers using Contextual Decomposition
Automated mechanistic interpretation research has attracted great interest due to its potential to scale explanations of neural network internals to large models. Existing automated circuit discovery work relies on activation patching or its approximations to identify subgraphs in models for specific tasks (circuits). They often suffer from slow runtime, approximation errors, and specific requirements of metrics, such as non-zero gradients. In this work, we introduce contextual decomposition for transformers (CD-T) to build interpretable circuits in large language models. CD-T can produce circuits of arbitrary level of abstraction, and is the first able to produce circuits as fine-grained as attention heads at specific sequence positions efficiently. CD-T consists of a set of mathematical equations to isolate contribution of model features. Through recursively computing contribution of all nodes in a computational graph of a model using CD-T followed by pruning, we are able to reduce circuit discovery runtime from hours to seconds compared to state-of-the-art baselines. On three standard circuit evaluation datasets (indirect object identification, greater-than comparisons, and docstring completion), we demonstrate that CD-T outperforms ACDC and EAP by better recovering the manual circuits with an average of 97% ROC AUC under low runtimes. In addition, we provide evidence that faithfulness of CD-T circuits is not due to random chance by showing our circuits are 80% more faithful than random circuits of up to 60% of the original model size. Finally, we show CD-T circuits are able to perfectly replicate original models' behavior (faithfulness $ = 1$) using fewer nodes than the baselines for all tasks. Our results underscore the great promise of CD-T for efficient automated mechanistic interpretability, paving the way for new insights into the workings of large language models.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
469,039
2011.06259
Learning to Segment Dynamic Objects using SLAM Outliers
We present a method to automatically learn to segment dynamic objects using SLAM outliers. It requires only one monocular sequence per dynamic object for training and consists in localizing dynamic objects using SLAM outliers, creating their masks, and using these masks to train a semantic segmentation network. We integrate the trained network in ORB-SLAM 2 and LDSO. At runtime we remove features on dynamic objects, making the SLAM unaffected by them. We also propose a new stereo dataset and new metrics to evaluate SLAM robustness. Our dataset includes consensus inversions, i.e., situations where the SLAM uses more features on dynamic objects than on the static background. Consensus inversions are challenging for SLAM as they may cause major SLAM failures. Our approach performs better than the State-of-the-Art on the TUM RGB-D dataset in monocular mode and on our dataset in both monocular and stereo modes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
206,195
2412.10690
Affiliation-based Local Community Detection across Multiple Networks
Real-world networks are often constructed from different sources or domains, including various types of entities and diverse relationships between networks, thus forming multi-domain networks. A single network typically fails to capture the complete graph structure and the diverse relationships among multiple networks. Consequently, leveraging multiple networks is crucial for a comprehensive detection of community structures. Most existing local community detection methods discover community structures by integrating information from different views on multi-view networks. However, methods designed for multi-view networks are not suitable for multi-domain networks. Therefore, to mine communities from multiple networks, we propose a Local Algorithm for Multiple networks with node Affiliation, called LAMA, which is suitable for both multi-view and multi-domain networks. The core idea of LAMA is to optimize node affiliations by maximizing the quality of communities within each network while ensuring consistency in community structures across multiple networks. The algorithm iteratively optimizes node affiliations and expands the community outward based on affiliations to detect the community containing the seed node. Experimental results show that LAMA outperforms comparison algorithms on two synthetic datasets and five real datasets.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
517,055
2302.06600
Task-Specific Skill Localization in Fine-tuned Language Models
Pre-trained language models can be fine-tuned to solve diverse NLP tasks, including in few-shot settings. Thus fine-tuning allows the model to quickly pick up task-specific ``skills,'' but there has been limited study of where these newly-learnt skills reside inside the massive model. This paper introduces the term skill localization for this problem and proposes a solution. Given the downstream task and a model fine-tuned on that task, a simple optimization is used to identify a very small subset of parameters ($\sim0.01$% of model parameters) responsible for ($>95$%) of the model's performance, in the sense that grafting the fine-tuned values for just this tiny subset onto the pre-trained model gives performance almost as well as the fine-tuned model. While reminiscent of recent works on parameter-efficient fine-tuning, the novel aspects here are that: (i) No further re-training is needed on the subset (unlike, say, with lottery tickets). (ii) Notable improvements are seen over vanilla fine-tuning with respect to calibration of predictions in-distribution ($40$-$90$% error reduction) as well as the quality of predictions out-of-distribution (OOD). In models trained on multiple tasks, a stronger notion of skill localization is observed, where the sparse regions corresponding to different tasks are almost disjoint, and their overlap (when it happens) is a proxy for task similarity. Experiments suggest that localization via grafting can assist certain forms of continual learning.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
345,470
1204.2420
Variational Principle underlying Scale Invariant Social Systems
MaxEnt's variational principle, in conjunction with Shannon's logarithmic information measure, yields only exponential functional forms in straightforward fashion. In this communication we show how to overcome this limitation via the incorporation, into the variational process, of suitable dynamical information. As a consequence, we are able to formulate a somewhat generalized Shannonian Maximum Entropy approach which provides a unifying "thermodynamic-like" explanation for the scale-invariant phenomena observed in social contexts, such as city-population distributions. We confirm the MaxEnt predictions by means of numerical experiments with random walkers, and compare them with some empirical data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
15,409
2404.03606
Analyzing Musical Characteristics of National Anthems in Relation to Global Indices
Music plays a huge part in shaping people's psychology and behavioral patterns. This paper investigates the connection between national anthems and different global indices with computational music analysis and statistical correlation analysis. We analyze national anthem musical data to determine whether certain musical characteristics are associated with peace, happiness, suicide rate, crime rate, etc. To achieve this, we collect national anthems from 169 countries and use computational music analysis techniques to extract pitch, tempo, beat, and other pertinent audio features. We then compare these musical characteristics with data on different global indices to ascertain whether a significant correlation exists. Our findings indicate that there may be a correlation between the musical characteristics of national anthems and the indices we investigated. The implications of our findings for music psychology and policymakers interested in promoting social well-being are discussed. This paper emphasizes the potential of musical data analysis in social research and offers a novel perspective on the relationship between music and social indices. The source code and data are made open-access for reproducibility and future research endeavors. It can be accessed at http://bit.ly/na_code.
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
444,324
2107.06750
Fast and Slow Enigmas and Parental Guidance
We describe several additions to the ENIGMA system that guides clause selection in the E automated theorem prover. First, we significantly speed up its neural guidance by adding server-based GPU evaluation. The second addition is motivated by fast weight-based rejection filters that are currently used in systems like E and Prover9. Such systems can be made more intelligent by instead training fast versions of ENIGMA that implement more intelligent pre-filtering. This results in combinations of trainable fast and slow thinking that improve over both the fast-only and slow-only methods. The third addition is based on "judging the children by their parents", i.e., possibly rejecting an inference before it produces a clause. This is motivated by standard evolutionary mechanisms, where there is always a cost to producing all possible offspring in the current population. This saves time by not evaluating all clauses by more expensive methods and provides a complementary view of the generated clauses. The methods are evaluated on a large benchmark coming from the Mizar Mathematical Library, showing good improvements over the state of the art.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
246,184
1507.01272
VEWS: A Wikipedia Vandal Early Warning System
We study the problem of flagging potential vandals on Wikipedia before any human or known vandalism detection system reports them, so that such users can be presented early to Wikipedia administrators. We leverage multiple classical ML approaches, but develop 3 novel sets of features. Our Wikipedia Vandal Behavior (WVB) approach uses a novel set of user editing patterns as features to classify some users as vandals. Our Wikipedia Transition Probability Matrix (WTPM) approach uses a set of features derived from a transition probability matrix and then reduces it via a neural net auto-encoder to classify some users as vandals. The VEWS approach merges the previous two approaches. Without using any information (e.g. reverts) provided by other users, these algorithms each have over 85% classification accuracy. Moreover, when temporal recency is considered, accuracy goes to almost 90%. We carry out detailed experiments on a new data set we have created consisting of about 33K Wikipedia users (including both a black list and a white list of editors) and containing 770K edits. We describe specific behaviors that distinguish between vandals and non-vandals. We show that VEWS beats ClueBot NG and STiki, the best known algorithms today for vandalism detection. Moreover, VEWS detects far more vandals than ClueBot NG and on average, detects them 2.39 edits before ClueBot NG when both detect the vandal. However, we show that the combination of VEWS and ClueBot NG can give a fully automated vandal early warning system with even higher accuracy.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
44,842
2111.07015
HydraGAN A Multi-head, Multi-objective Approach to Synthetic Data Generation
Synthetic data generation overcomes limitations of real-world machine learning. Traditional methods are valuable for augmenting costly datasets but only optimize one criterion: realism. In this paper, we tackle the problem of generating synthetic data that optimize multiple criteria. This goal is necessary when real data are replaced by synthetic data for privacy preservation. We introduce HydraGAN, a new approach to synthetic data generation that introduces multiple generator and discriminator agents into the system. The multi-agent GAN optimizes the goal of privacy-preservation as well as data realism. To facilitate multi-agent training, we adapt game-theoretic principles to offer equilibrium guarantees. We observe that HydraGAN outperforms baseline methods for three datasets on multiple criteria: maximizing data realism, maximizing model accuracy, and minimizing re-identification risk.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
266,247
2101.00628
On Secure Degrees of Freedom of the MIMO Interference Channel with Local Output Feedback
This paper studies the problem of sum-secure degrees of freedom (SDoF) of the (M,M,N,N) multiple-input multiple-output (MIMO) interference channel with local output feedback, so as to build an information-theoretic foundation and provide practical transmission schemes for 6G-enabled vehicle-to-vehicle (V2V) communications. For this problem, we propose two novel transmission schemes, i.e., the interference decoding scheme and the interference alignment scheme, and thus establish a sum-SDoF lower bound. In particular, to optimize the phase duration, we analyze the security and decoding constraints and formulate a linear-fractional optimization problem. Furthermore, we show that the derived sum-SDoF lower bound is the sum-SDoF for M <= N/2, N=M, and 2N <= M antenna configurations, and reveal that for a fixed N, the optimal M to maximize the sum-SDoF is not less than 2N. Through simulations, we examine the secure sum-rate performance of the proposed transmission schemes, and reveal that using local output feedback can lead to a higher secure sum-rate than that obtained by using delayed channel state information at the transmitter.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
214,147
2111.00438
Decentralized Multi-Agent Reinforcement Learning: An Off-Policy Method
We discuss the problem of decentralized multi-agent reinforcement learning (MARL) in this work. In our setting, the global state, action, and reward are assumed to be fully observable, while the local policy is kept private by each agent and thus cannot be shared with others. There is a communication graph over which the agents can exchange information with their neighbors. The agents make individual decisions and cooperate to reach a higher accumulated reward. Towards this end, we first propose a decentralized actor-critic (AC) setting. Then, the policy evaluation and policy improvement algorithms are designed for discrete and continuous state-action-space Markov Decision Process (MDP) respectively. Furthermore, convergence analysis is given under the discrete-space case, which guarantees that the policy will be reinforced by alternating between the processes of policy evaluation and policy improvement. In order to validate the effectiveness of the algorithms, we design experiments and compare them with previous algorithms, e.g., Q-learning \cite{watkins1992q} and MADDPG \cite{lowe2017multi}. The results show that our algorithms perform better from the aspects of both learning speed and final performance. Moreover, the algorithms can be executed in an off-policy manner, which greatly improves the data efficiency compared with on-policy algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
264,220
2109.01942
On the ability of monolingual models to learn language-agnostic representations
Pretrained multilingual models have become the de facto default approach for zero-shot cross-lingual transfer. Previous work has shown that these models are able to achieve cross-lingual representations when pretrained on two or more languages with shared parameters. In this work, we provide evidence that a model can achieve language-agnostic representations even when pretrained on a single language. That is, we find that monolingual models pretrained and finetuned on different languages achieve competitive performance compared to the ones that use the same target language. Surprisingly, the models show similar performance on the same task regardless of the pretraining language. For example, models pretrained on distant languages such as German and Portuguese perform similarly on English tasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
253,585
1910.11986
Compensation of Charging Station Overload via On-road Mobile Energy Storage Scheduling
Supported by the technical development of electric batteries and charging facilities, the plug-in electric vehicle (PEV) has the potential to serve as mobile energy storage (MES) for energy delivery from resourceful charging stations (RCSs) to limited-capacity charging stations (LCSs). In this paper, we study the problem of using on-road PEVs as MESs for energy compensation service to compensate charging station (CS) overload. A price-incentive scheme is proposed for the power system operator (PSO) to stimulate on-road MESs to fulfill energy compensation tasks. The price-service interaction between the PSO and MESs is characterized as a one-leader, multiple-follower Stackelberg game. The PSO acts as a leader to schedule on-road MESs by posting a service price, and on-road MESs respond to the price by choosing their service amount. The existence and uniqueness of the Stackelberg equilibrium are validated, and an algorithm is developed to find the equilibrium. Simulation results show the effectiveness of the proposed scheme in utility optimization and overload mitigation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
150,928
2302.08669
Learning to Forecast Aleatoric and Epistemic Uncertainties over Long Horizon Trajectories
Giving autonomous agents the ability to forecast their own outcomes and uncertainty will allow them to communicate their competencies and be used more safely. We accomplish this by using a learned world model of the agent system to forecast full agent trajectories over long time horizons. Real world systems involve significant sources of both aleatoric and epistemic uncertainty that compound and interact over time in the trajectory forecasts. We develop a deep generative world model that quantifies aleatoric uncertainty while incorporating the effects of epistemic uncertainty during the learning process. We show on two reinforcement learning problems that our uncertainty model produces calibrated outcome uncertainty estimates over the full trajectory horizon.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
346,131
1907.05496
Online Learning to Estimate Warfarin Dose with Contextual Linear Bandits
Warfarin is one of the most commonly used oral blood anticoagulant agents in the world. The proper dose of Warfarin is difficult to establish, not only because it varies substantially among patients, but also because of the adverse, even severe, consequences of taking an incorrect dose. Typical practice is to prescribe an initial dose, after which the doctor closely monitors the patient's response and adjusts toward the correct dosage. The three commonly used strategies for an initial dosage are the fixed-dose approach, the Warfarin Clinical algorithm, and the Pharmacogenetic algorithm developed by the IWPC (International Warfarin Pharmacogenetics Consortium). It is always best to prescribe the correct initial dosage; motivated by this challenge, this work explores the performance of multi-armed bandit algorithms for predicting the correct dosage of Warfarin instead of relying on a trial-and-error procedure. Real data from the Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB) are used, and with them a series of linear bandit algorithms and variants are developed and evaluated on the Warfarin dataset. All proposed algorithms outperformed the fixed-dose baseline algorithm, and some even matched the Warfarin Clinical Dosing Algorithm. In addition, a few promising future directions are given for further exploration and development.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
138,378
2408.00293
Gradient Flow Decoding
This paper presents the Gradient Flow (GF) decoding for LDPC codes. GF decoding, a continuous-time methodology based on gradient flow, employs a potential energy function associated with bipolar codewords of LDPC codes. The decoding process of the GF decoding is concisely defined by an ordinary differential equation and thus it is well suited to an analog circuit implementation. We experimentally demonstrate that the decoding performance of the GF decoding for AWGN channels is comparable to that of the multi-bit mode gradient descent bit flipping algorithm. We further introduce the negative log-likelihood function of the channel for generalizing the GF decoding. The proposed method is shown to be tensor-computable, which means that the gradient of the objective function can be evaluated with the combination of basic tensor computations. This characteristic is well-suited to emerging AI accelerators, potentially applicable in wireless signal processing. The paper assesses the decoding performance of the generalized GF decoding in LDPC-coded MIMO channels. Our numerical experiments reveal that the decoding performance rivals that of established techniques like MMSE + BP. Furthermore, an exploration of score-based channel learning for capturing statistical properties is also provided.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
477,777
1906.08864
Accurate and Energy-Efficient Classification with Spiking Random Neural Network: Corrected and Expanded Version
Artificial Neural Network (ANN) based techniques have dominated state-of-the-art results in most problems related to computer vision, audio recognition, and natural language processing in the past few years, resulting in strong industrial adoption from all leading technology companies worldwide. One of the major obstacles that have historically delayed large-scale adoption of ANNs is the huge computational and power costs associated with training and testing (deploying) them. In the meantime, Neuromorphic Computing platforms have recently achieved remarkable performance running more bio-realistic Spiking Neural Networks at high throughput and very low power consumption, making them a natural alternative to ANNs. Here, we propose using the Random Neural Network (RNN), a spiking neural network with appealing theoretical and practical properties, as a general purpose classifier that can match the classification power of ANNs on a number of tasks while enjoying all the features of a spiking neural network. This is demonstrated on a number of real-world classification datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
135,990
2109.12935
Time Series Model Attribution Visualizations as Explanations
Attributions are a common local explanation technique for deep learning models on single samples as they are easily extractable and demonstrate the relevance of input values. In many cases, heatmaps visualize such attributions for samples, for instance, on images. However, heatmaps are not always the ideal visualization to explain certain model decisions for other data types. In this review, we focus on attribution visualizations for time series. We collect attribution heatmap visualizations and some alternatives, discuss the advantages as well as disadvantages and give a short position towards future opportunities for attributions and explanations for time series.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
257,465
2402.05000
Pedagogical Alignment of Large Language Models
Large Language Models (LLMs), when used in educational settings without pedagogical fine-tuning, often provide immediate answers rather than guiding students through the problem-solving process. This approach falls short of pedagogically best practices and limits their effectiveness as educational tools. We term the objective of training LLMs to emulate effective teaching strategies as `pedagogical alignment.' In this paper, we investigate Learning from Human Preferences (LHP) algorithms to achieve this alignment objective. A key challenge in this process is the scarcity of high-quality preference datasets to guide the alignment. To address this, we propose a novel approach for constructing a large-scale dataset using synthetic data generation techniques, eliminating the need for time-consuming and costly manual annotation. Leveraging this dataset, our experiments with Llama and Mistral models demonstrate that LHP methods outperform standard supervised fine-tuning (SFT), improving pedagogical alignment accuracy by 13.1% and 8.7% respectively. Existing evaluation methods also lack quantitative metrics to adequately measure the pedagogical alignment of LLMs. To address this gap, we propose novel perplexity-based metrics that quantify LLMs' tendency to provide scaffolded guidance versus direct answers, offering a robust measure of pedagogical alignment. Our analysis provides compelling evidence for the superiority of LHP methods over SFT in optimizing LLMs' behavior, underscoring the potential of LHP methods in better aligning LLMs with educational objectives and fostering effective learning experiences. Code and models are available \href{https://github.com/luffycodes/Tutorbot-Spock}{here}.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
427,678
2205.07463
Gradient Descent Optimizes Infinite-Depth ReLU Implicit Networks with Linear Widths
Implicit deep learning has recently become popular in the machine learning community since these implicit models can achieve competitive performance with state-of-the-art deep networks while using significantly less memory and computational resources. However, our theoretical understanding of when and how first-order methods such as gradient descent (GD) converge on \textit{nonlinear} implicit networks is limited. Although this type of problem has been studied in standard feed-forward networks, the case of implicit models is still intriguing because implicit networks have \textit{infinitely} many layers. The corresponding equilibrium equation may admit no solution, or multiple solutions, during training. This paper studies the convergence of both gradient flow (GF) and gradient descent for nonlinear ReLU activated implicit networks. To deal with the well-posedness problem, we introduce a fixed scalar to scale the weight matrix of the implicit layer and show that there exists a small enough scaling constant, keeping the equilibrium equation well-posed throughout training. As a result, we prove that both GF and GD converge to a global minimum at a linear rate if the width $m$ of the implicit network is \textit{linear} in the sample size $N$, i.e., $m=\Omega(N)$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
296,611
1812.00914
Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling
Knowledge distillation is an effective technique that transfers knowledge from a large teacher model to a shallow student. However, just like massive classification, large-scale knowledge distillation also imposes heavy computational costs on training models of deep neural networks, as the softmax activations at the last layer involve computing probabilities over numerous classes. In this work, we apply the idea of importance sampling, which is often used in Neural Machine Translation, to large-scale knowledge distillation. We present a method called dynamic importance sampling, where ranked classes are sampled from a dynamic distribution derived from the interaction between the teacher and student in full distillation. We highlight the utility of our proposal prior, which helps the student capture the main information in the loss function. Our approach manages to reduce the computational cost at training time while maintaining competitive performance on the CIFAR-100 and Market-1501 person re-identification datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
115,386
1608.05138
Hybrid CPU-GPU Framework for Network Motifs
Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also being more cost-effective, enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than the recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
59,931
1707.01047
Robust Optimization for Non-Convex Objectives
We consider robust optimization problems, where the goal is to optimize in the worst case over a class of objective functions. We develop a reduction from robust improper optimization to Bayesian optimization: given an oracle that returns $\alpha$-approximate solutions for distributions over objectives, we compute a distribution over solutions that is $\alpha$-approximate in the worst case. We show that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks. We apply our results to robust neural network training and submodular optimization. We evaluate our approach experimentally on corrupted character classification, and robust influence maximization in networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
76,459