Dataset schema (column: type, value range):
  id: string, length 9 to 16
  title: string, length 4 to 278
  abstract: string, length 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (one binary label column per category, listed in column order)
  __index_level_0__: int64, 0 to 541k

Each sample record below is shown as: id, title, abstract, a "labels (true)" line listing the label columns set to true (all other label columns are false), and __index_level_0__.
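A convenient way to work with rows in this layout is to collapse the eighteen boolean label columns into the list of category names that are set to true, which is also how the "labels (true)" lines in the sample records below summarize each row. The following pandas sketch illustrates that step; the parquet file name and the read_parquet call are placeholders for however the rows are actually stored, and only the column names are taken from the schema above.

import pandas as pd

# Label columns in the order given by the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def true_labels(row: pd.Series) -> list[str]:
    """Return the names of the label columns set to True for one record."""
    return [c for c in LABEL_COLUMNS if bool(row[c])]

# Hypothetical storage; substitute the actual file or dataset loader.
df = pd.read_parquet("arxiv_multilabel_sample.parquet")
df["labels"] = df.apply(true_labels, axis=1)
print(df[["id", "title", "labels", "__index_level_0__"]].head())

When the abstracts are not needed, pd.read_parquet also accepts a columns= argument, so only the id and label columns have to be read.
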
1409.2578
Feedback Control of Switched Stochastic Systems Using Randomly Available Active Mode Information
Almost sure asymptotic stabilization of a discrete-time switched stochastic system is investigated. Information on the active operation mode of the switched system is assumed to be available for control purposes only at random time instants. We propose a stabilizing feedback control framework that utilizes the information obtained through mode observations. We first consider the case where stochastic properties of mode observation instants are fully known. We obtain sufficient asymptotic stabilization conditions for the closed-loop switched stochastic system under our proposed control law. We then explore the case where exact knowledge of the stochastic properties of mode observation instants is not available. We present a set of alternative stabilization conditions for this case. The results for both cases are predicated on the analysis of a sequence-valued process that encapsulates the stochastic nature of the evolution of active operation mode between mode observation instants. Finally, we demonstrate the efficacy of our results with numerical examples.
labels (true): cs.SY
__index_level_0__: 35,919

0901.2396
Joint Source-Channel Coding at the Application Layer for Parallel Gaussian Sources
In this paper, the multicasting of independent parallel Gaussian sources over a binary erasure broadcast channel is considered. Multiresolution embedded quantizer and layered joint source-channel coding schemes are used in order to simultaneously serve several users at different channel capacities. The convex nature of the rate-distortion function, computed by means of reverse water-filling, allows us to solve relevant convex optimization problems corresponding to different performance criteria. Then, layered joint source-channel codes are constructed based on the concatenation of embedded scalar quantizers with binary rateless encoders.
labels (true): cs.IT
__index_level_0__: 2,988

2011.06819
diagNNose: A Library for Neural Activation Analysis
In this paper we introduce diagNNose, an open source library for analysing the activations of deep neural networks. diagNNose contains a wide array of interpretability techniques that provide fundamental insights into the inner workings of neural networks. We demonstrate the functionality of diagNNose with a case study on subject-verb agreement within language models. diagNNose is available at https://github.com/i-machine-think/diagnnose.
labels (true): cs.LG, cs.CL
__index_level_0__: 206,353

1405.1472
An Exploration of the Role of Principal Inertia Components in Information Theory
The principal inertia components of the joint distribution of two random variables $X$ and $Y$ are inherently connected to how an observation of $Y$ is statistically related to a hidden variable $X$. In this paper, we explore this connection within an information theoretic framework. We show that, under certain symmetry conditions, the principal inertia components play an important role in estimating one-bit functions of $X$, namely $f(X)$, given an observation of $Y$. In particular, the principal inertia components bear an interpretation as filter coefficients in the linear transformation of $p_{f(X)|X}$ into $p_{f(X)|Y}$. This interpretation naturally leads to the conjecture that the mutual information between $f(X)$ and $Y$ is maximized when all the principal inertia components have equal value. We also study the role of the principal inertia components in the Markov chain $B\rightarrow X\rightarrow Y\rightarrow \widehat{B}$, where $B$ and $\widehat{B}$ are binary random variables. We illustrate our results for the setting where $X$ and $Y$ are binary strings and $Y$ is the result of sending $X$ through an additive noise binary channel.
labels (true): cs.IT
__index_level_0__: 32,882

1103.1559
Minimum Pseudoweight Analysis of 3-Dimensional Turbo Codes
In this work, we consider pseudocodewords of (relaxed) linear programming (LP) decoding of 3-dimensional turbo codes (3D-TCs). We present a relaxed LP decoder for 3D-TCs, adapting the relaxed LP decoder for conventional turbo codes proposed by Feldman in his thesis. We show that the 3D-TC polytope is proper and $C$-symmetric, and make a connection to finite graph covers of the 3D-TC factor graph. This connection is used to show that the support set of any pseudocodeword is a stopping set of iterative decoding of 3D-TCs using maximum a posteriori constituent decoders on the binary erasure channel. Furthermore, we compute ensemble-average pseudoweight enumerators of 3D-TCs and perform a finite-length minimum pseudoweight analysis for small cover degrees. Also, an explicit description of the fundamental cone of the 3D-TC polytope is given. Finally, we present an extensive numerical study of small-to-medium block length 3D-TCs, which shows that 1) typically (i.e., in most cases) when the minimum distance $d_{\rm min}$ and/or the stopping distance $h_{\rm min}$ is high, the minimum pseudoweight (on the additive white Gaussian noise channel) is strictly smaller than both the $d_{\rm min}$ and the $h_{\rm min}$, and 2) the minimum pseudoweight grows with the block length, at least for small-to-medium block lengths.
labels (true): cs.IT
__index_level_0__: 9,529

2311.10736
Systematic Evaluation of Applying Space-Filling Curves to Automotive Maneuver Detection
Identifying driving maneuvers plays an essential role on-board vehicles to monitor driving and driver states, as well as off-board to train and evaluate machine learning algorithms for automated driving, for example. Maneuvers can be characterized by vehicle kinematics or data from the vehicle's surroundings, including other traffic participants. Extracting relevant maneuvers therefore requires analyzing time-series of (i) structured, multi-dimensional kinematic data, and (ii) unstructured, large data samples from video, radar, or LiDAR sensors. However, such data analysis requires scalable and computationally efficient approaches, especially for non-annotated data. In this paper, we present a maneuver detection approach based on two variants of space-filling curves (Z-order and Hilbert) that detects maneuvers when passing roundabouts without using GPS data. We systematically evaluate their respective performance by including permutations of selections of kinematic signals at varying frequencies and compare them with two alternative baselines: all manually identified roundabouts, and roundabouts that are marked by geofences. We find that encoding just longitudinal and lateral accelerations sampled at 10 Hz using a Hilbert space-filling curve already successfully identifies roundabout maneuvers, which avoids the use of potentially sensitive signals such as GPS locations and thereby helps comply with data protection and privacy regulations like GDPR.
labels (true): cs.RO
__index_level_0__: 408,625

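As background for the space-filling-curve encoding described in the abstract above, the sketch below shows a generic Z-order (Morton) encoding that maps two quantized acceleration channels to a single scalar per sample. It only illustrates the general technique: the 16-bit quantization, the +/-10 m/s^2 range, and the function names are assumptions made for the example, not details taken from the paper.

def quantize(value: float, lo: float, hi: float, bits: int = 16) -> int:
    """Clip value to [lo, hi] and map it to an unsigned integer of the given width."""
    clipped = min(max(value, lo), hi)
    return round((clipped - lo) / (hi - lo) * ((1 << bits) - 1))

def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (x on even positions, y on odd positions)."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def z_order_sample(acc_lon: float, acc_lat: float) -> int:
    """Encode one pair of acceleration samples as a single Morton code."""
    return interleave_bits(quantize(acc_lon, -10.0, 10.0),
                           quantize(acc_lat, -10.0, 10.0))

Applying z_order_sample to every time step turns the two-channel signal into a one-dimensional sequence whose patterns can be compared across maneuvers; a Hilbert-curve index could be substituted for the Morton code without changing the surrounding pipeline.
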
1602.00904
Comparative evaluation of state-of-the-art algorithms for SSVEP-based BCIs
Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuro-muscular disabilities. Among the existing solutions, the systems relying on electroencephalograms (EEG) occupy the most prominent place due to their non-invasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this report, we focus on the category of EEG-based BCIs that rely on Steady-State-Visual-Evoked Potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms existing in the literature. More specifically, we define a set of algorithms for each of the various different parameters composing a BCI system (i.e. filtering, artifact removal, feature extraction, feature selection and classification) and study each parameter independently by keeping all other parameters fixed. The results obtained from this evaluation process are provided together with a dataset consisting of the 256-channel EEG signals of 11 subjects, as well as a processing toolbox for reproducing the results and supporting further experimentation. In this way, we manage to make available for the community a state-of-the-art baseline for SSVEP-based BCIs that can be used as a basis for introducing novel methods and approaches.
labels (true): cs.HC, cs.CV
__index_level_0__: 51,632

2306.10728
AdaSelection: Accelerating Deep Learning Training through Data Subsampling
In this paper, we introduce AdaSelection, an adaptive sub-sampling method to identify the most informative sub-samples within each minibatch to speed up the training of large-scale deep learning models without sacrificing model performance. Our method is able to flexibly combine an arbitrary number of baseline sub-sampling methods, incorporating the method-level importance and intra-method sample-level importance at each iteration. The standard practice of ad-hoc sampling often leads to continuous training with vast amounts of data from production environments. To improve the selection of data instances during forward and backward passes, we propose recording a constant amount of information per instance from these passes. We demonstrate the effectiveness of our method by testing it across various types of inputs and tasks, including the classification tasks on both image and language datasets, as well as regression tasks. Compared with industry-standard baselines, AdaSelection consistently displays superior performance.
labels (true): cs.LG
__index_level_0__: 374,340

1611.01957
Linear Convergence of SVRG in Statistical Estimation
SVRG and its variants are among the state-of-the-art optimization algorithms for large-scale machine learning problems. It is well known that SVRG converges linearly when the objective function is strongly convex. However, this setup can be restrictive, and does not include several important formulations such as Lasso, group Lasso, logistic regression, and some non-convex models including corrected Lasso and SCAD. In this paper, we prove that, for a class of statistical M-estimators covering the examples mentioned above, SVRG solves the formulation with {\em a linear convergence rate} without strong convexity or even convexity. Our analysis makes use of {\em restricted strong convexity}, under which we show that SVRG converges linearly to the fundamental statistical precision of the model, i.e., the difference between the true unknown parameter $\theta^*$ and the optimal solution $\hat{\theta}$ of the model.
labels (true): cs.LG
__index_level_0__: 63,474

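For reference, the sketch below is a textbook SVRG loop for a smooth least-squares objective, included only to illustrate the variance-reduced update that the abstract above analyzes. The objective, step size, and epoch lengths are illustrative choices; nothing here is taken from the paper's code or its M-estimator setting.

import numpy as np

def svrg_least_squares(A, b, x0, step=0.01, epochs=20, inner=None, seed=0):
    """Minimize f(x) = (1/2n) * ||Ax - b||^2 with the standard SVRG update."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    inner = inner or 2 * n
    x = x0.copy()
    for _ in range(epochs):
        x_ref = x.copy()                               # snapshot point
        full_grad = A.T @ (A @ x_ref - b) / n          # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ x - b[i])             # stochastic gradient at x
            g_ref = A[i] * (A[i] @ x_ref - b[i])       # same sample at the snapshot
            x = x - step * (g_i - g_ref + full_grad)   # variance-reduced step
    return x
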
2302.04562
NLP-based Decision Support System for Examination of Eligibility Criteria from Securities Prospectuses at the German Central Bank
As part of its digitization initiative, the German Central Bank (Deutsche Bundesbank) wants to examine the extent to which Natural Language Processing (NLP) can be used to make independent decisions upon the eligibility criteria of securities prospectuses. Every month, the Directorate General Markets at the German Central Bank receives hundreds of scanned prospectuses in PDF format, which must be manually processed to decide upon their eligibility. We found that this tedious and time-consuming process can be (semi-)automated by employing modern NLP model architectures, which learn the linguistic feature representation in text to identify the present eligible and ineligible criteria. The proposed Decision Support System provides decisions of document-level eligibility criteria accompanied by human-understandable explanations of the decisions. The aim of this project is to model the described use case and to evaluate the extent to which current research results from the field of NLP can be applied to this problem. After creating a heterogeneous domain-specific dataset containing annotations of eligible and non-eligible mentions of relevant criteria, we were able to successfully build, train and deploy a semi-automatic decider model. This model is based on transformer-based language models and decision trees, which integrate the established rule-based parts of the decision processes. Results suggest that it is possible to efficiently model the problem and automate decision making to more than 90% for many of the considered eligibility criteria.
labels (true): cs.AI, cs.CL
__index_level_0__: 344,750

2309.16055
Identifying Risk Factors for Post-COVID-19 Mental Health Disorders: A Machine Learning Perspective
In this study, we leveraged machine learning techniques to identify risk factors associated with post-COVID-19 mental health disorders. Our analysis, based on data collected from 669 patients across various provinces in Iraq, yielded valuable insights. We found that age, gender, and geographical region of residence were significant demographic factors influencing the likelihood of developing mental health disorders in post-COVID-19 patients. Additionally, comorbidities and the severity of COVID-19 illness were important clinical predictors. Psychosocial factors, such as social support, coping strategies, and perceived stress levels, also played a substantial role. Our findings emphasize the complex interplay of multiple factors in the development of mental health disorders following COVID-19 recovery. Healthcare providers and policymakers should consider these risk factors when designing targeted interventions and support systems for individuals at risk. Machine learning-based approaches can provide a valuable tool for predicting and preventing adverse mental health outcomes in post-COVID-19 patients. Further research and prospective studies are needed to validate these findings and enhance our understanding of the long-term psychological impact of the COVID-19 pandemic. This study contributes to the growing body of knowledge regarding the mental health consequences of the COVID-19 pandemic and underscores the importance of a multidisciplinary approach to address the diverse needs of individuals on the path to recovery. Keywords: COVID-19, mental health, risk factors, machine learning, Iraq
labels (true): cs.LG, cs.CY
__index_level_0__: 395,198

2408.16442
Integrating Features for Recognizing Human Activities through Optimized Parameters in Graph Convolutional Networks and Transformer Architectures
Human activity recognition is a major field of study that employs computer vision, machine vision, and deep learning techniques to categorize human actions. The field of deep learning has made significant progress, with architectures that are extremely effective at capturing human dynamics. This study emphasizes the influence of feature fusion on the accuracy of activity recognition. This technique addresses the limitation of conventional models, which face difficulties in identifying activities because of their limited capacity to understand spatial and temporal features. The technique employs sensory data obtained from four publicly available datasets: HuGaDB, PKU-MMD, LARa, and TUG. The accuracy and F1-score of two deep learning models, specifically a Transformer model and a Parameter-Optimized Graph Convolutional Network (PO-GCN), were evaluated using these datasets. The feature fusion technique integrated the final layer features from both models and inputted them into a classifier. Empirical evidence demonstrates that PO-GCN outperforms standard models in activity recognition. HuGaDB demonstrated a 2.3% improvement in accuracy and a 2.2% increase in F1-score. TUG showed a 5% increase in accuracy and a 0.5% rise in F1-score. On the other hand, LARa and PKU-MMD achieved lower accuracies of 64% and 69% respectively. This indicates that the integration of features enhanced the performance of both the Transformer model and PO-GCN.
labels (true): cs.AI, cs.RO, cs.CV
__index_level_0__: 484,331

2304.08492
STRAP: Structured Object Affordance Segmentation with Point Supervision
With significant annotation savings, point supervision has been proven effective for numerous 2D and 3D scene understanding problems. This success is primarily attributed to the structured output space; i.e., samples with high spatial affinity tend to share the same labels. Sharing this spirit, we study affordance segmentation with point supervision, wherein the setting inherits an unexplored dual affinity: spatial affinity and label affinity. By label affinity, we refer to affordance segmentation as a multi-label prediction problem: A plate can be both holdable and containable. By spatial affinity, we refer to a universal prior that nearby pixels with similar visual features should share the same point annotation. To tackle label affinity, we devise a dense prediction network that enhances label relations by effectively densifying labels in a new domain (i.e., label co-occurrence). To address spatial affinity, we exploit a Transformer backbone for global patch interaction and a regularization loss. In experiments, we benchmark our method on the challenging CAD120 dataset, showing significant performance gains over prior methods.
labels (true): cs.CV
__index_level_0__: 358,734

2012.08216
FMODetect: Robust Detection of Fast Moving Objects
We propose the first learning-based approach for fast moving object detection. Such objects are highly blurred and move over large distances within one video frame. Fast moving objects are associated with a deblurring and matting problem, also called deblatting. We show that the separation of deblatting into consecutive matting and deblurring allows achieving real-time performance, i.e. an order of magnitude speed-up, and thus enabling new classes of applications. The proposed method detects fast moving objects as a truncated distance function to the trajectory by learning from synthetic data. For the sharp appearance estimation and accurate trajectory estimation, we propose a matting and fitting network that estimates the blurred appearance without background, followed by an energy minimization based deblurring. The state-of-the-art methods are outperformed in terms of recall, precision, trajectory estimation, and sharp appearance reconstruction. Compared to other methods, such as deblatting, the inference is several orders of magnitude faster and allows applications such as real-time fast moving object detection and retrieval in large video collections.
labels (true): cs.CV
__index_level_0__: 211,706

2006.01561
Studying The Effect of MIL Pooling Filters on MIL Tasks
There are different multiple instance learning (MIL) pooling filters used in MIL models. In this paper, we study the effect of different MIL pooling filters on the performance of MIL models in real world MIL tasks. We designed a neural network based MIL framework with 5 different MIL pooling filters: `max', `mean', `attention', `distribution' and `distribution with attention'. We also formulated 5 different MIL tasks on a real world lymph node metastases dataset. We found that the performance of our framework in a task is different for different filters. We also observed that the performances of the five pooling filters differ from task to task. Hence, the selection of a correct MIL pooling filter for each MIL task is crucial for better performance. Furthermore, we noticed that models with `distribution' and `distribution with attention' pooling filters consistently perform well in almost all of the tasks. We attribute this phenomenon to the amount of information captured by `distribution' based pooling filters. While point estimate based pooling filters, like `max' and `mean', produce point estimates of distributions, `distribution' based pooling filters capture the full information in distributions. Lastly, we compared the performance of our neural network model with `distribution' pooling filter with the performance of the best MIL methods in the literature on classical MIL datasets and our model outperformed the others.
labels (true): cs.LG, cs.CV
__index_level_0__: 179,809

1711.10394
Exposing Computer Generated Images by Using Deep Convolutional Neural Networks
Recent computer graphics developments have raised the quality of the generated digital content, astonishing the most skeptical viewer. Games and movies have taken advantage of this fact but, at the same time, these advances have brought serious negative impacts like the ones yielded by fake images produced with malicious intent. Digital artists can compose artificial images capable of deceiving the great majority of people, turning this into a very dangerous weapon in a timespan currently known as the "Fake News/Post-Truth" Era. In this work, we propose a new approach for dealing with the problem of detecting computer generated images, through the application of deep convolutional networks and transfer learning techniques. We start from Residual Networks and develop different models adapted to the binary problem of identifying whether or not an image was computer generated. Differently from the current state-of-the-art approaches, we don't rely on hand-crafted features, but provide the model with the raw pixel information, achieving the same 0.97 as state-of-the-art methods with two main advantages: our methods show more stable results (reflected by lower variance) and eliminate the laborious and manual step of specialized feature extraction and selection.
labels (true): cs.CV
__index_level_0__: 85,591

2110.12064
Causal Effect Identification with Context-specific Independence Relations of Control Variables
We study the problem of causal effect identification from the observational distribution given the causal graph and some context-specific independence (CSI) relations. It was recently shown that this problem is NP-hard, and while a sound algorithm to learn the causal effects is proposed in Tikka et al. (2019), no complete algorithm for the task exists. In this work, we propose a sound and complete algorithm for the setting when the CSI relations are limited to observed nodes with no parents in the causal graph. One limitation of the state of the art in terms of its applicability is that the CSI relations among all variables, even unobserved ones, must be given (as opposed to learned). Instead, we introduce a set of graphical constraints under which the CSI relations can be learned from the observational distribution alone. This expands the set of identifiable causal effects beyond the state of the art.
labels (true): cs.LG
__index_level_0__: 262,696

2403.10099
KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation
In this paper, we present KP-RED, a unified KeyPoint-driven REtrieval and Deformation framework that takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models from a pre-processed database to tightly match the target. Unlike existing dense matching based methods that typically struggle with noisy partial scans, we propose to leverage category-consistent sparse keypoints to naturally handle both full and partial object scans. Specifically, we first employ a lightweight retrieval module to establish a keypoint-based embedding space, measuring the similarity among objects by dynamically aggregating deformation-aware local-global features around extracted keypoints. Objects that are close in the embedding space are considered similar in geometry. Then we introduce the neural cage-based deformation module that estimates the influence vector of each keypoint upon cage vertices inside its local support region to control the deformation of the retrieved shape. Extensive experiments on the synthetic dataset PartNet and the real-world dataset Scan2CAD demonstrate that KP-RED surpasses existing state-of-the-art approaches by a large margin. Codes and trained models are released on https://github.com/lolrudy/KP-RED.
labels (true): cs.CV
__index_level_0__: 438,054

2206.00501
Benign Overfitting in Classification: Provably Counter Label Noise with Larger Models
Studies on benign overfitting provide insights for the success of overparameterized deep learning models. In this work, we examine whether overfitting is truly benign in real-world classification tasks. We start with the observation that a ResNet model overfits benignly on Cifar10 but not benignly on ImageNet. To understand why benign overfitting fails in the ImageNet experiment, we theoretically analyze benign overfitting under a more restrictive setup where the number of parameters is not significantly larger than the number of data points. Under this mild overparameterization setup, our analysis identifies a phase change: unlike in the previous heavy overparameterization settings, benign overfitting can now fail in the presence of label noise. Our analysis explains our empirical observations, and is validated by a set of control experiments with ResNets. Our work highlights the importance of understanding implicit bias in underfitting regimes as a future direction.
labels (true): cs.AI, cs.LG
__index_level_0__: 300,146

2207.07331
Modeling Multi-interest News Sequence for News Recommendation
A session-based news recommender system recommends the next news to a user by modeling the potential interests embedded in a sequence of news read/clicked by her/him in a session. Generally, a user's interests are diverse, namely there are multiple interests corresponding to different types of news, e.g., news of distinct topics, within a session. Modeling such multiple interests is critical for precise news recommendation. However, most existing methods typically overlook this important characteristic and thus fail to distinguish and model the potential multiple interests of a user, impeding accurate recommendation of the next piece of news. Therefore, this paper proposes a multi-interest news sequence (MINS) model for news recommendation. In MINS, a news encoder based on self-attention is devised to learn an informative embedding for each piece of news, and then a novel parallel interest network is devised to extract the potential multiple interests embedded in the news sequence in preparation for the subsequent next-news recommendations. The experimental results on a real-world dataset demonstrate that our model can achieve better performance than the compared state-of-the-art models.
labels (true): cs.AI, cs.IR
__index_level_0__: 308,176

2502.00529
Graph Data Management and Graph Machine Learning: Synergies and Opportunities
The ubiquity of machine learning, particularly deep learning, applied to graphs is evident in applications ranging from cheminformatics (drug discovery) and bioinformatics (protein interaction prediction) to knowledge graph-based query answering, fraud detection, and social network analysis. Concurrently, graph data management deals with the research and development of effective, efficient, scalable, robust, and user-friendly systems and algorithms for storing, processing, and analyzing vast quantities of heterogeneous and complex graph data. Our survey provides a comprehensive overview of the synergies between graph data management and graph machine learning, illustrating how they intertwine and mutually reinforce each other across the entire spectrum of the graph data science and machine learning pipeline. Specifically, the survey highlights two crucial aspects: (1) How graph data management enhances graph machine learning, including contributions such as improved graph neural network performance through graph data cleaning, scalable graph embedding, efficient graph-based vector data management, robust graph neural networks, user-friendly explainability methods; and (2) how graph machine learning, in turn, aids in graph data management, with a focus on applications like query answering over knowledge graphs and various data science tasks. We discuss pertinent open problems and delineate crucial research directions.
labels (true): cs.DB
__index_level_0__: 529,422

2301.13088
Stationary Kernels and Gaussian Processes on Lie Groups and their Homogeneous Spaces II: non-compact symmetric spaces
Gaussian processes are arguably the most important class of spatiotemporal models within machine learning. They encode prior information about the modeled function and can be used for exact or approximate Bayesian learning. In many applications, particularly in physical sciences and engineering, but also in areas such as geostatistics and neuroscience, invariance to symmetries is one of the most fundamental forms of prior information one can consider. The invariance of a Gaussian process' covariance to such symmetries gives rise to the most natural generalization of the concept of stationarity to such spaces. In this work, we develop constructive and practical techniques for building stationary Gaussian processes on a very large class of non-Euclidean spaces arising in the context of symmetries. Our techniques make it possible to (i) calculate covariance kernels and (ii) sample from prior and posterior Gaussian processes defined on such spaces, both in a practical manner. This work is split into two parts, each involving different technical considerations: part I studies compact spaces, while part II studies non-compact spaces possessing certain structure. Our contributions make the non-Euclidean Gaussian process models we study compatible with well-understood computational techniques available in standard Gaussian process software packages, thereby making them accessible to practitioners.
labels (true): cs.LG
__index_level_0__: 342,775

1205.2628
Multiple Source Adaptation and the Renyi Divergence
This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Renyi divergence. Our results build on our previous work [12], but significantly broaden the scope of that work in several directions. We extend previous multiple source loss guarantees based on distribution weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions, analyze both known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to deal with the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains are somewhat different. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the performance benefits of the distribution weighted combinations and the quality of our bounds based on the Renyi divergence.
labels (true): cs.LG
__index_level_0__: 15,935

2008.13191
Caching Transient Content for IoT Sensing: Multi-Agent Soft Actor-Critic
Edge nodes (ENs) in Internet of Things commonly serve as gateways to cache sensing data while providing accessing services for data consumers. This paper considers multiple ENs that cache sensing data under the coordination of the cloud. Particularly, each EN can fetch content generated by sensors within its coverage, which can be uploaded to the cloud via fronthaul and then be delivered to other ENs beyond the communication range. However, sensing data are usually transient with time whereas frequent cache updates could lead to considerable energy consumption at sensors and fronthaul traffic loads. Therefore, we adopt age of information to evaluate data freshness and investigate intelligent caching policies to preserve data freshness while reducing cache update costs. Specifically, we model the cache update problem as a cooperative multi-agent Markov decision process with the goal of minimizing the long-term average weighted cost. To efficiently handle the exponentially large number of actions, we devise a novel reinforcement learning approach, which is a discrete multi-agent variant of soft actor-critic (SAC). Furthermore, we generalize the proposed approach into a decentralized control, where each EN can make decisions based on local observations only. Simulation results demonstrate the superior performance of the proposed SAC-based caching schemes.
labels (true): cs.IT
__index_level_0__: 193,786

1903.08228
How to Make Swarms Open-Ended? Evolving Collective Intelligence Through a Constricted Exploration of Adjacent Possibles
We propose an approach of open-ended evolution via the simulation of swarm dynamics. In nature, swarms possess remarkable properties, which allow many organisms, from swarming bacteria to ants and flocking birds, to form higher-order structures that enhance their behavior as a group. Swarm simulations highlight three important factors to create novelty and diversity: (a) communication generates combinatorial cooperative dynamics, (b) concurrency allows for separation of timescales, and (c) complexity and size increases push the system towards transitions in innovation. We illustrate these three components in a model computing the continuous evolution of a swarm of agents. The results, divided in three distinct applications, show how emergent structures are capable of filtering information through the bottleneck of their memory, to produce meaningful novelty and diversity within their simulated environment.
labels (true): cs.MA, cs.NE, Other
__index_level_0__: 124,790

2005.04153
A Hybrid Method for Training Convolutional Neural Networks
Artificial Intelligence algorithms have been steadily increasing in popularity and usage. Deep Learning allows neural networks to be trained using huge datasets and also removes the need for human-extracted features, as it automates the feature learning process. At the heart of training deep neural networks, such as Convolutional Neural Networks, we find backpropagation, which, by computing the gradient of the loss function with respect to the weights of the network for a given input, allows the weights of the network to be adjusted to better perform in the given task. In this paper, we propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks, where the evolutionary strategies are used to help avoid local minima and fine-tune the weights, so that the network achieves higher accuracy results. We show that the proposed hybrid method is capable of improving upon regular training in the task of image classification on CIFAR-10, where a VGG16 model was used and the final test results increased by 0.61%, on average, when compared to using only backpropagation.
labels (true): cs.LG, cs.CV, cs.NE
__index_level_0__: 176,371

2112.09093
Network Realization Functions for Optimal Distributed Control
In this paper, we discuss a distributed control architecture, aimed at networks with linear and time-invariant dynamics, which is amenable to convex formulations for controller design. The proposed approach is well suited for large scale systems, since the resulting feedback schemes completely avoid the exchange of internal states, i.e., plant or controller states, among sub-controllers. Additionally, we provide state-space formulas for these sub-controllers, able to be implemented in a distributed manner.
labels (true): cs.SY
__index_level_0__: 272,032

1808.00736
Dynamic Adaptation on Non-Stationary Visual Domains
Domain adaptation aims to learn models on a supervised source domain that perform well on an unsupervised target. Prior work has examined domain adaptation in the context of stationary domain shifts, i.e. static data sets. However, with large-scale or dynamic data sources, data from a defined domain is not usually available all at once. For instance, in a streaming data scenario, dataset statistics effectively become a function of time. We introduce a framework for adaptation over non-stationary distribution shifts applicable to large-scale and streaming data scenarios. The model is adapted sequentially over incoming unsupervised streaming data batches. This enables improvements over several batches without the need for any additionally annotated data. To demonstrate the effectiveness of our proposed framework, we modify associative domain adaptation to work well on source and target data batches with unequal class distributions. We apply our method to several adaptation benchmark datasets for classification and show improved classifier accuracy not only for the currently adapted batch, but also when applied on future stream batches. Furthermore, we show the applicability of our associative learning modifications to semantic segmentation, where we achieve competitive results.
labels (true): cs.CV
__index_level_0__: 104,448

0707.1534
An Architecture Framework for Complex Data Warehouses
Nowadays, many decision support applications need to exploit data that are not only numerical or symbolic, but also multimedia, multistructure, multisource, multimodal, and/or multiversion. We term such data complex data. Managing and analyzing complex data involves a lot of different issues regarding their structure, storage and processing, and metadata are a key element in all these processes. Such problems have been addressed by classical data warehousing (i.e., applied to "simple" data). However, data warehousing approaches need to be adapted for complex data. In this paper, we first propose a precise, though open, definition of complex data. Then we present a general architecture framework for warehousing complex data. This architecture heavily relies on metadata and domain-related knowledge, and rests on the XML language, which helps storing data, metadata and domain-specific knowledge altogether, and facilitates communication between the various warehousing processes.
labels (true): cs.DB
__index_level_0__: 415

2306.02348
Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations
Multimodal embeddings aim to enrich the semantic information in neural representations of language compared to text-only models. While different embeddings exhibit different applicability and performance on downstream tasks, little is known about the systematic representation differences attributed to the visual modality. Our paper compares word embeddings from three vision-and-language models (CLIP, OpenCLIP and Multilingual CLIP) and three text-only models, with static (FastText) as well as contextual representations (multilingual BERT; XLM-RoBERTa). This is the first large-scale study of the effect of visual grounding on language representations, including 46 semantic parameters. We identify meaning properties and relations that characterize words whose embeddings are most affected by the inclusion of visual modality in the training data; that is, points where visual grounding turns out most important. We find that the effect of visual modality correlates most with denotational semantic properties related to concreteness, but is also detected for several specific semantic classes, as well as for valence, a sentiment-related connotational property of linguistic expressions.
labels (true): cs.CL
__index_level_0__: 370,866

1610.06920
Bit-pragmatic Deep Neural Network Computing
We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it to improve performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers which generate internally multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product [1]. At runtime, many of these terms are zero as they are generated when the multiplicand is combined with the zero-bits of the multiplicator. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms using a) on-the-fly conversion of the multiplicator representation into an explicit list of powers of two, and b) hybrid bit-parallel multiplicand/bit-serial multiplicator processing units. PRA exploits two sources of ineffectual computations: 1) the aforementioned zero product terms which are the result of the lack of explicitness in the multiplicator representation, and 2) the excess in the representation precision used for both multiplicands and multiplicators, e.g., [2]. Measurements demonstrate that for the convolutional layers, a straightforward variant of PRA improves performance by 2.6x over the DaDianNao (DaDN) accelerator [3] and by 1.4x over STR [4]. Similarly, PRA improves energy efficiency by 28% and 10% on average compared to DaDN and STR. An improved cross-lane synchronization scheme boosts performance improvements to 3.1x over DaDN. Finally, Pragmatic benefits persist even with an 8-bit quantized representation [5].
labels (true): cs.AI, cs.LG, cs.CV, Other
__index_level_0__: 62,714

2201.07402
Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost
Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent). In this work, we highlight how fog- and IoT-based scenarios often require combining both approaches, and we present a framework for flexible parallel learning (FPL), achieving both data and model parallelism. Further, we investigate how different ways of distributing and parallelizing learning tasks across the participating nodes result in different computation, communication, and energy costs. Our experiments, carried out using state-of-the-art deep-network architectures and large-scale datasets, confirm that FPL allows for an excellent trade-off among computational (hence energy) cost, communication overhead, and learning performance.
labels (true): cs.LG, Other
__index_level_0__: 276,016

2005.00402
Workgroup Mapping: Visual Analysis of Collaboration Culture
The digital transformation of work presents new opportunities to understand how informal workgroups organize around the dynamic needs of organizations, potentially in contrast to the formal, static, and idealized hierarchies depicted by org charts. We present a design study that spans multiple enabling capabilities for the visual mapping and analysis of organizational workgroups, including metrics for quantifying two dimensions of collaboration culture: the fluidity of collaborative relationships (measured using network machine learning) and the freedom with which workgroups form across organizational boundaries. These capabilities come together to create a turnkey pipeline that combines the analysis of a target organization, the generation of data graphics and statistics, and their integration in a template-based presentation that enables narrative visualization of results. Our metrics and visuals have supported hundreds of presentations to executives of major US-based and multinational organizations, while our engineering practices have created an ensemble of standalone tools with broad relevance to visualization and visual analytics. We present our work as an example of applied visual analytics research, describing the design iterations that allowed us to move from experimentation to production, as well as the perspectives of the research team and the customer-facing team at each stage in this process.
labels (true): cs.HC, cs.SI
__index_level_0__: 175,225

1907.09585
Cooperative Pollution Source Localization and Cleanup with a Bio-inspired Swarm Robot Aggregation
Using robots for exploration of extreme and hazardous environments has the potential to significantly improve human safety. For example, robotic solutions can be deployed to find the source of a chemical leakage and clean the contaminated area. This paper demonstrates a proof-of-concept bio-inspired exploration method using a swarm robotic system, which is based on a combination of two bio-inspired behaviours: aggregation, and pheromone tracking. The main idea of the work presented is to follow pheromone trails to find the source of a chemical leakage and then carry out a decontamination task by aggregating at the critical zone. Using experiments conducted by a simulated model of a Mona robot, we evaluate the effects of population size and robot speed on the ability of the swarm in a decontamination task. The results indicate the feasibility of deploying robotic swarms in an exploration and cleaning task in an extreme environment.
labels (true): cs.RO
__index_level_0__: 139,402

2211.04023
A Dynamic Graph Interactive Framework with Label-Semantic Injection for Spoken Language Understanding
Multi-intent detection and slot filling joint models are gaining increasing traction since they are closer to complicated real-world scenarios. However, existing approaches (1) focus on identifying implicit correlations between utterances and one-hot encoded labels in both tasks while ignoring explicit label characteristics; (2) directly incorporate multi-intent information for each token, which could lead to incorrect slot prediction due to the introduction of irrelevant intent. In this paper, we propose a framework termed DGIF, which first leverages the semantic information of labels to give the model additional signals and enriched priors. Then, a multi-grain interactive graph is constructed to model correlations between intents and slots. Specifically, we propose a novel approach to construct the interactive graph based on the injection of label semantics, which can automatically update the graph to better alleviate error propagation. Experimental results show that our framework significantly outperforms existing approaches, obtaining a relative improvement of 13.7% over the previous best model on the MixATIS dataset in overall accuracy.
labels (true): cs.AI, cs.CL
__index_level_0__: 329,112

2212.13407
Hybrid Message Passing Algorithm for Downlink FDD Massive MIMO-OFDM Channel Estimation
The design of message passing (MP) algorithms on factor graphs is an effective way to implement channel estimation (CE) in wireless communication systems, whose performance can be further improved by exploiting prior probability models that accurately match the channel characteristics. In this work, we study the CE problem in a downlink massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) system. As the prior probability, we propose the Markov chain two-state Gaussian mixture with large variance differences (TSGM-LVD) model to exploit the structured sparsity in the angle-frequency domain of the channel. Existing single and combined MP rules cannot deal with the message computation of the proposed probability model. To overcome this issue, we present a general method to derive the hybrid message passing (HMP) rule, which allows the calculation of messages described by mixed linear and non-linear functions. Accordingly, we design the HMP-TSGM-LVD algorithm under the structured turbo framework (STF). Simulation results demonstrate that the proposed algorithm converges faster and obtains better and more stable performance than its counterparts. In particular, the gain of the proposed approach is maximum (3 dB) in the high signal-to-noise ratio regime, while benchmark approaches experience oscillating behavior due to the improper prior model characterization.
labels (true): cs.IT
__index_level_0__: 338,295

2106.02543
Accelerating Dynamical System Simulations with Contracting and Physics-Projected Neural-Newton Solvers
Recent advances in deep learning have allowed neural networks (NNs) to successfully replace traditional numerical solvers in many applications, thus enabling impressive computing gains. One such application is time domain simulation, which is indispensable for the design, analysis and operation of many engineering systems. Simulating dynamical systems with implicit Newton-based solvers is a computationally heavy task, as it requires the solution of a parameterized system of differential and algebraic equations at each time step. A variety of NN-based methodologies have been shown to successfully approximate the trajectories computed by numerical solvers at a fraction of the time. However, few previous works have used NNs to model the numerical solver itself. For the express purpose of accelerating time domain simulation speeds, this paper proposes and explores two complementary alternatives for modeling numerical solvers. First, we use a NN to mimic the linear transformation provided by the inverse Jacobian in a single Newton step. Using this procedure, we evaluate and project the exact, physics-based residual error onto the NN mapping, thus leaving physics ``in the loop''. The resulting tool, termed the Physics-pRojected Neural-Newton Solver (PRoNNS), is able to achieve an extremely high degree of numerical accuracy at speeds which were observed to be up to 31% faster than a Newton-based solver. In the second approach, we model the Newton solver at the heart of an implicit Runge-Kutta integrator as a contracting map iteratively seeking a fixed point on a time domain trajectory. The associated recurrent NN simulation tool, termed the Contracting Neural-Newton Solver (CoNNS), is embedded with training constraints (via CVXPY Layers) which guarantee the mapping provided by the NN satisfies the Banach fixed-point theorem.
labels (true): cs.LG, cs.SY
__index_level_0__: 238,913

2310.01720
Perceiver-based CDF Modeling for Time Series Forecasting
Transformers have demonstrated remarkable efficacy in forecasting time series data. However, their extensive dependence on self-attention mechanisms demands significant computational resources, thereby limiting their practical applicability across diverse tasks, especially in multimodal problems. In this work, we propose a new architecture, called perceiver-CDF, for modeling cumulative distribution functions (CDF) of time series data. Our approach combines the perceiver architecture with a copula-based attention mechanism tailored for multimodal time series prediction. By leveraging the perceiver, our model efficiently transforms high-dimensional and multimodal data into a compact latent space, thereby significantly reducing computational demands. Subsequently, we implement a copula-based attention mechanism to construct the joint distribution of missing data for prediction. Further, we propose an output variance testing mechanism to effectively mitigate error propagation during prediction. To enhance efficiency and reduce complexity, we introduce midpoint inference for the local attention mechanism. This enables the model to efficiently capture dependencies within nearby imputed samples without considering all previous samples. The experiments on the unimodal and multimodal benchmarks consistently demonstrate a 20% improvement over state-of-the-art methods while utilizing less than half of the computational resources.
labels (true): cs.AI, cs.LG
__index_level_0__: 396,539

1910.02600
Deep Evidential Regression
Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust, and efficient measures of uncertainty are crucial. In this paper, we propose a novel method for training non-Bayesian NNs to estimate a continuous target as well as its associated evidence in order to learn both aleatoric and epistemic uncertainty. We accomplish this by placing evidential priors over the original Gaussian likelihood function and training the NN to infer the hyperparameters of the evidential distribution. We additionally impose priors during training such that the model is regularized when its predicted evidence is not aligned with the correct output. Our method does not rely on sampling during inference or on out-of-distribution (OOD) examples for training, thus enabling efficient and scalable uncertainty learning. We demonstrate learning well-calibrated measures of uncertainty on various benchmarks, scaling to complex computer vision tasks, as well as robustness to adversarial and OOD test samples.
labels (true): cs.LG, cs.NE
__index_level_0__: 148,293

1009.2631
Google matrix of business process management
Development of efficient business process models and determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady-state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method, described here, provides a new efficient tool helping companies to make their decisions on how to evolve in the exceedingly dynamic global market.
labels (true): cs.IR, cs.CY
__index_level_0__: 7,543

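The abstract above builds on the standard Google matrix construction; the numpy sketch below shows that construction and the power iteration that yields PageRank. The damping factor 0.85 and the dense-matrix formulation are conventional illustrative choices rather than values taken from the paper, and CheiRank is simply the PageRank of the graph with all links inverted.

import numpy as np

def google_matrix(adj: np.ndarray, alpha: float = 0.85) -> np.ndarray:
    """Build G = alpha * S + (1 - alpha) / N from a 0/1 adjacency matrix
    (adj[i, j] = 1 means a link i -> j); S is column-stochastic and dangling
    nodes are replaced by uniform columns."""
    n = adj.shape[0]
    S = adj.T.astype(float)
    out_degree = S.sum(axis=0)
    S[:, out_degree > 0] /= out_degree[out_degree > 0]
    S[:, out_degree == 0] = 1.0 / n
    return alpha * S + (1.0 - alpha) / n

def pagerank(G: np.ndarray, iters: int = 100) -> np.ndarray:
    """Stationary vector of G via power iteration."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    for _ in range(iters):
        p = G @ p
    return p / p.sum()
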
2305.10358
NUANCE: Near Ultrasound Attack On Networked Communication Environments
This study investigates a primary inaudible attack vector on Amazon Alexa voice services using near ultrasound trojans and focuses on characterizing the attack surface and examining the practical implications of issuing inaudible voice commands. The research maps each attack vector to a tactic or technique from the MITRE ATT&CK matrix, covering enterprise, mobile, and Industrial Control System (ICS) frameworks. The experiment involved generating and surveying fifty near-ultrasonic audios to assess the attacks' effectiveness, with unprocessed commands having a 100% success rate and processed ones achieving a 58% overall success rate. This systematic approach stimulates previously unaddressed attack surfaces, ensuring comprehensive detection and attack design while pairing each ATT&CK Identifier with a tested defensive method, providing attack and defense tactics for prompt-response options. The main findings reveal that the attack method employs Single Upper Sideband Amplitude Modulation (SUSBAM) to generate near-ultrasonic audio from audible sources, transforming spoken commands into a frequency range beyond human-adult hearing. By eliminating the lower sideband, the design achieves a 6 kHz minimum from 16-22 kHz while remaining inaudible after transformation. The research investigates the one-to-many attack surface where a single device simultaneously triggers multiple actions or devices. Additionally, the study demonstrates the reversibility or demodulation of the inaudible signal, suggesting potential alerting methods and the possibility of embedding secret messages like audio steganography.
false
false
true
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
365,020
1107.0922
GraphLab: A Distributed Framework for Machine Learning in the Cloud
Machine Learning (ML) techniques are indispensable in a wide range of fields. Unfortunately, the exponential increase of dataset sizes is rapidly extending the runtime of sequential algorithms and threatening to slow future progress in ML. With the promise of affordable large-scale parallel computing, Cloud systems offer a viable platform to resolve the computational challenges in ML. However, designing and implementing efficient, provably correct distributed ML algorithms is often prohibitively challenging. To enable ML researchers to easily and efficiently use parallel systems, we introduced the GraphLab abstraction, which is designed to represent the computational patterns in ML algorithms while permitting efficient parallel and distributed implementations. In this paper we provide a formal description of the GraphLab parallel abstraction and present an efficient distributed implementation. We conduct a comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms using real large-scale data and a 64-node EC2 cluster of 512 processors. We find that GraphLab achieves orders-of-magnitude performance gains over Hadoop while performing comparably to or better than hand-tuned MPI implementations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
11,160
2306.01375
Robust and Generalisable Segmentation of Subtle Epilepsy-causing Lesions: a Graph Convolutional Approach
Focal cortical dysplasia (FCD) is a leading cause of drug-resistant focal epilepsy, which can be cured by surgery. These lesions are extremely subtle and often missed even by expert neuroradiologists. "Ground truth" manual lesion masks are therefore expensive, limited and have large inter-rater variability. Existing FCD detection methods are limited by high numbers of false positive predictions, primarily due to vertex- or patch-based approaches that lack whole-brain context. Here, we propose to approach the problem as semantic segmentation using graph convolutional networks (GCN), which allows our model to learn spatial relationships between brain regions. To address the specific challenges of FCD identification, our proposed model includes an auxiliary loss to predict distance from the lesion to reduce false positives and a weak supervision classification loss to facilitate learning from uncertain lesion masks. On a multi-centre dataset of 1015 participants with surface-based features and manual lesion masks from structural MRI data, the proposed GCN achieved an AUC of 0.74, a significant improvement against a previously used vertex-wise multi-layer perceptron (MLP) classifier (AUC 0.64). With sensitivity thresholded at 67%, the GCN had a specificity of 71% in comparison to 49% when using the MLP. This improvement in specificity is vital for clinical integration of lesion-detection tools into the radiological workflow, through increasing clinical confidence in the use of AI radiological adjuncts and reducing the number of areas requiring expert review.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
370,425
2501.12299
Sublinear Variational Optimization of Gaussian Mixture Models with Millions to Billions of Parameters
Gaussian Mixture Models (GMMs) range among the most frequently used machine learning models. However, training large, general GMMs becomes computationally prohibitive for datasets with many data points $N$ of high-dimensionality $D$. For GMMs with arbitrary covariances, we here derive a highly efficient variational approximation, which is integrated with mixtures of factor analyzers (MFAs). For GMMs with $C$ components, our proposed algorithm significantly reduces runtime complexity per iteration from $\mathcal{O}(NCD^2)$ to a complexity scaling linearly with $D$ and remaining constant w.r.t. $C$. Numerical validation of this theoretical complexity reduction then shows the following: the distance evaluations required for the entire GMM optimization process scale sublinearly with $NC$. On large-scale benchmarks, this sublinearity results in speed-ups of an order-of-magnitude compared to the state-of-the-art. As a proof of concept, we train GMMs with over 10 billion parameters on about 100 million images, and observe training times of approximately nine hours on a single state-of-the-art CPU.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
526,251
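For reference against the complexity figures quoted in the abstract above, a minimal baseline GMM fit with scikit-learn is sketched below; the data shape, component count, and seed are placeholders, and this sketch runs standard full-covariance EM at O(N*C*D^2) per iteration rather than the paper's sublinear variational/MFA scheme.

import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder data: N points in D dimensions (far smaller than the paper's scale).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 16))

# Standard EM for a C-component, full-covariance GMM.
# Each EM iteration costs O(N * C * D^2) for the responsibilities and
# covariance updates -- the term the variational approximation is designed to avoid.
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      max_iter=100, random_state=0)
gmm.fit(X)

print(gmm.weights_.round(3))   # mixing proportions
print(gmm.score(X))            # average log-likelihood per sample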
1903.08329
On Sampling Random Features From Empirical Leverage Scores: Implementation and Theoretical Guarantees
Random features provide a practical framework for large-scale kernel approximation and supervised learning. It has been shown that data-dependent sampling of random features using leverage scores can significantly reduce the number of features required to achieve optimal learning bounds. Leverage scores introduce an optimized distribution for features based on an infinite-dimensional integral operator (depending on input distribution), which is impractical to sample from. Focusing on empirical leverage scores in this paper, we establish an out-of-sample performance bound, revealing an interesting trade-off between the approximated kernel and the eigenvalue decay of another kernel in the domain of random features defined based on data distribution. Our experiments verify that the empirical algorithm consistently outperforms vanilla Monte Carlo sampling, and with a minor modification the method is even competitive to supervised data-dependent kernel learning, without using the output (label) information.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
124,809
2111.11213
Learn Quasi-stationary Distributions of Finite State Markov Chain
We propose a reinforcement learning (RL) approach to compute the expression of the quasi-stationary distribution. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL-divergence of two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging minimization problem by gradient descent, we apply the reinforcement learning technique by introducing the reward and value functions. We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and the value function. Numerical examples of finite-state Markov chains are tested to demonstrate the new method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
267,590
2311.15924
Diagnosis driven Anomaly Detection for CPS
In Cyber-Physical Systems (CPS) research, anomaly detection (detecting abnormal behavior) and diagnosis (identifying the underlying root cause) are often treated as distinct, isolated tasks. However, diagnosis algorithms require symptoms, i.e. temporally and spatially isolated anomalies, as input. Thus, anomaly detection and diagnosis must be developed together to provide a holistic solution for diagnosis in CPS. We therefore propose a method for utilizing deep learning-based anomaly detection to generate inputs for Consistency-Based Diagnosis (CBD). We evaluate our approach on a simulated and a real-world CPS dataset, where our model demonstrates strong performance relative to other state-of-the-art models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
410,685
2111.15123
Outage and Finite-SNR DMT Analysis for IRS-aided MIMO Systems: How Large IRSs Need to Be?
Intelligent reflecting surfaces (IRSs) are promising enablers for high-capacity wireless communication systems by constructing favorable channels between the transmitter and receiver. However, general, accurate, and tractable outage analysis for IRS-aided multiple-input-multiple-output (MIMO) systems is not available in the literature. In this paper, we first characterize the mutual information (MI) of IRS-aided MIMO systems by capitalizing on large random matrix theory (RMT). Based on this result, a closed-form approximation for the outage probability is derived and a gradient-based algorithm is proposed to minimize the outage probability with statistical channel state information (CSI). We also investigate the diversity-multiplexing tradeoff (DMT) with finite signal-to-noise ratio (SNR). Based on these theoretical results, we further study the impact of the IRS size on system performance. In the high SNR regime, we provide closed-form expressions for the ergodic mutual information (EMI) and outage probability as a function of the IRS size, which analytically reveal that the benefit of increasing the IRS size saturates quickly. Simulation results validate the accuracy of the theoretical analysis and confirm the increasing cost of deploying larger IRSs to improve system performance. For example, for an IRS-aided MIMO system with 20 antennas at both the transmitter and receiver, we need to double the size of the IRS to increase the throughput from 90% to 95% of its maximum value.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
268,818
1705.10899
Propositional Knowledge Representation and Reasoning in Restricted Boltzmann Machines
While knowledge representation and reasoning are considered the keys for human-level artificial intelligence, connectionist networks have been shown successful in a broad range of applications due to their capacity for robust learning and flexible inference under uncertainty. The idea of representing symbolic knowledge in connectionist networks has been well received and has attracted much attention from the research community, as this can establish a foundation for the integration of scalable learning and sound reasoning. In previous work, there exist a number of approaches that map logical inference rules to feed-forward propagation in artificial neural networks (ANNs). However, the discriminative structure of an ANN requires the separation of input/output variables, which makes it difficult for general reasoning where any variables should be inferable. Other approaches address this issue by employing generative models such as symmetric connectionist networks; however, these are difficult and convoluted. In this paper we propose a novel method to represent propositional formulas in restricted Boltzmann machines which is less complex, especially in the cases of logical implications and Horn clauses. An integration system is then developed and evaluated on real datasets, showing promising results.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
74,491
2312.08749
Mitigating Label Bias in Machine Learning: Fairness through Confident Learning
Discrimination can occur when the underlying unbiased labels are overwritten by an agent with potential bias, resulting in biased datasets that unfairly harm specific groups and cause classifiers to inherit these biases. In this paper, we demonstrate that despite only having access to the biased labels, it is possible to eliminate bias by filtering the fairest instances within the framework of confident learning. In the context of confident learning, low self-confidence usually indicates potential label errors; however, this is not always the case. Instances, particularly those from underrepresented groups, might exhibit low confidence scores for reasons other than labeling errors. To address this limitation, our approach employs truncation of the confidence score and extends the confidence interval of the probabilistic threshold. Additionally, we incorporate the co-teaching paradigm to provide a more robust and reliable selection of fair instances and to effectively mitigate the adverse effects of biased labels. Through extensive experimentation and evaluation on various datasets, we demonstrate the efficacy of our approach in promoting fairness and reducing the impact of label bias in machine learning models.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
415,435
1206.6735
Elimination of Spurious Ambiguity in Transition-Based Dependency Parsing
We present a novel technique to remove spurious ambiguity from transition systems for dependency parsing. Our technique chooses a canonical sequence of transition operations (computation) for a given dependency tree. Our technique can be applied to a large class of bottom-up transition systems, including for instance Nivre (2004) and Attardi (2006).
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
17,038
2107.06638
Procedural Content Generation using Behavior Trees (PCGBT)
Behavior trees (BTs) are a popular method for modeling NPC and enemy AI behavior and have been widely used in commercial games. In this work, rather than use BTs to model game playing agents, we use them for modeling game design agents, defining behaviors as content generation tasks rather than in-game actions. Similar to how traditional BTs enable modeling behaviors in a modular and dynamic manner, BTs for PCG enable simple subtrees for generating parts of levels to be combined modularly to form complex trees for generating whole levels as well as generators that can dynamically vary the generated content. We refer to this approach as Procedural Content Generation using Behavior Trees, or PCGBT, and demonstrate it by using BTs to model generators for Super Mario Bros., Mega Man and Metroid levels as well as dungeon layouts and discuss several ways in which this paradigm could be applied and extended in the future.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
246,150
2211.04903
Novel Chapter Abstractive Summarization using Spinal Tree Aware Sub-Sentential Content Selection
Summarizing novel chapters is a difficult task due to the input length and the fact that sentences that appear in the desired summaries draw content from multiple places throughout the chapter. We present a pipelined extractive-abstractive approach where the extractive step filters the content that is passed to the abstractive component. Extremely lengthy input also results in a dataset highly skewed towards negative instances for extractive summarization; we thus adopt a margin ranking loss for extraction to encourage separation between positive and negative examples. Our extraction component operates at the constituent level; our approach to this problem enriches the text with spinal tree information which provides syntactic context (in the form of constituents) to the extraction model. We show an improvement of 3.71 Rouge-1 points over the best results reported in prior work on an existing novel chapter dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
329,382
2107.11503
Efficient Inverse Design of 2D Elastic Metamaterial Systems Using Invertible Neural Networks
Locally resonant elastic metamaterials (LREM) can be designed, by optimizing the geometry of the constituent self-repeating unit cells, to potentially damp out vibration in selected frequency ranges, thus yielding desired bandgaps. However, it remains challenging to quickly arrive at unit cell designs that satisfy any requested bandgap specifications within a given global frequency range. This paper develops a computationally efficient framework for (fast) inverse design of LREM, by integrating a new type of machine learning model called invertible neural networks, or INNs. An INN can be trained to predict the bandgap bounds as a function of the unit cell design, and interestingly, at the same time it learns to predict the unit cell design given a bandgap when executed in reverse. In our case the unit cells are represented in terms of the widths of the outer matrix and the middle soft filler layer of the unit cell. Training data on the frequency response of the unit cell is provided by Bloch dispersion analyses. The trained INN is used to instantaneously retrieve feasible (or near-feasible) inverse designs given a specified bandgap constraint, which is then used to initialize a forward constrained optimization (based on sequential quadratic programming) to find the bandgap-satisfying unit cell with minimum mass. Case studies show favorable performance of this approach, in terms of the bandgap characteristics and minimized mass, when compared to the median scenario over ten randomly initialized optimizations for the same specified bandgaps. Further analysis using FEA verifies the bandgap performance of a finite structure comprised of an $8\times 8$ arrangement of the unit cells obtained with INN-accelerated inverse design.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
247,605
1510.07104
Supporting Window Analytics over Large-scale Dynamic Graphs
In relational DBMSs, window functions have been widely used to facilitate data analytics. Surprisingly, while similar concepts have been employed for graph analytics, there has been no explicit notion of graph window analytic functions. In this paper, we formally introduce window queries for graph analytics. In such queries, for each vertex, the analysis is performed on a window of vertices defined based on the graph structure. In particular, we identify two instantiations, namely the k-hop window and the topological window. We develop two novel indices, the Dense Block index (DBIndex) and the Inheritance index (I-Index), to facilitate efficient processing of these two types of windows respectively. Extensive experiments are conducted over both real and synthetic datasets with hundreds of millions of vertices and edges. Experimental results indicate that our proposed index-based query processing solutions achieve four orders of magnitude of query performance gain over the non-index algorithm and are superior to EAGR w.r.t. scalability and efficiency.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
48,167
2005.06624
Comparative Analysis of Text Classification Approaches in Electronic Health Records
Text classification tasks which aim at harvesting and/or organizing information from electronic health records are pivotal to support clinical and translational research. However, these present specific challenges compared to other classification tasks, notably due to the particular nature of the medical lexicon and language used in clinical records. Recent advances in embedding methods have shown promising results for several clinical tasks, yet there is no exhaustive comparison of such approaches with other commonly used word representations and classification models. In this work, we analyse the impact of various word representations, text pre-processing and classification algorithms on the performance of four different text classification tasks. The results show that traditional approaches, when tailored to the specific language and structure of the text inherent to the classification task, can achieve or exceed the performance of more recent ones based on contextual embeddings such as BERT.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
177,059
2410.14815
Adapting Multilingual LLMs to Low-Resource Languages using Continued Pre-training and Synthetic Corpus
Multilingual LLMs support a variety of languages; however, their performance is suboptimal for low-resource languages. In this work, we emphasize the importance of continued pre-training of multilingual LLMs and the use of translation-based synthetic pre-training corpora for improving LLMs in low-resource languages. We conduct our study in the context of the low-resource Indic language Hindi. We introduce Nemotron-Mini-Hindi 4B, a bilingual SLM supporting both Hindi and English, based on Nemotron-Mini 4B. The model is trained using a mix of real and synthetic Hindi + English tokens, with continuous pre-training performed on 400B tokens. We demonstrate that both the base and instruct models achieve state-of-the-art results on Hindi benchmarks while remaining competitive on English tasks. Additionally, we observe that the continued pre-training approach enhances the model's overall factual accuracy.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
500,223
2112.08106
Enhance Connectivity of Promising Regions for Sampling-based Path Planning
Sampling-based path planning algorithms usually implement uniform sampling methods to search the state space. However, uniform sampling may lead to unnecessary exploration in many scenarios, such as the environment with a few dead ends. Our previous work proposes to use the promising region to guide the sampling process to address the issue. However, the predicted promising regions are often disconnected, which means they cannot connect the start and goal state, resulting in a lack of probabilistic completeness. This work focuses on enhancing the connectivity of predicted promising regions. Our proposed method regresses the connectivity probability of the edges in the x and y directions. In addition, it calculates the weight of the promising edges in loss to guide the neural network to pay more attention to the connectivity of the promising regions. We conduct a series of simulation experiments, and the results show that the connectivity of promising regions improves significantly. Furthermore, we analyze the effect of connectivity on sampling-based path planning algorithms and conclude that connectivity plays an essential role in maintaining algorithm performance.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
271,698
2004.00428
Stability and Instability Divergence Conditions for Dynamical Systems
A novel method for studying the stability and instability of autonomous dynamical systems using the flow and divergence of the vector field is proposed. A relation between the method of Lyapunov functions and the proposed method is established. The Bendixson and Bendixson-Dulac theorems are extended to $n$-dimensional systems. Based on the proposed method, a state feedback control law is designed. The control signal is obtained from a partial differential inequality. Examples illustrate the application of the proposed method and of existing ones.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
170,632
2303.10353
Sharpness-Aware Gradient Matching for Domain Generalization
The goal of domain generalization (DG) is to enhance the generalization capability of a model learned from a source domain to other unseen domains. The recently developed Sharpness-Aware Minimization (SAM) method aims to achieve this goal by minimizing the sharpness measure of the loss landscape. Though SAM and its variants have demonstrated impressive DG performance, they may not always converge to the desired flat region with a small loss value. In this paper, we present two conditions to ensure that the model converges to a flat minimum with a small loss, and present an algorithm, named Sharpness-Aware Gradient Matching (SAGM), to meet the two conditions for improving model generalization capability. Specifically, the optimization objective of SAGM simultaneously minimizes the empirical risk, the perturbed loss (i.e., the maximum loss within a neighborhood in the parameter space), and the gap between them. By implicitly aligning the gradient directions between the empirical risk and the perturbed loss, SAGM improves the generalization capability over SAM and its variants without increasing the computational cost. Extensive experimental results show that our proposed SAGM method consistently outperforms the state-of-the-art methods on five DG benchmarks, including PACS, VLCS, OfficeHome, TerraIncognita, and DomainNet. Codes are available at https://github.com/Wang-pengfei/SAGM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
352,410
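The sharpness-aware perturbation that SAGM builds on can be sketched as a two-pass update in PyTorch, shown below; this implements plain SAM (ascend by rho along the normalized gradient, then descend with the gradient of the perturbed loss) and omits SAGM's gradient-matching term. The value of rho, the toy linear model, and the random batch are illustrative assumptions.

import torch
import torch.nn as nn

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM update: ascend to w + rho * g/||g||, then descend using that gradient."""
    # First pass: gradient at the current weights.
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Perturb the weights toward the worst case within the rho-ball.
    with torch.no_grad():
        eps = [rho * g / grad_norm for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)

    # Second pass: gradient of the perturbed loss.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # Restore the weights, then step with the perturbed gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()

# Minimal usage on random data (hypothetical shapes).
model = nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 3, (32,))
sam_step(model, nn.CrossEntropyLoss(), x, y, opt)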
2112.03551
Optimal Scheduling of Energy Storage for Power System with Capability of Sensing Short-term Future PV Power Production
The constant rise in energy consumption that comes with population growth and the introduction of new technologies has posed critical issues such as efficient energy management on the consumer side. This has elevated the importance of renewable energy sources, particularly photovoltaic (PV) systems and wind turbines. This work models and discusses design options based on a hybrid power system of grid and battery storage. The effects of installed capacity on renewable penetration (RP) and cost of electricity (COE) are investigated for each modality. For successful operation of the hybrid power system and electricity trading in the power market, accurate predictions of PV power production and load demand are taken into account. A machine learning (ML) model is introduced for scheduling and for predicting variations of the PV power production and load demand. The fitness of the ML model, when employing a linear regression model, shows a mean squared error (MSE) of 0.000012, a root mean square error (RMSE) of 0.003560 and an R2 of 0.999379. Using predicted PV power production and load demand, the reduction in electricity cost is 37.5% when PV and the utility grid are utilized, and 43.06% when PV, the utility grid, and the storage system are utilized.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
270,248
1505.04887
A Near-optimal User Ordering Algorithm for Non-iterative Interference Alignment Transceiver Design in MIMO Interfering Broadcast Channels
Interference alignment (IA) has recently emerged as a promising interference mitigation technique for interference networks. In this letter, we focus on the IA non-iterative transceiver design problem in a multiple-input-multiple-output interfering broadcast channel (MIMO-IBC), and observe that there is previously unexploited flexibility in different permutations of user ordering. By choosing a good user ordering for a pre-determined IA inter-channel-interference allocation, an improved transceiver design can be accomplished. In order to achieve a more practical performance-complexity tradeoff, a suboptimal user ordering algorithm is proposed. Simulations show that the proposed suboptimal user ordering algorithm can achieve near-optimal performance compared to the optimal ordering while exhibiting only moderate computational complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
43,237
cs/0602088
Towards Low-Complexity Linear-Programming Decoding
We consider linear-programming (LP) decoding of low-density parity-check (LDPC) codes. While it is clear that one can use any general-purpose LP solver to solve the LP that appears in the decoding problem, we argue in this paper that the LP at hand is equipped with a lot of structure that one should take advantage of. Towards this goal, we study the dual LP and show how coordinate-ascent methods lead to very simple update rules that are tightly connected to the min-sum algorithm. Moreover, replacing the minima in the formula of the dual LP with soft-minima, one obtains update rules that are tightly connected to the sum-product algorithm. This shows that LP solvers with complexity similar to the min-sum algorithm and the sum-product algorithm are feasible. Finally, we also discuss some subgradient-based methods.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,296
2309.01377
Memory augment is All You Need for image restoration
Image restoration is a low-level vision task; most CNN methods are designed as black boxes, lacking transparency and internal aesthetics. Although some methods combining traditional optimization algorithms with DNNs have been proposed, they all have some limitations. In this paper, we propose a three-granularity memory layer and contrastive learning named MemoryNet. Specifically, the samples are divided into positive, negative, and actual samples for contrastive learning, where the memory layer is able to preserve the deep features of the image and the contrastive learning converges the learned features to a balance. Experiments on Derain/Deshadow/Deblur tasks demonstrate that these methods are effective in improving restoration performance. In addition, this paper's model obtains significant PSNR and SSIM gains on three datasets with different degradation types, which is strong proof that the recovered images are perceptually realistic. The source code of MemoryNet can be obtained from https://github.com/zhangbaijin/MemoryNet
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
389,665
2403.12892
Channel Estimation in a Multi-Robot System Using SDR
This study focuses on developing an experimental system for estimating communication channels in a multi-robot mobile system using software-defined radio (SDR) devices. The system consists of two mobile robots programmed for two scenarios: one where the robot remains stationary and another where it follows a predefined trajectory. Communication within the system is conducted through orthogonal frequency-division multiplexing (OFDM) to mitigate the effects of multipath propagation in indoor environments. The system's performance is evaluated using the bit error rate (BER). Connections related to robot motion and communication are implemented using Raspberry Pi 3 and BladeRF x115, respectively. The least squares (LS) technique is employed to estimate the channel with a bit error rate of approximately 10^(-2).
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
439,378
2412.01306
Multimodal Medical Disease Classification with LLaMA II
Medical patient data is always multimodal. Images, text, age, gender, and histopathological data are only a few examples of different modalities in this context. Processing and integrating this multimodal data with deep learning based methods is of utmost interest due to its huge potential for medical procedures such as diagnosis and patient treatment planning. In this work we retrain a multimodal transformer-based model for disease classification. To this end we use the text-image pair dataset from OpenI consisting of 2D chest X-rays associated with clinical reports. Our focus is on fusion methods for merging text and vision information extracted from medical datasets. Different architecture structures with a LLaMA II backbone model are tested. Early fusion of modality-specific features yields better results (best model: 97.10% mean AUC) than late fusion from a deeper level of the architecture (best model: 96.67% mean AUC). Both outperform former classification models tested on the same multimodal dataset. The newly introduced multimodal architecture can be applied to other multimodal datasets with little effort and can be easily adapted for further research, especially, but not limited to, the field of medical AI.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
513,062
2306.11044
Frequency effects in Linear Discriminative Learning
Word frequency is a strong predictor in most lexical processing tasks. Thus, any model of word recognition needs to account for how word frequency effects arise. The Discriminative Lexicon Model (DLM; Baayen et al., 2018a, 2019) models lexical processing with linear mappings between words' forms and their meanings. So far, the mappings can either be obtained incrementally via error-driven learning, a computationally expensive process able to capture frequency effects, or in an efficient, but frequency-agnostic solution modelling the theoretical endstate of learning (EL) where all words are learned optimally. In this study we show how an efficient, yet frequency-informed mapping between form and meaning can be obtained (Frequency-informed learning; FIL). We find that FIL well approximates an incremental solution while being computationally much cheaper. FIL shows a relatively low type- and high token-accuracy, demonstrating that the model is able to process most word tokens encountered by speakers in daily life correctly. We use FIL to model reaction times in the Dutch Lexicon Project (Keuleers et al., 2010) and find that FIL predicts well the S-shaped relationship between frequency and the mean of reaction times but underestimates the variance of reaction times for low frequency words. FIL is also better able to account for priming effects in an auditory lexical decision task in Mandarin Chinese (Lee, 2007), compared to EL. Finally, we used ordered data from CHILDES (Brown, 1973; Demuth et al., 2006) to compare mappings obtained with FIL and incremental learning. The mappings are highly correlated, but with FIL some nuances based on word ordering effects are lost. Our results show how frequency effects in a learning model can be simulated efficiently, and raise questions about how to best account for low-frequency words in cognitive models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
374,458
2304.10321
DropDim: A Regularization Method for Transformer Networks
We introduce DropDim, a structured dropout method designed for regularizing the self-attention mechanism, which is a key component of the transformer. In contrast to the general dropout method, which randomly drops neurons, DropDim drops part of the embedding dimensions. In this way, the semantic information can be completely discarded. Thus, the excessive co-adaptation between different embedding dimensions can be broken, and the self-attention is forced to encode meaningful features with a certain number of embedding dimensions erased. Experiments on a wide range of tasks executed on the MUST-C English-German dataset show that DropDim can effectively improve model performance, reduce over-fitting, and show complementary effects with other regularization methods. When combined with label smoothing, the WER can be reduced from 19.1% to 15.1% on the ASR task, and the BLEU value can be increased from 26.90 to 28.38 on the MT task. On the ST task, the model can reach a BLEU score of 22.99, an increase of 1.86 BLEU points compared to the strong baseline.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
359,373
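A generic re-implementation of the dimension-wise dropout idea from the DropDim abstract above is sketched below in PyTorch; the module name, drop probability, and inverted-dropout rescaling are assumptions for illustration and do not reproduce the authors' released code.

import torch
import torch.nn as nn

class DropDim(nn.Module):
    """Randomly zero out whole embedding dimensions (shared across batch and time)."""
    def __init__(self, p=0.1):
        super().__init__()
        self.p = p

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        if not self.training or self.p == 0.0:
            return x
        d_model = x.size(-1)
        # One keep/drop decision per embedding dimension.
        keep = (torch.rand(d_model, device=x.device) >= self.p).to(x.dtype)
        # Inverted-dropout rescaling so the expected activation is unchanged.
        return x * keep / (1.0 - self.p)

# Usage on a dummy batch of transformer activations.
layer = DropDim(p=0.2)
layer.train()
out = layer(torch.randn(4, 7, 512))
print(out.shape)  # torch.Size([4, 7, 512])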
2411.11530
SeqProFT: Applying LoRA Finetuning for Sequence-only Protein Property Predictions
Protein language models (PLMs) are capable of learning the relationships between protein sequences and functions by treating amino acid sequences as textual data in a self-supervised manner. However, fine-tuning these models typically demands substantial computational resources and time, with results that may not always be optimized for specific tasks. To overcome these challenges, this study employs the LoRA method to perform end-to-end fine-tuning of the ESM-2 model specifically for protein property prediction tasks, utilizing only sequence information. Additionally, a multi-head attention mechanism is integrated into the downstream network to combine sequence features with contact map information, thereby enhancing the model's comprehension of protein sequences. Experimental results of extensive classification and regression tasks demonstrate that the fine-tuned model achieves strong performance and faster convergence across multiple regression and classification tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
509,090
2402.14270
Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization
In the rapidly advancing arena of large language models (LLMs), a key challenge is to enhance their capabilities amid a looming shortage of high-quality training data. Our study starts from an empirical strategy for the light continual training of LLMs using their original pre-training data sets, with a specific focus on selective retention of samples that incur moderately high losses. These samples are deemed informative and beneficial for model refinement, contrasting with the highest-loss samples, which would be discarded due to their correlation with data noise and complexity. We then formalize this strategy into a principled framework of Instance-Reweighted Distributionally Robust Optimization (IR-DRO). IR-DRO is designed to dynamically prioritize the training focus on informative samples through an instance reweighting mechanism, streamlined by a closed-form solution for straightforward integration into established training protocols. Through rigorous experimentation with various models and datasets, our findings indicate that our sample-targeted methods significantly improve LLM performance across multiple benchmarks, in both continual pre-training and instruction tuning scenarios. Our codes are available at https://github.com/VITA-Group/HardFocusTraining.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
431,600
2206.07077
Comparison of Different Configurations of Saturated Core Fault Current Limiters in a Power Grid by Numerical Method
Short-circuit fault currents are increasing due to growing demand for electricity and the high complexity of power systems. Because fault currents can reach values that the breakers are unable to interrupt, the security of the electrical grid is in jeopardy. By inserting a limiting impedance into a transmission line in series, these devices restrict the rising fault currents to acceptable levels. Saturated core fault current limiters (SCFCLs) are a pivotal tool for limiting fault current rise in power networks and have good performance characteristics. In normal conditions, these limiters have slight effects on the system and can effectively limit short-circuit currents when they occur. In this chapter, various structures of SCFCLs with different arrangements of ac and dc windings are presented, and the currents passing through the FCLs under normal and faulty system conditions are assessed and compared. The flux density in various regions of the core in the different arrangements has been investigated as well, and the desired analyses have been performed. Simulations are presented based on COMSOL Multiphysics 5.4, a finite element software package which can provide a valuable assessment to compare these protective devices with different configurations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
302,587
2202.01802
Different Affordances on Facebook and SMS Text Messaging Do Not Impede Generalization of Language-Based Predictive Models
Adaptive mobile device-based health interventions often use machine learning models trained on non-mobile device data, such as social media text, due to the difficulty and high expense of collecting large text message (SMS) data. Therefore, understanding the differences and generalization of models between these platforms is crucial for proper deployment. We examined the psycho-linguistic differences between Facebook and text messages, and their impact on out-of-domain model performance, using a sample of 120 users who shared both. We found that users use Facebook for sharing experiences (e.g., leisure) and SMS for task-oriented and conversational purposes (e.g., plan confirmations), reflecting the differences in the affordances. To examine the downstream effects of these differences, we used pre-trained Facebook-based language models to estimate age, gender, depression, life satisfaction, and stress on both Facebook and SMS. We found no significant differences in correlations between the estimates and self-reports across 6 of 8 models. These results suggest using pre-trained Facebook language models to achieve better accuracy with just-in-time interventions.
true
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
278,588
2303.08577
Investigating GANsformer: A Replication Study of a State-of-the-Art Image Generation Model
The field of image generation through generative modelling is abundantly discussed nowadays. It can be used for various applications, such as up-scaling existing images, creating non-existing objects, such as interior design scenes, products or even human faces, and achieving transfer-learning processes. In this context, Generative Adversarial Networks (GANs) are a class of widely studied machine learning frameworks first appearing in the paper "Generative adversarial nets" by Goodfellow et al. that achieve the goal above. In our work, we reproduce and evaluate a novel variation of the original GAN network, the GANformer, proposed in "Generative Adversarial Transformers" by Hudson and Zitnick. This project aimed to recreate the methods presented in this paper to reproduce the original results and comment on the authors' claims. Due to resources and time limitations, we had to constrain the network's training times, dataset types, and sizes. Our research successfully recreated both variations of the proposed GANformer model and found differences between the authors' and our results. Moreover, discrepancies between the publication methodology and the one implemented, made available in the code, allowed us to study two undisclosed variations of the presented procedures.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
351,696
2409.13774
Trustworthy Intrusion Detection: Confidence Estimation Using Latent Space
This work introduces a novel method for enhancing confidence in anomaly detection in Intrusion Detection Systems (IDS) through the use of a Variational Autoencoder (VAE) architecture. By developing a confidence metric derived from latent space representations, we aim to improve the reliability of IDS predictions against cyberattacks. Applied to the NSL-KDD dataset, our approach focuses on binary classification tasks to effectively distinguish between normal and malicious network activities. The methodology demonstrates a significant enhancement in anomaly detection, evidenced by a notable correlation of 0.45 between the reconstruction error and the proposed metric. Our findings highlight the potential of employing VAEs for more accurate and trustworthy anomaly detection in network security.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
490,167
2310.07915
Tag Your Fish in the Broken Net: A Responsible Web Framework for Protecting Online Privacy and Copyright
The World Wide Web, a ubiquitous source of information, serves as a primary resource for countless individuals, amassing a vast amount of data from global internet users. However, this online data, when scraped, indexed, and utilized for activities like web crawling, search engine indexing, and, notably, AI model training, often diverges from the original intent of its contributors. The ascent of Generative AI has accentuated concerns surrounding data privacy and copyright infringement. Regrettably, the web's current framework falls short in facilitating pivotal actions like consent withdrawal or data copyright claims. While some companies offer voluntary measures, such as crawler access restrictions, these often remain inaccessible to individual users. To empower online users to exercise their rights and enable companies to adhere to regulations, this paper introduces a user-controlled consent tagging framework for online data. It leverages the extensibility of HTTP and HTML in conjunction with the decentralized nature of distributed ledger technology. With this framework, users have the ability to tag their online data at the time of transmission, and subsequently, they can track and request the withdrawal of consent for their data from the data holders. A proof-of-concept system is implemented, demonstrating the feasibility of the framework. This work holds significant potential for contributing to the reinforcement of user consent, privacy, and copyright on the modern internet and lays the groundwork for future insights into creating a more responsible and user-centric web ecosystem.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
true
399,178
1607.07939
A Sensorimotor Reinforcement Learning Framework for Physical Human-Robot Interaction
Modeling of physical human-robot collaborations is generally a challenging problem due to the unpredictable nature of human behavior. To address this issue, we present a data-efficient reinforcement learning framework which enables a robot to learn how to collaborate with a human partner. The robot learns the task from its own sensorimotor experiences in an unsupervised manner. The uncertainty of the human actions is modeled using Gaussian processes (GP) to implement action-value functions. Optimal action selection given the uncertain GP model is ensured by Bayesian optimization. We apply the framework to a scenario in which a human and a PR2 robot jointly control the ball position on a plank based on vision and force/torque data. Our experimental results show the suitability of the proposed method in terms of fast and data-efficient model learning, optimal action selection under uncertainties and equal role sharing between the partners.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
59,090
2412.16199
Stabilizing Machine Learning for Reproducible and Explainable Results: A Novel Validation Approach to Subject-Specific Insights
Machine Learning is transforming medical research by improving diagnostic accuracy and personalizing treatments. General ML models trained on large datasets identify broad patterns across populations, but their effectiveness is often limited by the diversity of human biology. This has led to interest in subject-specific models that use individual data for more precise predictions. However, these models are costly and challenging to develop. To address this, we propose a novel validation approach that uses a general ML model to ensure reproducible performance and robust feature importance analysis at both group and subject-specific levels. We tested a single Random Forest (RF) model on nine datasets varying in domain, sample size, and demographics. Different validation techniques were applied to evaluate accuracy and feature importance consistency. To introduce variability, we performed up to 400 trials per subject, randomly seeding the ML algorithm for each trial. This generated 400 feature sets per subject, from which we identified top subject-specific features. A group-specific feature importance set was then derived from all subject-specific results. We compared our approach to conventional validation methods in terms of performance and feature importance consistency. Our repeated trials approach, with random seed variation, consistently identified key features at the subject level and improved group-level feature importance analysis using a single general model. Subject-specific models address biological variability but are resource-intensive. Our novel validation technique provides consistent feature importance and improved accuracy within a general ML model, offering a practical and explainable alternative for clinical research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
519,405
1909.07745
Adversarial Feature Training for Generalizable Robotic Visuomotor Control
Deep reinforcement learning (RL) has enabled training action-selection policies, end-to-end, by learning a function which maps image pixels to action outputs. However, its application to visuomotor robotic policy training has been limited because of the challenge of large-scale data collection when working with physical hardware. A suitable visuomotor policy should perform well not just for the task setup it has been trained for, but also for all varieties of the task, including novel objects at different viewpoints surrounded by task-irrelevant objects. However, it is impractical for a robotic setup to collect sufficiently many interactive samples in an RL framework to generalize well to novel aspects of a task. In this work, we demonstrate that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks, and then transfer the acquired policy to other novel task domains. We propose to leverage the deep RL capabilities to learn complex visuomotor skills for uncomplicated task setups, and then exploit transfer learning to generalize to new task domains provided only still images of the task in the target domain. We evaluate our method on two real robotic tasks, picking and pouring, and compare it to a number of prior works, demonstrating its superiority.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
145,765
2210.05033
Multilingual Representation Distillation with Contrastive Learning
Multilingual sentence representations from large models encode semantic information from two or more languages and can be used for different cross-lingual information retrieval and matching tasks. In this paper, we integrate contrastive learning into multilingual representation distillation and use it for quality estimation of parallel sentences (i.e., find semantically similar sentences that can be used as translations of each other). We validate our approach with multilingual similarity search and corpus filtering tasks. Experiments across different low-resource languages show that our method greatly outperforms previous sentence encoders such as LASER, LASER3, and LaBSE.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
322,670
2309.04668
Influence Maximization in Social Networks: A Survey
Online social networks have become an important platform for people to communicate, share knowledge and disseminate information. Given the widespread usage of social media, individuals' ideas, preferences and behavior are often influenced by their peers or friends in the social networks that they participate in. Over the last decade, the influence maximization (IM) problem has been extensively adopted to model the diffusion of innovations and ideas. The purpose of IM is to select a set of k seed nodes who can influence the most individuals in the network. In this survey, we present a systematic study of the research and future directions with respect to the IM problem. We review the information diffusion models and analyze a variety of classic IM algorithms. We propose a taxonomy for potential readers to understand the key techniques and challenges. We also organize the milestone works in time order so that readers of this survey can experience the research roadmap in this field. Moreover, we also categorize other application-oriented IM studies and correspondingly study each of them. Finally, we list a series of open questions as future directions for IM-related research, where a potential reader of this survey can easily observe what should be done next in this field.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
390,802
2212.08619
Planting and Mitigating Memorized Content in Predictive-Text Language Models
Language models are widely deployed to provide automatic text completion services in user products. However, recent research has revealed that language models (especially large ones) bear considerable risk of memorizing private training data, which is then vulnerable to leakage and extraction by adversaries. In this study, we test the efficacy of a range of privacy-preserving techniques to mitigate unintended memorization of sensitive user text, while varying other factors such as model size and adversarial conditions. We test both "heuristic" mitigations (those without formal privacy guarantees) and Differentially Private training, which provides provable levels of privacy at the cost of some model performance. Our experiments show that (with the exception of L2 regularization), heuristic mitigations are largely ineffective in preventing memorization in our test suite, possibly because they make too strong of assumptions about the characteristics that define "sensitive" or "private" text. In contrast, Differential Privacy reliably prevents memorization in our experiments, despite its computational and model-performance costs.
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
336,811
2110.09962
PR-CIM: a Variation-Aware Binary-Neural-Network Framework for Process-Resilient Computation-in-memory
Binary neural networks (BNNs) that use 1-bit weights and activations have garnered interest as extreme quantization provides low power dissipation. By implementing BNNs as computing-in-memory (CIM), which computes multiplications and accumulations on memory arrays in an analog fashion, namely analog CIM, we can further improve the energy efficiency of processing neural networks. However, analog CIMs suffer from the potential problem that process variation degrades the accuracy of BNNs. Our Monte-Carlo simulations show that in an SRAM-based analog CIM of VGG-9, the classification accuracy on CIFAR-10 is degraded to below 20% under process variations of 65nm CMOS. To overcome this problem, we present a variation-aware BNN framework. The proposed framework is developed for SRAM-based BNN CIMs since SRAM is most widely used as on-chip memory; however, it is easily extensible to BNN CIMs based on other memories. Our extensive experimental results show that under process variation of 65nm CMOS, our framework significantly improves the CIFAR-10 accuracies of SRAM-based BNN CIMs, from 10% and 10.1% to 87.76% and 77.74% for VGG-9 and RESNET-18 respectively.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
261,989
1111.5720
A GP-MOEA/D Approach for Modelling Total Electron Content over Cyprus
Vertical Total Electron Content (vTEC) is an ionospheric characteristic used to derive the signal delay imposed by the ionosphere on near-vertical trans-ionospheric links. The major aim of this paper is to design a prediction model based on the main factors that influence the variability of this parameter on diurnal, seasonal and long-term time scales. The model should be accurate and general (comprehensive) enough to efficiently approximate the high variations of vTEC. However, good approximation and generalization are conflicting objectives. For this reason a Genetic Programming approach with Multi-objective Evolutionary Algorithm based on Decomposition characteristics (GP-MOEA/D) is designed and proposed for modeling vTEC over Cyprus. Experimental results show that the multi-objective GP model, considering real vTEC measurements obtained over a period of 11 years, has produced a good approximation of the modeled parameter and can be implemented as a local model to account for the ionospheric imposed error in positioning. Particularly, the GP-MOEA/D approach performs better than a single-objective optimization GP, a GP with Non-dominated Sorting Genetic Algorithm-II (NSGA-II) characteristics and the previously proposed neural-network-based approach in most cases.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
13,162
2310.07929
Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models
Do multilingual language models share abstract grammatical representations across languages, and if so, when do these develop? Following Sinclair et al. (2022), we use structural priming to test for abstract grammatical representations with causal effects on model outputs. We extend the approach to a Dutch-English bilingual setting, and we evaluate a Dutch-English language model during pre-training. We find that crosslingual structural priming effects emerge early after exposure to the second language, with less than 1M tokens of data in that language. We discuss implications for data contamination, low-resource transfer, and how abstract grammatical representations emerge in multilingual models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
399,185
2005.06223
DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics
Robots are still limited to controlled conditions that the robot designer knows in enough detail to endow the robot with the appropriate models or behaviors. Learning algorithms add some flexibility with the ability to discover the appropriate behavior given either some demonstrations or a reward to guide exploration with a reinforcement learning algorithm. Reinforcement learning algorithms rely on the definition of state and action spaces that define the reachable behaviors. Their adaptation capability critically depends on the representations of these spaces: small and discrete spaces result in fast learning, while large and continuous spaces are challenging and either require a long training period or prevent the robot from converging to an appropriate behavior. Besides the operational cycle of policy execution and the learning cycle, which works at a slower time scale to acquire new policies, we introduce the redescription cycle, a third cycle working at an even slower time scale to generate or adapt the representations required by the robot, its environment and the task. We introduce the challenges raised by this cycle and we present DREAM (Deferred Restructuring of Experience in Autonomous Machines), a developmental cognitive architecture that bootstraps this redescription process stage by stage, builds new state representations with appropriate motivations, and transfers the acquired knowledge across domains or tasks, or even across robots. We describe the results obtained so far with this approach and conclude with a discussion of the questions it raises in neuroscience.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
true
false
false
176,950
2403.13829
DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization
Recently, 3D generative models have shown promising performance in structure-based drug design by learning to generate ligands given target binding sites. However, modeling only the target-ligand distribution can hardly fulfill one of the main goals in drug discovery -- designing novel ligands with desired properties, e.g., high binding affinity and easy synthesizability. This challenge becomes particularly pronounced when the target-ligand pairs used for training do not align with these desired properties. Moreover, most existing methods aim at solving the \textit{de novo} design task, while many generative scenarios requiring flexible controllability, such as R-group optimization and scaffold hopping, have received little attention. In this work, we propose DecompOpt, a structure-based molecular optimization method based on a controllable and decomposed diffusion model. DecompOpt presents a new generation paradigm which combines optimization with conditional diffusion models to achieve desired properties while adhering to the molecular grammar. Additionally, DecompOpt offers a unified framework covering both \textit{de novo} design and controllable generation. To achieve this, ligands are decomposed into substructures, which allows fine-grained control and local optimization. Experiments show that DecompOpt can efficiently generate molecules with better properties than strong de novo baselines, and it demonstrates great potential in controllable generation tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
439,802
1312.5663
k-Sparse Autoencoders
Recently, it has been observed that when representations are learnt in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the k-sparse autoencoder, which is an autoencoder with linear activation function, where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
29,251
2408.03528
Exploring the extent of similarities in software failures across industries using LLMs
The rapid evolution of software development necessitates enhanced safety measures. Information about software failures at companies is becoming increasingly available through news articles. This research utilizes the Failure Analysis Investigation with LLMs (FAIL) model to extract industry-specific information. Although the FAIL model's database is rich in information, it could benefit from further categorization and industry-specific insights to better assist software engineers. In previous work, news articles were collected from reputable sources and categorized by incident in a database. Prompt engineering and Large Language Models (LLMs) were then applied to extract relevant information regarding the software failures. This research extends these methods by categorizing articles into specific domains and types of software failures. The results are represented visually through graphs. The analysis shows that, throughout the database, some software failures occur significantly more often in specific industries. This categorization provides a valuable resource for software engineers and companies to identify and address common failures. This research highlights the synergy between software engineering and Large Language Models (LLMs) to automate and enhance the analysis of software failures. By transforming data from the database into an industry-specific model, we provide a valuable resource that can be used to identify common vulnerabilities, predict potential risks, and implement proactive measures for preventing software failures. Leveraging the power of the current FAIL database and data visualization, we aim to provide an avenue for safer and more secure software in the future.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
479,053
1904.09339
Continuous-Time Birth-Death MCMC for Bayesian Regression Tree Models
Decision trees are flexible models that are well suited to many statistical regression problems. In a Bayesian framework for regression trees, Markov Chain Monte Carlo (MCMC) search algorithms are required to generate samples of tree models according to their posterior probabilities. The critical component of such an MCMC algorithm is the construction of good Metropolis-Hastings steps for updating the tree topology. However, such algorithms frequently suffer from local mode stickiness and poor mixing. As a result, the algorithms are slow to converge. Hitherto, authors have primarily used discrete-time birth/death mechanisms for Bayesian (sums of) regression tree models to explore the model space. These algorithms are efficient only if the acceptance rate is high, which is not always the case. Here we overcome this issue by developing a new search algorithm based on a continuous-time birth-death Markov process. This search algorithm explores the model space by jumping between parameter spaces corresponding to different tree structures. In the proposed algorithm, the moves between models are always accepted, which can dramatically improve the convergence and mixing properties of the MCMC algorithm. We provide theoretical support for the algorithm for Bayesian regression tree models and demonstrate its performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
128,345
2410.19341
Context-Based Visual-Language Place Recognition
In vision-based robot localization and SLAM, Visual Place Recognition (VPR) is essential. This paper addresses the problem of VPR, which involves accurately recognizing the location corresponding to a given query image. A popular approach to vision-based place recognition relies on low-level visual features. Despite significant progress in recent years, place recognition based on low-level visual features is challenging when there are changes in scene appearance. To address this, end-to-end training approaches have been proposed to overcome the limitations of hand-crafted features. However, these approaches still fail under drastic changes and require large amounts of labeled data to train models, presenting a significant limitation. Methods that leverage high-level semantic information, such as objects or categories, have been proposed to handle variations in appearance. In this paper, we introduce a novel VPR approach that remains robust to scene changes and does not require additional training. Our method constructs semantic image descriptors by extracting pixel-level embeddings using a zero-shot, language-driven semantic segmentation model. We validate our approach in challenging place recognition scenarios using a real-world public dataset. The experiments demonstrate that our method outperforms non-learned image representation techniques and off-the-shelf convolutional neural network (CNN) descriptors. Our code is available at https://github.com/woo-soojin/context-based-vlpr.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
502,270
2312.03795
AnimatableDreamer: Text-Guided Non-rigid 3D Model Generation and Reconstruction with Canonical Score Distillation
Advances in 3D generation have facilitated sequential 3D model generation (a.k.a. 4D generation), yet its application to animatable objects with large motion remains scarce. Our work proposes AnimatableDreamer, a text-to-4D generation framework capable of generating diverse categories of non-rigid objects on skeletons extracted from a monocular video. At its core, AnimatableDreamer is equipped with our novel optimization design dubbed Canonical Score Distillation (CSD), which lifts 2D diffusion to temporally consistent 4D generation. CSD, designed from a score gradient perspective, generates a canonical model with warp-robustness across different articulations. Notably, it also enhances the authenticity of bones and skinning by integrating inductive priors from a diffusion model. Furthermore, with multi-view distillation, CSD infers invisible regions, thereby improving the fidelity of monocular non-rigid reconstruction. Extensive experiments demonstrate the capability of our method in generating highly flexible text-guided 3D models from monocular video, while also showing improved reconstruction performance over existing non-rigid reconstruction methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
413,437
2003.07761
CycleISP: Real Image Restoration via Improved Data Synthesis
The availability of large-scale datasets has helped unleash the true potential of deep convolutional neural networks (CNNs). However, for the single-image denoising problem, capturing a real dataset is an unacceptably expensive and cumbersome procedure. Consequently, image denoising algorithms are mostly developed and evaluated on synthetic data that is usually generated with a widespread assumption of additive white Gaussian noise (AWGN). While CNNs achieve impressive results on these synthetic datasets, they do not perform well when applied to real camera images, as reported in recent benchmark datasets. This is mainly because the AWGN model is not adequate for modeling the real camera noise, which is signal-dependent and heavily transformed by the camera imaging pipeline. In this paper, we present a framework that models the camera imaging pipeline in forward and reverse directions. It allows us to produce any number of realistic image pairs for denoising in both RAW and sRGB spaces. By training a new image denoising network on realistic synthetic data, we achieve state-of-the-art performance on real camera benchmark datasets. Our model has ~5 times fewer parameters than the previous best method for RAW denoising. Furthermore, we demonstrate that the proposed framework generalizes beyond the image denoising problem, e.g., to color matching in stereoscopic cinema. The source code and pre-trained models are available at https://github.com/swz30/CycleISP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
168,535
2211.00745
Self-supervised Physics-based Denoising for Computed Tomography
Computed Tomography (CT) imposes risk on patients due to its inherent X-ray radiation, stimulating the development of low-dose CT (LDCT) imaging methods. Lowering the radiation dose reduces the health risks but leads to noisier measurements, which decreases the tissue contrast and causes artifacts in CT images. Ultimately, these issues could affect the perception of medical personnel and could cause misdiagnosis. Modern deep learning noise suppression methods alleviate the challenge but require low-noise-high-noise CT image pairs for training, which are rarely collected in regular clinical workflows. In this work, we introduce a new self-supervised approach for CT denoising, Noise2NoiseTD-ANM, that can be trained without high-dose CT projection ground truth images. Unlike previously proposed self-supervised techniques, the introduced method exploits the connections between adjacent projections and the actual model of CT noise distribution. Such a combination allows for interpretable no-reference denoising using nothing but the original noisy LDCT projections. Our experiments with LDCT data demonstrate that the proposed method reaches the level of fully supervised models, sometimes surpassing them, easily generalizes to various noise levels, and outperforms state-of-the-art self-supervised denoising algorithms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
327,980
2411.10476
Efficient Denoising Method to Improve The Resolution of Satellite Images
Satellites are widely used to estimate and monitor ground cover, providing critical information to address the challenges posed by climate change. High-resolution satellite images help to identify smaller features on the ground and to classify ground cover types. Small satellites have become very popular recently due to their cost-effectiveness. However, smaller satellites have weaker spatial resolution, and preprocessing with recent generative models has made it possible to enhance the resolution of their images. The objective of this paper is to propose computationally efficient guided or image-conditioned denoising diffusion models (DDMs) to perform super-resolution on low-quality images. Denoising based on stochastic ordinary differential equations (ODEs) typically takes hundreds of iterations, and this cost can be reduced using deterministic ODEs. I propose Consistency Models (CM) that utilize deterministic ODEs for efficient denoising and perform super-resolution on satellite images. The DOTA v2.0 image dataset, which is used to develop object detectors needed for urban planning and ground cover estimation, is used in this project. The Stable Diffusion model is used as the base model, and the DDM in Stable Diffusion is converted into a Consistency Model (CM) using Teacher-Student Distillation to apply deterministic denoising. Stable Diffusion with the modified CM successfully improved the resolution of satellite images by a factor of 16, and the computational time was reduced by a factor of 20 compared to stochastic denoising methods. The FID score of low-resolution images improved from 10.0 to 1.9 after increasing the image resolution using my algorithm for consistency models.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
508,653
2203.14498
EnCBP: A New Benchmark Dataset for Finer-Grained Cultural Background Prediction in English
While cultural backgrounds have been shown to affect linguistic expressions, existing natural language processing (NLP) research on culture modeling is overly coarse-grained and does not examine cultural differences among speakers of the same language. To address this problem and augment NLP models with cultural background features, we collect, annotate, manually validate, and benchmark EnCBP, a finer-grained news-based cultural background prediction dataset in English. Through language modeling (LM) evaluations and manual analyses, we confirm that there are noticeable differences in linguistic expressions among five English-speaking countries and across four states in the US. Additionally, our evaluations on nine syntactic (CoNLL-2003), semantic (PAWS-Wiki, QNLI, STS-B, and RTE), and psycholinguistic tasks (SST-5, SST-2, Emotion, and Go-Emotions) show that, while introducing cultural background information does not benefit the Go-Emotions task due to text domain conflicts, it noticeably improves deep learning (DL) model performance on other tasks. Our findings strongly support the importance of cultural background modeling to a wide variety of NLP tasks and demonstrate the applicability of EnCBP in culture-related research.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
288,032
2502.11481
Variable-frame CNNLSTM for Breast Nodule Classification using Ultrasound Videos
The intersection of medical imaging and artificial intelligence has become an important research direction in intelligent medical treatment, particularly in the analysis of medical images using deep learning for clinical diagnosis. Despite these advances, existing keyframe classification methods lack the extraction of time-series features, while ultrasonic video classification based on three-dimensional convolution requires uniform frame numbers across patients, resulting in poor feature extraction efficiency and model classification performance. This study proposes a novel video classification method based on CNN and LSTM, introducing NLP's long- and short-sentence processing scheme into video classification for the first time. The method reduces CNN-extracted image features to a 1x512 dimension, followed by sorting and compressing the feature vectors for LSTM training. Specifically, feature vectors are sorted by patient video frame numbers and padded with the value 0 to form variable batches, with invalid padding values compressed before LSTM training to conserve computing resources. Experimental results demonstrate that our variable-frame CNNLSTM method outperforms other approaches across all metrics, showing improvements of 3-6% in F1 score and 1.5% in specificity compared to keyframe methods. The variable-frame CNNLSTM also achieves better accuracy and precision than the equal-frame CNNLSTM. These findings validate the effectiveness of our approach in classifying variable-frame ultrasound videos and suggest potential applications in other medical imaging modalities.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
534,398
1512.08580
A Simple Baseline for Travel Time Estimation using Large-Scale Trip Data
The increased availability of large-scale trajectory data around the world provides rich information for the study of urban dynamics. For example, the New York City Taxi and Limousine Commission regularly releases source-destination information about trips in the taxis they regulate. Taxi data provide information about traffic patterns and thus enable the study of urban flow -- what will traffic between two locations look like at a certain date and time in the future? Existing big data methods try to outdo each other in terms of complexity and algorithmic sophistication. In the spirit of "big data beats algorithms", we present a very simple baseline which outperforms state-of-the-art approaches, including Bing Maps and Baidu Maps (whose APIs permit large-scale experimentation). Such a travel time estimation baseline has several important uses, such as navigation (fast travel time estimates can serve as approximate heuristics for A* search variants for path finding) and trip planning (which uses operating hours for popular destinations along with travel time estimates to create an itinerary).
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
50,530
2408.06010
DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation
Speech-driven 3D facial animation has garnered lots of attention thanks to its broad range of applications. Despite recent advancements in achieving realistic lip motion, current methods fail to capture the nuanced emotional undertones conveyed through speech and produce monotonous facial motion. These limitations result in blunt and repetitive facial animations, reducing user engagement and hindering their applicability. To address these challenges, we introduce DEEPTalk, a novel approach that generates diverse and emotionally rich 3D facial expressions directly from speech inputs. To achieve this, we first train DEE (Dynamic Emotion Embedding), which employs probabilistic contrastive learning to forge a joint emotion embedding space for both speech and facial motion. This probabilistic framework captures the uncertainty in interpreting emotions from speech and facial motion, enabling the derivation of emotion vectors from its multifaceted space. Moreover, to generate dynamic facial motion, we design TH-VQVAE (Temporally Hierarchical VQ-VAE) as an expressive and robust motion prior that overcomes the limitations of VAEs and VQ-VAEs. Utilizing these strong priors, we develop DEEPTalk, a talking head generator that non-autoregressively predicts codebook indices to create dynamic facial motion, incorporating a novel emotion consistency loss. Extensive experiments on various datasets demonstrate the effectiveness of our approach in creating diverse, emotionally expressive talking faces that maintain accurate lip-sync. The source code will be made publicly available soon.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
480,042
2412.08389
SweetieChat: A Strategy-Enhanced Role-playing Framework for Diverse Scenarios Handling Emotional Support Agent
Large Language Models (LLMs) have demonstrated promising potential in providing empathetic support during interactions. However, their responses often become verbose or overly formulaic, failing to adequately address the diverse emotional support needs of real-world scenarios. To tackle this challenge, we propose an innovative strategy-enhanced role-playing framework, designed to simulate authentic emotional support conversations. Specifically, our approach unfolds in two steps: (1) Strategy-Enhanced Role-Playing Interactions, which involve three pivotal roles -- Seeker, Strategy Counselor, and Supporter -- engaging in diverse scenarios to emulate real-world interactions and promote a broader range of dialogues; and (2) Emotional Support Agent Training, achieved through fine-tuning LLMs using our specially constructed dataset. Within this framework, we develop the \textbf{ServeForEmo} dataset, comprising an extensive collection of 3.7K+ multi-turn dialogues and 62.8K+ utterances. We further present \textbf{SweetieChat}, an emotional support agent capable of handling diverse open-domain scenarios. Extensive experiments and human evaluations confirm the framework's effectiveness in enhancing emotional support, highlighting its unique ability to provide more nuanced and tailored assistance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,066
2407.18357
Needle Segmentation Using GAN: Restoring Thin Instrument Visibility in Robotic Ultrasound
Ultrasound-guided percutaneous needle insertion is a standard procedure employed in both biopsy and ablation in clinical practices. However, due to the complex interaction between tissue and instrument, the needle may deviate from the in-plane view, resulting in a lack of close monitoring of the percutaneous needle. To address this challenge, we introduce a robot-assisted ultrasound (US) imaging system designed to seamlessly monitor the insertion process and autonomously restore the visibility of the inserted instrument when misalignment happens. To this end, the adversarial structure is presented to encourage the generation of segmentation masks that align consistently with the ground truth in high-order space. This study also systematically investigates the effects on segmentation performance by exploring various training loss functions and their combinations. When misalignment between the probe and the percutaneous needle is detected, the robot is triggered to perform transverse searching to optimize the positional and rotational adjustment to restore needle visibility. The experimental results on ex-vivo porcine samples demonstrate that the proposed method can precisely segment the percutaneous needle (with a tip error of $0.37\pm0.29mm$ and an angle error of $1.19\pm 0.29^{\circ}$). Furthermore, the needle appearance can be successfully restored under the repositioned probe pose in all 45 trials, with repositioning errors of $1.51\pm0.95mm$ and $1.25\pm0.79^{\circ}$.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
476,330