Dataset schema (column: type, observed range):
- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0 to 541k
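The one-hot boolean columns above can be collapsed back into per-row label lists; a minimal sketch, assuming the rows are already loaded into a pandas DataFrame with these column names:

```python
import pandas as pd

LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def row_labels(row: pd.Series) -> list[str]:
    """Collapse the one-hot boolean columns into a list of category names."""
    return [col for col in LABEL_COLS if bool(row[col])]

# Toy example with two of the rows shown below (booleans abbreviated).
df = pd.DataFrame(
    [
        {"id": "2405.06917", **{c: False for c in LABEL_COLS},
         "cs.HC": True, "cs.LG": True},
        {"id": "1509.00836", **{c: False for c in LABEL_COLS},
         "cs.IT": True, "Other": True},
    ]
)
df["labels"] = df.apply(row_labels, axis=1)
```

In a dict literal, explicit keys written after the `**` unpacking override the unpacked defaults, so only the true categories need to be spelled out.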
2405.06917
Design Requirements for Human-Centered Graph Neural Network Explanations
Graph neural networks (GNNs) are powerful graph-based machine-learning models that are popular in various domains, e.g., social media, transportation, and drug discovery. However, owing to their complex data representations, GNNs do not easily allow for human-intelligible explanations of their predictions, which can decrease trust in them and deter collaboration between AI experts and non-technical domain experts. Here, we first discuss two papers that aim to provide GNN explanations to domain experts in an accessible manner, and then establish a set of design requirements for human-centered GNN explanations. Finally, we offer two example prototypes to demonstrate some of those proposed requirements.
Labels: cs.HC, cs.LG
__index_level_0__: 453,503
2407.11492
MMSD-Net: Towards Multi-modal Stuttering Detection
Stuttering is a common speech impediment caused by irregular disruptions in speech production, affecting over 70 million people across the world. Standard automatic speech processing tools do not take speech ailments into account and are thereby unable to generate meaningful results when presented with stuttered speech as input. The automatic detection of stuttering is an integral step towards building efficient, context-aware speech processing systems. While previous work explores both statistical and neural approaches for stuttering detection, all of these methods are uni-modal in nature. This paper presents MMSD-Net, the first multi-modal neural framework for stuttering detection. Experiments and results demonstrate that incorporating the visual signal significantly aids stuttering detection, and our model yields an improvement of 2-17% in the F1-score over existing state-of-the-art uni-modal approaches.
Labels: cs.SD, cs.CL, Other
__index_level_0__: 473,484
2007.04169
An exploration of the influence of path choice in game-theoretic attribution algorithms
We compare machine learning explainability methods based on the theory of atomic (Shapley, 1953) and infinitesimal (Aumann and Shapley, 1974) games, in a theoretical and experimental investigation into how the model and choice of integration path can influence the resulting feature attributions. To gain insight into differences in attributions resulting from interventional Shapley values (Sundararajan and Najmi, 2019; Janzing et al., 2019; Chen et al., 2019) and Generalized Integrated Gradients (GIG) (Merrill et al., 2019) we note interventional Shapley is equivalent to a multi-path integration along $n!$ paths where $n$ is the number of model input features. Applying Stokes' theorem we show that the path symmetry of these two methods results in the same attributions when the model is composed of a sum of separable functions of individual features and a sum of two-feature products. We then perform a series of experiments with varying degrees of data missingness to demonstrate how interventional Shapley's multi-path approach can yield less consistent attributions than the single straight-line path of Aumann-Shapley. We argue this is because the multiple paths employed by interventional Shapley extend away from the training data manifold and are therefore more likely to pass through regions where the model has little support. In the absence of a more meaningful path choice, we therefore advocate the straight-line path since it will almost always pass closer to the data manifold. Among straight-line path attribution algorithms, GIG is uniquely robust since it will still yield Shapley values for atomic games modeled by decision trees.
Labels: cs.AI, cs.LG
__index_level_0__: 186,272
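The single straight-line path from baseline to input advocated in the abstract above is the core of plain integrated gradients; a minimal numpy sketch on a toy differentiable model (a generic illustration, not the paper's GIG implementation; the toy function and its analytic gradient are made up):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Approximate attributions by integrating the gradient along the
    straight line from `baseline` to `x` (Riemann midpoint rule)."""
    alphas = (np.arange(steps) + 0.5) / steps   # midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model f(x) = x0 * x1 + 3 * x2, with analytic gradient.
grad = lambda x: np.array([x[1], x[0], 3.0])
x = np.array([2.0, 4.0, 1.0])
baseline = np.zeros(3)
attr = integrated_gradients(grad, x, baseline)
# Completeness: attributions sum to f(x) - f(baseline) = 11.
```

Because the gradient here is linear along the path, the midpoint rule is exact and the attributions satisfy the completeness axiom without discretization error.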
2312.10585
ESDMR-Net: A Lightweight Network With Expand-Squeeze and Dual Multiscale Residual Connections for Medical Image Segmentation
Segmentation is an important task in a wide range of computer vision applications, including medical image analysis. Recent years have seen an increase in the complexity of medical image segmentation approaches based on sophisticated convolutional neural network architectures. This progress has led to incremental enhancements in performance on widely recognised benchmark datasets. However, most of the existing approaches are computationally demanding, which limits their practical applicability. This paper presents an expand-squeeze dual multiscale residual network (ESDMR-Net), which is a fully convolutional network that is particularly well-suited for resource-constrained computing hardware such as mobile devices. ESDMR-Net focuses on extracting multiscale features, enabling the learning of contextual dependencies among semantically distinct features. The ESDMR-Net architecture allows dual-stream information flow within encoder-decoder pairs. The expansion operation (depthwise separable convolution) makes all of the rich features with multiscale information available to the squeeze operation (bottleneck layer), which then extracts the necessary information for the segmentation task. The Expand-Squeeze (ES) block helps the network pay more attention to under-represented classes, which contributes to improved segmentation accuracy. To enhance the flow of information across multiple resolutions or scales, we integrated dual multiscale residual (DMR) blocks into the skip connection. This integration enables the decoder to access features from various levels of abstraction, ultimately resulting in more comprehensive feature representations. We present experiments on seven datasets from five distinct examples of applications. Our model achieved the best results despite having significantly fewer trainable parameters, with a reduction of two or even three orders of magnitude.
Labels: cs.LG, cs.CV
__index_level_0__: 416,232
1509.00836
Energy Harvesting Transmitters that Heat Up: Throughput Maximization under Temperature Constraints
Motivated by damage due to heating in sensor operation, we consider the throughput optimal offline data scheduling problem in an energy harvesting transmitter such that the resulting temperature increase remains below a critical level. We model the temperature dynamics of the transmitter as a linear system and determine the optimal transmit power policy under such temperature constraints as well as energy harvesting constraints over an AWGN channel. We first derive the structural properties of the solution for the general case with multiple energy arrivals. We show that the optimal power policy is piecewise monotone decreasing with possible jumps at the energy harvesting instants. We derive analytical expressions for the optimal solution in the single energy arrival case. We show that, in the single energy arrival case, the optimal power is monotone decreasing, the resulting temperature is monotone increasing, and both remain constant after the temperature hits the critical level. We then generalize the solution for the multiple energy arrival case.
Labels: cs.IT, Other
__index_level_0__: 46,539
2012.03143
Majority Opinion Diffusion in Social Networks: An Adversarial Approach
We introduce and study a novel majority-based opinion diffusion model. Consider a graph $G$, which represents a social network. Assume that initially a subset of nodes, called seed nodes or early adopters, are colored either black or white, which correspond to positive or negative opinion regarding a consumer product or a technological innovation. Then, in each round an uncolored node, which is adjacent to at least one colored node, chooses the most frequent color among its neighbors. Consider a marketing campaign which advertises a product of poor quality and its ultimate goal is that more than half of the population believe in the quality of the product at the end of the opinion diffusion process. We focus on three types of attackers which can select the seed nodes in a deterministic or random fashion and manipulate almost half of them to adopt a positive opinion toward the product (that is, to choose black color). We say that an attacker succeeds if a majority of nodes are black at the end of the process. Our main purpose is to characterize classes of graphs where an attacker cannot succeed. In particular, we prove that if the maximum degree of the underlying graph is not too large or if it has strong expansion properties, then it is fairly resilient to such attacks. Furthermore, we prove tight bounds on the stabilization time of the process (that is, the number of rounds it needs to end) in both settings of choosing the seed nodes deterministically and randomly. We also provide several hardness results for some optimization problems regarding stabilization time and choice of seed nodes.
Labels: cs.SI, cs.AI, Other
__index_level_0__: 210,005
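The diffusion process described in the abstract above is easy to simulate on a toy graph; a small sketch assuming synchronous rounds and black-favoring tie-breaking (both simplifications of the paper's adversarial model):

```python
from collections import Counter

def majority_diffusion(adj, seed_colors):
    """Round-based majority diffusion: each uncolored node adjacent to a
    colored node adopts the most frequent color among its colored
    neighbors. `adj` maps node -> set of neighbors; `seed_colors` maps
    seed node -> 'black' | 'white'. Ties here go to 'black'."""
    colors = dict(seed_colors)
    changed = True
    while changed:
        changed = False
        updates = {}
        for v in adj:
            if v in colors:
                continue
            counts = Counter(colors[u] for u in adj[v] if u in colors)
            if counts:
                updates[v] = ("black" if counts["black"] >= counts["white"]
                              else "white")
        if updates:
            colors.update(updates)
            changed = True
    return colors

# Path graph 0-1-2-3-4 with opposing seeds at both ends.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
final = majority_diffusion(adj, {0: "black", 4: "white"})
```

With the black-favoring tie rule, the middle node of the path goes black, so black wins a 3-2 majority; the paper studies exactly which graph structures prevent such an attacker from succeeding.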
2403.09700
Shapley Values-Powered Framework for Fair Reward Split in Content Produced by GenAI
It is evident that, currently, generative models are surpassed in quality by human professionals. However, with the advancements in Artificial Intelligence, this gap will narrow, leading to scenarios where individuals who have dedicated years of their lives to mastering a skill become obsolete due to their high costs, which are inherently linked to the time they require to complete a task -- a task that AI could accomplish in minutes or seconds. To avoid future social upheavals, we must, even now, contemplate how to fairly assess the contributions of such individuals in training generative models and how to compensate them for the reduction or complete loss of their incomes. In this work, we propose a method to structure collaboration between model developers and data providers. To achieve this, we employ Shapley Values to quantify the contribution of artist(s) in an image generated by the Stable Diffusion-v1.5 model and to equitably allocate the reward among them.
Labels: cs.AI, cs.CV
__index_level_0__: 437,869
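Shapley values over a small set of contributors can be computed exactly by averaging marginal contributions over all orderings; a generic sketch with a made-up two-artist reward function (not the paper's Stable Diffusion pipeline):

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values: average each player's marginal contribution
    over all n! orderings of the players."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        prev = value_fn(coalition)
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            shapley[p] += cur - prev
            prev = cur
    n_fact = factorial(len(players))
    return {p: v / n_fact for p, v in shapley.items()}

# Made-up reward: artists A and B earn 1 and 2 alone, 5 together.
reward = {frozenset(): 0.0, frozenset({"A"}): 1.0,
          frozenset({"B"}): 2.0, frozenset({"A", "B"}): 5.0}
phi = shapley_values(["A", "B"], lambda s: reward[frozenset(s)])
# phi == {"A": 2.0, "B": 3.0}: the joint surplus of 2 is split evenly.
```

The exact computation is exponential in the number of players, which is why attribution on real models relies on the approximations discussed in the path-choice abstract above.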
cs/9809113
Improving Tagging Performance by Using Voting Taggers
We present a bootstrapping method to develop an annotated corpus, which is especially useful for languages with few available resources. The method is being applied to develop a corpus of Spanish of over 5Mw. The method consists of taking advantage of the collaboration of two different POS taggers: the cases in which both taggers agree exhibit higher accuracy and are used to retrain the taggers.
Labels: cs.CL
__index_level_0__: 540,415
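The agreement step described in the abstract above reduces to keeping the tokens on which two taggers assign the same tag; a minimal sketch (the tokens and tagset below are illustrative only):

```python
def agreement_subset(tokens, tags_a, tags_b):
    """Keep only the (token, tag) pairs on which two taggers agree --
    the higher-precision subset used to retrain the taggers."""
    return [(tok, ta) for tok, ta, tb in zip(tokens, tags_a, tags_b)
            if ta == tb]

tokens = ["la", "casa", "verde"]
tags_a = ["DET", "NOUN", "ADJ"]
tags_b = ["DET", "NOUN", "VERB"]
agreed = agreement_subset(tokens, tags_a, tags_b)
# [('la', 'DET'), ('casa', 'NOUN')]
```

Retraining on the agreed subset and re-tagging the rest is what makes the procedure a bootstrap: each iteration enlarges the reliably annotated portion of the corpus.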
1206.6877
Inference in Hybrid Bayesian Networks Using Mixtures of Gaussians
The main goal of this paper is to describe a method for exact inference in general hybrid Bayesian networks (BNs) (with a mixture of discrete and continuous chance variables). Our method consists of approximating general hybrid Bayesian networks by a mixture of Gaussians (MoG) BNs. There exists a fast algorithm by Lauritzen-Jensen (LJ) for making exact inferences in MoG Bayesian networks, and there exists a commercial implementation of this algorithm. However, this algorithm can only be used for MoG BNs. Some limitations of such networks are as follows. All continuous chance variables must have conditional linear Gaussian distributions, and discrete chance nodes cannot have continuous parents. The methods described in this paper will enable us to use the LJ algorithm for a bigger class of hybrid Bayesian networks. This includes networks with continuous chance nodes with non-Gaussian distributions, networks with no restrictions on the topology of discrete and continuous variables, networks with conditionally deterministic variables that are a nonlinear function of their continuous parents, and networks with continuous chance variables whose variances are functions of their parents.
Labels: cs.AI
__index_level_0__: 17,101
1711.10521
A Recursive Bayesian Approach To Describe Retinal Vasculature Geometry
Demographic studies suggest that changes in the retinal vasculature geometry, especially in vessel width, are associated with the incidence or progression of eye-related or systemic diseases. To date, the main information source for width estimation from fundus images has been the intensity profile between vessel edges. However, there are many factors affecting the intensity profile: pathologies, the central light reflex and local illumination levels, to name a few. In this study, we introduce three information sources for width estimation. These are the probability profiles of vessel interior, centreline and edge locations generated by a deep network. The probability profiles provide direct access to vessel geometry and are used in the likelihood calculation for a Bayesian method, particle filtering. We also introduce a geometric model which can handle non-ideal conditions of the probability profiles. Our experiments conducted on the REVIEW dataset yielded consistent estimates of vessel width, even in cases when one of the vessel edges is difficult to identify. Moreover, our results suggest that the method is better than human observers at locating edges of low contrast vessels.
Labels: cs.AI, cs.CV
__index_level_0__: 85,615
2212.14736
PRISM: Privacy Preserving Healthcare Internet of Things Security Management
Consumer healthcare Internet of Things (IoT) devices are gaining popularity in our homes and hospitals. These devices provide continuous monitoring at a low cost and can be used to augment high-precision medical equipment. However, major challenges remain in applying pre-trained global models for anomaly detection on smart health monitoring, for a diverse set of individuals that they provide care for. In this paper, we propose PRISM, an edge-based system for experimenting with in-home smart healthcare devices. We develop a rigorous methodology that relies on automated IoT experimentation. We use a rich real-world dataset from in-home patient monitoring from 44 households of People Living With Dementia (PLWD) over two years. Our results indicate that anomalies can be identified with accuracy up to 99% and mean training times as low as 0.88 seconds. While all models achieve high accuracy when trained on the same patient, their accuracy degrades when evaluated on different patients.
Labels: cs.SY, cs.CR
__index_level_0__: 338,723
2203.01994
Fast Neural Architecture Search for Lightweight Dense Prediction Networks
We present LDP, a lightweight dense prediction neural architecture search (NAS) framework. Starting from a pre-defined generic backbone, LDP applies the novel Assisted Tabu Search for efficient architecture exploration. LDP is fast and suitable for various dense estimation problems, unlike previous NAS methods that are either computationally demanding or deployed only for a single subtask. The performance of LDP is evaluated on monocular depth estimation, semantic segmentation, and image super-resolution tasks on diverse datasets, including NYU-Depth-v2, KITTI, Cityscapes, COCO-stuff, DIV2K, Set5, Set14, BSD100, and Urban100. Experiments show that the proposed framework yields consistent improvements on all tested dense prediction tasks, while being $5\%-315\%$ more compact in terms of the number of model parameters than prior art.
Labels: cs.CV
__index_level_0__: 283,590
2208.09418
SAFARI: Versatile and Efficient Evaluations for Robustness of Interpretability
Poor interpretability of Deep Learning (DL) models is a barrier to trustworthy AI. Despite great efforts made by the Explainable AI (XAI) community, explanations lack robustness: indistinguishable input perturbations may lead to different XAI results. Thus, it is vital to assess how robust DL interpretability is for a given XAI method. In this paper, we identify several challenges that the state of the art is unable to cope with collectively: i) existing metrics are not comprehensive; ii) XAI techniques are highly heterogeneous; iii) misinterpretations are normally rare events. To tackle these challenges, we introduce two black-box evaluation methods, concerning the worst-case interpretation discrepancy and a probabilistic notion of overall robustness, respectively. A Genetic Algorithm (GA) with a bespoke fitness function is used to solve the constrained optimisation for efficient worst-case evaluation. Subset Simulation (SS), dedicated to estimating rare event probabilities, is used for evaluating overall robustness. Experiments show that the accuracy, sensitivity, and efficiency of our methods outperform the state of the art. Finally, we demonstrate two applications of our methods: ranking robust XAI methods and selecting training schemes to improve both classification and interpretation robustness.
Labels: cs.AI, cs.LG
__index_level_0__: 313,694
2411.00004
RapidDock: Unlocking Proteome-scale Molecular Docking
Accelerating molecular docking -- the process of predicting how molecules bind to protein targets -- could boost small-molecule drug discovery and revolutionize medicine. Unfortunately, current molecular docking tools are too slow to screen potential drugs against all relevant proteins, which often results in missed drug candidates or unexpected side effects occurring in clinical trials. To address this gap, we introduce RapidDock, an efficient transformer-based model for blind molecular docking. RapidDock achieves at least a $100 \times$ speed advantage over existing methods without compromising accuracy. On the Posebusters and DockGen benchmarks, our method achieves $52.1\%$ and $44.0\%$ success rates ($\text{RMSD} < 2$ Å), respectively. The average inference time is $0.04$ seconds on a single GPU, highlighting RapidDock's potential for large-scale docking studies. We examine the key features of RapidDock that enable leveraging the transformer architecture for molecular docking, including the use of relative distance embeddings of $3$D structures in attention matrices, pre-training on protein folding, and a custom loss function invariant to molecular symmetries.
Labels: cs.LG
__index_level_0__: 504,393
1212.3996
Increasing Air Traffic: What is the Problem?
Nowadays, huge efforts are made to modernize air traffic management systems to cope with uncertainty, complexity and sub-optimality. One answer is to enhance information sharing between the stakeholders. This paper introduces a framework that bridges the gap between air traffic management and air traffic control on the one hand, and between the ground, the approach and the en-route centers on the other hand. An original system is presented that has three essential components: the trajectory models, the optimization process, and the monitoring process. The uncertainty of the trajectory is modeled with a Bayesian Network, where the nodes are associated with two types of random variables: the time of overflight on metering points of the airspace, and the traveling time of the routes linking these points. The resulting Bayesian Network covers the complete airspace, and Monte Carlo simulations are done to estimate the probabilities of sector congestion and delays. On top of this trajectory model, an optimization process minimizes these probabilities by tuning the parameters of the Bayesian trajectory model related to overflight times on metering points. The last component is the monitoring process, which continuously updates the situation of the airspace, modifying the trajectory uncertainties according to the actual positions of aircraft. After each update, a new optimal set of overflight times is computed, and can be communicated to the controllers as clearances for the aircraft pilots. The paper presents a formal specification of this global optimization problem, whose underlying rationale was derived with the help of air traffic controllers at Thales Air Systems.
Labels: cs.AI, cs.SY
__index_level_0__: 20,448
2302.08058
Learning Non-Local Spatial-Angular Correlation for Light Field Image Super-Resolution
Exploiting spatial-angular correlation is crucial to light field (LF) image super-resolution (SR), but is highly challenging due to its non-local property caused by the disparities among LF images. Although many deep neural networks (DNNs) have been developed for LF image SR and achieved continuously improved performance, existing methods cannot well leverage the long-range spatial-angular correlation and thus suffer a significant performance drop when handling scenes with large disparity variations. In this paper, we propose a simple yet effective method to learn the non-local spatial-angular correlation for LF image SR. In our method, we adopt the epipolar plane image (EPI) representation to project the 4D spatial-angular correlation onto multiple 2D EPI planes, and then develop a Transformer network with repetitive self-attention operations to learn the spatial-angular correlation by modeling the dependencies between each pair of EPI pixels. Our method can fully incorporate the information from all angular views while achieving a global receptive field along the epipolar line. We conduct extensive experiments with insightful visualizations to validate the effectiveness of our method. Comparative results on five public datasets show that our method not only achieves state-of-the-art SR performance, but is also robust to disparity variations. Code is publicly available at https://github.com/ZhengyuLiang24/EPIT.
Labels: cs.CV
__index_level_0__: 345,922
1410.0640
Term-Weighting Learning via Genetic Programming for Text Classification
This paper describes a novel approach to learning term-weighting schemes (TWSs) in the context of text classification. In text mining, a TWS determines the way in which documents will be represented in a vector space model, before applying a classifier. Whereas acceptable performance has been obtained with standard TWSs (e.g., Boolean and term-frequency schemes), the definition of TWSs has traditionally been an art. Further, it is still a difficult task to determine the best TWS for a particular problem, and it is not yet clear whether better schemes than those currently available can be generated by combining known TWSs. We propose in this article a genetic program that aims at learning effective TWSs that can improve the performance of current schemes in text classification. The genetic program learns how to combine a set of basic units to give rise to discriminative TWSs. We report an extensive experimental study comprising data sets from thematic and non-thematic text classification as well as from image classification. Our study shows the validity of the proposed method; in fact, we show that TWSs learned with the genetic program outperform traditional schemes and other TWSs proposed in recent works. Further, we show that TWSs learned from a specific domain can be effectively used for other tasks.
Labels: cs.LG, cs.NE
__index_level_0__: 36,491
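A term-weighting scheme in the sense used above composes basic units such as term frequency and document frequency; a hand-rolled tf-idf sketch showing one fixed combination (where the paper's genetic program instead searches over such combinations):

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Weight each term in each document by tf * idf, one fixed TWS
    built from two basic units (term frequency and document frequency)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))   # document frequency
    weighted = []
    for doc in docs:
        tf = Counter(doc)                               # term frequency
        weighted.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weighted

docs = [["cat", "sat"], ["cat", "cat"], ["dog"]]
w = tfidf_weights(docs)
# "cat" appears in 2 of 3 docs, so it is down-weighted relative to
# the rarer terms "sat" and "dog".
```

Swapping `tf[t] * math.log(n / df[t])` for another expression over the same basic units yields a different TWS; the genetic program evolves exactly such expressions.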
2210.07703
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence
Distributed optimization is the standard way of speeding up machine learning training, and most of the research in the area focuses on distributed first-order, gradient-based methods. Yet, there are settings where some computationally bounded nodes may not be able to implement first-order, gradient-based optimization, while they could still contribute to joint optimization tasks. In this paper, we initiate the study of hybrid decentralized optimization, studying settings where nodes with zeroth-order and first-order optimization capabilities co-exist in a distributed system and attempt to jointly solve an optimization task over some data distribution. We essentially show that, under reasonable parameter settings, such a system can not only withstand noisier zeroth-order agents but can even benefit from integrating such agents into the optimization process, rather than ignoring their information. At the core of our approach is a new analysis of distributed optimization with noisy and possibly biased gradient estimators, which may be of independent interest. Our results hold for both convex and non-convex objectives. Experimental results on standard optimization tasks confirm our analysis, showing that hybrid first-zeroth order optimization can be practical, even when training deep neural networks.
Labels: cs.LG, Other
__index_level_0__: 323,831
1810.05456
Modeling Varying Camera-IMU Time Offset in Optimization-Based Visual-Inertial Odometry
Combining cameras and inertial measurement units (IMUs) has been proven effective in motion tracking, as these two sensing modalities offer complementary characteristics that are suitable for fusion. While most works focus on global-shutter cameras and synchronized sensor measurements, consumer-grade devices are mostly equipped with rolling-shutter cameras and suffer from imperfect sensor synchronization. In this work, we propose a nonlinear optimization-based monocular visual inertial odometry (VIO) with varying camera-IMU time offset modeled as an unknown variable. Our approach is able to handle the rolling-shutter effects and imperfect sensor synchronization in a unified way. Additionally, we introduce an efficient algorithm based on dynamic programming and red-black tree to speed up IMU integration over variable-length time intervals during the optimization. An uncertainty-aware initialization is also presented to launch the VIO robustly. Comparisons with state-of-the-art methods on the Euroc dataset and mobile phone data are shown to validate the effectiveness of our approach.
Labels: cs.RO, cs.CV
__index_level_0__: 110,234
2110.09291
Reconfigurable Intelligent Surface-Enhanced OFDM Communications via Delay Adjustable Metasurface
Reconfigurable intelligent surface (RIS) is a promising technology for establishing spectral- and energy-efficient wireless networks. In this paper, we study RIS-enhanced orthogonal frequency division multiplexing (OFDM) communications, which generalize the existing RIS-driven context focusing only on frequency-flat channels. Firstly, we introduce the delay adjustable metasurface (DAM) relying on varactor diodes. In contrast to existing reflecting elements, each one in DAM is capable of storing and retrieving the impinging electromagnetic waves upon dynamically controlling its electromagnetically induced transparency (EIT) properties, thus additionally imposing an extra delay onto the reflected incident signals. Secondly, we formulate the rate-maximization problem by jointly optimizing the transmit power allocation and the RIS reflection coefficients as well as the RIS delays. Furthermore, to address the coupling among optimization variables, we propose an efficient algorithm to achieve a high-quality solution for the formulated non-convex design problem by alternately optimizing the transmit power allocation and the RIS reflection pattern, including the reflection coefficients and the delays. Thirdly, to circumvent the high complexity for optimizing the RIS reflection coefficients, we conceive a low-complexity scheme upon aligning the strongest taps of all reflected channels, while ensuring that the maximum delay spread after introducing extra RIS delays does not exceed the length of the cyclic prefix (CP). Finally, simulation results demonstrate that the proposed design significantly improves the OFDM rate performance as well as the RIS's adaptability to wideband signals compared to baseline schemes without employing DAM.
Labels: cs.IT
__index_level_0__: 261,757
1806.02081
Distributed vs. Centralized Scheduling in D2D-enabled Cellular Networks
Employing channel-adaptive resource allocation can yield a large enhancement in almost any performance metric of Device-to-Device (D2D) communications. We observe that D2D users are able to estimate their local Channel State Information (CSI); however, the base station needs some signaling exchange to acquire this information. Based on the D2D users' knowledge of their local CSI, we provide a scheduling framework that shows how a distributed approach outperforms a centralized one. We start by proposing a centralized scheduling scheme that requires knowledge of the D2D links' CSI at the base station level. This CSI reporting suffers from the limited number of resources available for feedback transmission. Therefore, we benefit from the users' knowledge of their local CSI to develop a distributed algorithm for D2D resource allocation. In the distributed approach, collisions may occur between the different CSI reports; thus, a collision reduction algorithm is proposed. We describe how both the centralized and distributed algorithms can be implemented in practice. Furthermore, numerical results are presented to corroborate our claims and demonstrate the gain that the proposed scheduling algorithms bring to cellular networks.
Labels: cs.IT
__index_level_0__: 99,704
1802.02498
Spectral Learning of Binomial HMMs for DNA Methylation Data
We consider learning the parameters of Binomial Hidden Markov Models, which may be used to model DNA methylation data. The standard algorithm for the problem is EM, which is computationally expensive for sequences on the scale of the mammalian genome. Recently developed spectral algorithms can learn the parameters of latent variable models via tensor decomposition and are highly efficient for large data. However, these methods have only been applied to categorical HMMs, and the main challenge is how to extend them to Binomial HMMs while still retaining computational efficiency. We address this challenge by introducing a new feature-map based approach that exploits specific properties of Binomial HMMs. We provide theoretical performance guarantees for our algorithm and evaluate it on real DNA methylation data.
Labels: cs.LG
__index_level_0__: 89,779
1708.02300
Reinforced Video Captioning with Entailment Rewards
Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), achieving significant improvements over the baseline, based on both automatic metrics and human evaluation on multiple datasets. Next, we propose a novel entailment-enhanced reward (CIDEnt) that corrects phrase-matching based metrics (such as CIDEr) to only allow for logically-implied partial matches and avoid contradictions, achieving further significant improvements over the CIDEr-reward model. Overall, our CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.
false
false
false
false
true
false
true
false
true
false
false
true
false
false
false
false
false
false
78,563
2208.04313
AUTOSHAPE: An Autoencoder-Shapelet Approach for Time Series Clustering
Time series shapelets are discriminative subsequences that have recently been found effective for time series clustering (TSC). Shapelets are convenient for interpreting the clusters. Thus, the main challenge for TSC is to discover high-quality variable-length shapelets to discriminate different clusters. In this paper, we propose a novel autoencoder-shapelet approach (AUTOSHAPE), which is the first study to take advantage of both autoencoders and shapelets for determining shapelets in an unsupervised manner. An autoencoder is specially designed to learn high-quality shapelets. More specifically, for guiding the latent representation learning, we employ the latest self-supervised loss to learn the unified embeddings for variable-length shapelet candidates (time series subsequences) of different variables, and propose the diversity loss to select the discriminating embeddings in the unified space. We introduce the reconstruction loss to recover shapelets in the original time series space for clustering. Finally, we adopt the Davies-Bouldin index (DBI) to inform AUTOSHAPE of the clustering performance during learning. We present extensive experiments on AUTOSHAPE. To evaluate the clustering performance on univariate time series (UTS), we compare AUTOSHAPE with 15 representative methods using UCR archive datasets. To study the performance on multivariate time series (MTS), we evaluate AUTOSHAPE on 30 UEA archive datasets against 5 competitive methods. The results validate that AUTOSHAPE is the best among all the methods compared. We interpret clusters with shapelets, and obtain interesting intuitions about clusters in two UTS case studies and one MTS case study, respectively.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
312,067
2407.07550
Evaluating the method reproducibility of deep learning models in the biodiversity domain
Artificial Intelligence (AI) is revolutionizing biodiversity research by enabling advanced data analysis, species identification, and habitat monitoring, thereby enhancing conservation efforts. Ensuring reproducibility in AI-driven biodiversity research is crucial for fostering transparency, verifying results, and promoting the credibility of ecological findings. This study investigates the reproducibility of deep learning (DL) methods within the biodiversity domain. We design a methodology for evaluating the reproducibility of biodiversity-related publications that employ DL techniques across three stages. We define ten variables essential for method reproducibility, divided into four categories: resource requirements, methodological information, uncontrolled randomness, and statistical considerations. These categories subsequently serve as the basis for defining different levels of reproducibility. We manually extract the availability of these variables from a curated dataset comprising 61 publications identified using the keywords provided by biodiversity experts. Our study shows that the dataset is shared in 47% of the publications; however, a significant number of the publications lack comprehensive information on deep learning methods, including details regarding randomness.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
471,808
2112.09631
Sublinear Time Approximation of Text Similarity Matrices
We study algorithms for approximating pairwise similarity matrices that arise in natural language processing. Generally, computing a similarity matrix for $n$ data points requires $\Omega(n^2)$ similarity computations. This quadratic scaling is a significant bottleneck, especially when similarities are computed via expensive functions, e.g., via transformer models. Approximation methods reduce this quadratic complexity, often by using a small subset of exactly computed similarities to approximate the remainder of the complete pairwise similarity matrix. Significant work focuses on the efficient approximation of positive semidefinite (PSD) similarity matrices, which arise e.g., in kernel methods. However, much less is understood about indefinite (non-PSD) similarity matrices, which often arise in NLP. Motivated by the observation that many of these matrices are still somewhat close to PSD, we introduce a generalization of the popular Nystr\"{o}m method to the indefinite setting. Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix, producing a rank-$s$ approximation with just $O(ns)$ similarity computations. We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices arising in NLP tasks. We demonstrate high accuracy of the approximated similarity matrices in the downstream tasks of document classification, sentence similarity, and cross-document coreference.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
272,204
2003.05410
How Powerful Are Randomly Initialized Pointcloud Set Functions?
We study random embeddings produced by untrained neural set functions, and show that they are powerful representations which well capture the input features for downstream tasks such as classification, and are often linearly separable. We obtain surprising results that show that random set functions can often obtain close to or even better accuracy than fully trained models. We investigate factors that affect the representative power of such embeddings quantitatively and qualitatively.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
167,848
2103.14529
Real-Time and Accurate Object Detection in Compressed Video by Long Short-term Feature Aggregation
Video object detection is a fundamental problem in computer vision and has a wide spectrum of applications. Based on deep networks, video object detection is actively studied for pushing the limits of detection speed and accuracy. To reduce the computation cost, we sparsely sample key frames in video and treat the remaining frames as non-key frames; a large and deep network is used to extract features for key frames and a tiny network is used for non-key frames. To enhance the features of non-key frames, we propose a novel short-term feature aggregation method to propagate the rich information in key frame features to non-key frame features in a fast way. The fast feature aggregation is enabled by the freely available motion cues in compressed videos. Further, key frame features are also aggregated based on optical flow. The propagated deep features are then integrated with the directly extracted features for object detection. The feature extraction and feature integration parameters are optimized in an end-to-end manner. The proposed video object detection network is evaluated on the large-scale ImageNet VID benchmark and achieves 77.2\% mAP, which is on-par with state-of-the-art accuracy, at the speed of 30 FPS using a Titan X GPU. The source codes are available at \url{https://github.com/hustvl/LSFA}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
226,884
2301.05919
Efficient Evaluation Methods for Neural Architecture Search: A Survey
Neural Architecture Search (NAS) has received increasing attention because of its exceptional merits in automating the design of Deep Neural Network (DNN) architectures. However, the performance evaluation process, as a key part of NAS, often requires training a large number of DNNs. This inevitably makes NAS computationally expensive. In past years, many Efficient Evaluation Methods (EEMs) have been proposed to address this critical issue. In this paper, we comprehensively survey the EEMs published to date, and provide a detailed analysis to motivate the further development of this research direction. Specifically, we divide the existing EEMs into four categories based on the number of DNNs trained for constructing these EEMs. The categorization can reflect the degree of efficiency in principle, which can in turn help quickly grasp the methodological features. In surveying each category, we further discuss the design principles and analyze the strengths and weaknesses to clarify the landscape of existing EEMs, thus making it easy to understand the research trends of EEMs. Furthermore, we also discuss the current challenges and issues to identify future research directions in this emerging topic. In summary, this survey provides a convenient overview of EEMs for interested users, who can easily select the proper EEM for the tasks at hand. In addition, researchers in the NAS field could continue exploring the future directions suggested in the paper.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
340,492
2301.11422
RMSim: Controlled Respiratory Motion Simulation on Static Patient Scans
This work aims to generate realistic anatomical deformations from static patient scans. Specifically, we present a method to generate these deformations/augmentations via deep learning driven respiratory motion simulation that provides the ground truth for validating deformable image registration (DIR) algorithms and driving more accurate deep learning based DIR. We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images and predicts future breathing phases given a static CT image. The predicted respiratory patterns, represented by time-varying displacement vector fields (DVFs) at different breathing phases, are modulated through auxiliary inputs of 1D breathing traces so that a larger amplitude in the trace results in more significant predicted deformation. Stacked 3D-ConvLSTMs are used to capture the spatial-temporal respiration patterns. Training loss includes a smoothness loss in the DVF and mean-squared error between the predicted and ground truth phase images. A spatial transformer deforms the static CT with the predicted DVF to generate the predicted phase image. 10-phase 4D-CTs of 140 internal patients were used to train and test RMSim. The trained RMSim was then used to augment a public DIR challenge dataset for training VoxelMorph to show the effectiveness of RMSim-generated deformation augmentation. We validated our RMSim output with both private and public benchmark datasets (healthy and cancer patients). The proposed approach can be used for validating DIR algorithms as well as for patient-specific augmentations to improve deep learning DIR algorithms. The code, pretrained models, and augmented DIR validation datasets will be released at https://github.com/nadeemlab/SeqX2Y.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
342,135
1508.00691
Deterministic Differential Search Algorithm for Distributed Sensor/Relay Networks
For distributed sensor/relay networks, high reliability and power efficiency are often required. However, several implementation issues arise in practice. One such problem is that all the distributed transmitters have a limited power supply, since the power source of the transmitters cannot be recharged continually. To resolve this, distributed beamforming has been proposed as a viable solution, where all distributed transmitters seek to align in phase at the receiver end. However, it is difficult to implement such transmit beamforming in a distributed fashion in practice, since perfect channel state information (CSI) needs to be made available at all distributed transmitters, requiring tremendous overhead to feed back CSI from the receiver to all distributed transmitters. In this paper, we propose a novel algorithm that belongs to the category of deterministic phase adjustment algorithms: the Deterministic Differential Search Algorithm (DDSA), where the differences between the measured received signal strength (RSS) are utilized judiciously to help predict the deterministic phase adjustment performed at the distributed transmitters. Numerical simulations demonstrate rapid convergence to a pre-determined threshold.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
45,704
2205.05192
Social Inclusion in Curated Contexts: Insights from Museum Practices
Artificial intelligence literature suggests that minority and fragile communities in society can be negatively impacted by machine learning algorithms due to inherent biases in the design process, which lead to socially exclusive decisions and policies. Faced with similar challenges in dealing with an increasingly diversified audience, the museum sector has seen changes in theory and practice, particularly in the areas of representation and meaning-making. While rarity and grandeur used to be at the centre stage of the early museum practices, folk life and museums' relationships with the diverse communities they serve become a widely integrated part of the contemporary practices. These changes address issues of diversity and accessibility in order to offer more socially inclusive services. Drawing on these changes and reflecting back on the AI world, we argue that the museum experience provides useful lessons for building AI with socially inclusive approaches, especially in situations in which both a collection and access to it will need to be curated or filtered, as frequently happens in search engines, recommender systems and digital libraries. We highlight three principles: (1) Instead of upholding the value of neutrality, practitioners are aware of the influences of their own backgrounds and those of others on their work. By not claiming to be neutral but practising cultural humility, the chances of addressing potential biases can be increased. (2) There should be room for situational interpretation beyond the stages of data collection and machine learning. Before applying models and predictions, the contexts in which relevant parties exist should be taken into account. (3) Community participation serves the needs of communities and has the added benefit of bringing practitioners and communities together.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
295,863
2410.14185
Combining Hough Transform and Deep Learning Approaches to Reconstruct ECG Signals From Printouts
This work presents our team's (SignalSavants) winning contribution to the 2024 George B. Moody PhysioNet Challenge. The Challenge had two goals: reconstruct ECG signals from printouts and classify them for cardiac diseases. Our focus was the first task. Despite many ECGs being digitally recorded today, paper ECGs remain common throughout the world. Digitising them could help build more diverse datasets and enable automated analyses. However, the presence of varying recording standards and poor image quality requires a data-centric approach for developing robust models that can generalise effectively. Our approach combines the creation of a diverse training set, Hough transform to rotate images, a U-Net based segmentation model to identify individual signals, and mask vectorisation to reconstruct the signals. We assessed the performance of our models using the 10-fold stratified cross-validation (CV) split of 21,799 recordings proposed by the PTB-XL dataset. On the digitisation task, our model achieved an average CV signal-to-noise ratio of 17.02 and an official Challenge score of 12.15 on the hidden set, securing first place in the competition. Our study shows the challenges of building robust, generalisable, digitisation approaches. Such models require large amounts of resources (data, time, and computational power) but have great potential in diversifying the data available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
499,920
2006.04996
Implicit Class-Conditioned Domain Alignment for Unsupervised Domain Adaptation
We present an approach for unsupervised domain adaptation---with a strong focus on practical considerations of within-domain class imbalance and between-domain class distribution shift---from a class-conditioned domain alignment perspective. Current methods for class-conditioned domain alignment aim to explicitly minimize a loss function based on pseudo-label estimations of the target domain. However, these methods suffer from pseudo-label bias in the form of error accumulation. We propose a method that removes the need for explicit optimization of model parameters from pseudo-labels directly. Instead, we present a sampling-based implicit alignment approach, where the sample selection procedure is implicitly guided by the pseudo-labels. Theoretical analysis reveals the existence of a domain-discriminator shortcut in misaligned classes, which is addressed by the proposed implicit alignment approach to facilitate domain-adversarial learning. Empirical results and ablation studies confirm the effectiveness of the proposed approach, especially in the presence of within-domain class imbalance and between-domain class distribution shift.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
180,889
2202.03951
On Sibson's $\alpha$-Mutual Information
We explore a family of information measures that stems from R\'enyi's $\alpha$-Divergences with $\alpha<0$. In particular, we extend the definition of Sibson's $\alpha$-Mutual Information to negative values of $\alpha$ and show several properties of these objects. Moreover, we highlight how this family of information measures is related to functional inequalities that can be employed in a variety of fields, including lower-bounds on the Risk in Bayesian Estimation Procedures.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
279,400
2306.10739
COLE: A Column-based Learned Storage for Blockchain Systems
Blockchain systems suffer from high storage costs as every node needs to store and maintain the entire blockchain data. After investigating Ethereum's storage, we find that the storage cost mostly comes from the index, i.e., Merkle Patricia Trie (MPT). To support provenance queries, MPT persists the index nodes during the data update, which adds too much storage overhead. To reduce the storage size, an initial idea is to leverage the emerging learned index technique, which has been shown to have a smaller index size and more efficient query performance. However, directly applying it to the blockchain storage results in even higher overhead owing to the requirement of persisting index nodes and the learned index's large node size. To tackle this, we propose COLE, a novel column-based learned storage for blockchain systems. We follow the column-based database design to contiguously store each state's historical values, which are indexed by learned models to facilitate efficient data retrieval and provenance queries. We develop a series of write-optimized strategies to realize COLE in disk environments. Extensive experiments are conducted to validate the performance of the proposed COLE system. Compared with MPT, COLE reduces the storage size by up to 94% while improving the system throughput by $1.4\times$-$5.4\times$.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
374,344
2502.03200
CORTEX: A Cost-Sensitive Rule and Tree Extraction Method
Tree-based and rule-based machine learning models play pivotal roles in explainable artificial intelligence (XAI) due to their unique ability to provide explanations in the form of tree or rule sets that are easily understandable and interpretable, making them essential for applications in which trust in model decisions is necessary. These transparent models are typically used in surrogate modeling, a post-hoc XAI approach for explaining the logic of black-box models, enabling users to comprehend and trust complex predictive systems while maintaining competitive performance. This study proposes the Cost-Sensitive Rule and Tree Extraction (CORTEX) method, a novel rule-based XAI algorithm grounded in the multi-class cost-sensitive decision tree (CSDT) method. The original version of the CSDT is extended to classification problems with more than two classes by inducing the concept of an n-dimensional class-dependent cost matrix. The performance of CORTEX as a rule-extractor XAI method is compared to other post-hoc tree and rule extraction methods across several datasets with different numbers of classes. Several quantitative evaluation metrics are employed to assess the explainability of generated rule sets. Our findings demonstrate that CORTEX is competitive with other tree-based methods and can be superior to other rule-based methods across different datasets. The extracted rule sets suggest the advantages of using the CORTEX method over other methods by producing smaller rule sets with shorter rules on average across datasets with a diverse number of classes. Overall, the results underscore the potential of CORTEX as a powerful XAI tool for scenarios that require the generation of clear, human-understandable rules while maintaining good predictive performance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
530,624
1808.00197
MaxMin Linear Initialization for Fuzzy C-Means
Clustering is an extensive research area in data science. The aim of clustering is to discover groups and to identify interesting patterns in datasets. Crisp (hard) clustering considers that each data point belongs to one and only one cluster. However, it is inadequate as some data points may belong to several clusters, as is the case in text categorization. Thus, we need more flexible clustering. Fuzzy clustering methods, where each data point can belong to several clusters, are an interesting alternative. Yet, seeding iterative fuzzy algorithms to achieve high quality clustering is an issue. In this paper, we propose a new linear and efficient initialization algorithm MaxMin Linear to deal with this problem. Then, we validate our theoretical results through extensive experiments on a variety of numerical real-world and artificial datasets. We also test several validity indices, including a new validity index that we propose, Transformed Standardized Fuzzy Difference (TSFD).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
104,332
2303.05936
Learning Decoupled Multi-touch Force Estimation, Localization and Stretch for Soft Capacitive E-skin
Distributed sensor arrays capable of detecting multiple spatially distributed stimuli are considered an important element in the realisation of exteroceptive and proprioceptive soft robots. This paper expands upon the previously presented idea of decoupling the measurements of pressure and location of a local indentation from global deformation, using the overall stretch experienced by a soft capacitive e-skin. We employed machine learning methods to decouple and predict these highly coupled deformation stimuli, collecting data from a soft sensor e-skin which was then fed to a machine learning system comprising a linear regressor, a Gaussian process regressor, an SVM, and a random forest classifier for stretch, force, detection, and localisation, respectively. We also studied how the localisation and forces are affected when two forces are applied simultaneously. Soft sensor arrays aided by appropriately chosen machine learning techniques can pave the way to e-skins capable of deciphering multi-modal stimuli in soft robots.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
350,639
2309.11735
FleXstage: Lightweight Magnetically Levitated Precision Stage with Over-Actuation towards High-Throughput IC Manufacturing
Precision motion stages play a critical role in various manufacturing and inspection equipment, for example, the wafer/reticle scanning in photolithography scanners and positioning stages in wafer inspection systems. To meet the growing demand for higher throughput in chip manufacturing and inspection, it is critical to create new precision motion stages with higher acceleration capability and high control bandwidth, which calls for the development of lightweight precision stages. However, in today's precision motion systems, only the rigid-body motion of the system is under control, and the flexible dynamics are in open loop. For these systems, the motion control bandwidth is limited by the first structural resonance frequency of the stage, which enforces a fundamental trade-off between the stage's bandwidth and acceleration capability. Aiming to overcome this trade-off, we have introduced a sequential structure and control design framework for lightweight stages in which the low-frequency flexible modes of the stage are under active control. To facilitate the controller design, we further propose to minimize the resonance frequency of the stage's mode being controlled and to maximize the resonance frequency of the uncontrolled mode. The system's control bandwidth is placed in between the resonance frequencies. This paper presents the design, optimization, building, and experimental evaluation of a lightweight magnetically levitated planar stage, which we call FleXstage, with the first flexible mode actively controlled via over-actuation. Simulations show the proposed design is highly promising in enabling lightweight stages without sacrificing control bandwidth. We have some preliminary results and are still working on the experimental evaluation of the closed-loop system; the results will be presented in the oral presentation.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
393,522
2005.07031
Temporal signals to images: Monitoring the condition of industrial assets with deep learning image processing algorithms
The ability to detect anomalies in time series is considered highly valuable in numerous application domains. The sequential nature of time series objects is responsible for an additional feature complexity, ultimately requiring specialized approaches in order to solve the task. Essential characteristics of time series, situated outside the time domain, are often difficult to capture with state-of-the-art anomaly detection methods when no transformations have been applied to the time series. Inspired by the success of deep learning methods in computer vision, several studies have proposed transforming time series into image-like representations, used as inputs for deep learning models, and have led to very promising results in classification tasks. In this paper, we first review the signal to image encoding approaches found in the literature. Second, we propose modifications to some of their original formulations to make them more robust to the variability in large datasets. Third, we compare them on the basis of a common unsupervised task to demonstrate how the choice of the encoding can impact the results when used in the same deep learning architecture. We thus provide a comparison between six encoding algorithms with and without the proposed modifications. The selected encoding methods are Gramian Angular Field, Markov Transition Field, recurrence plot, grey scale encoding, spectrogram, and scalogram. We also compare the results achieved with the raw signal used as input for another deep learning model. We demonstrate that some encodings have a competitive advantage and might be worth considering within a deep learning framework. The comparison is performed on a dataset collected and released by Airbus SAS, containing highly complex vibration measurements from real helicopter flight tests. The different encodings provide competitive results for anomaly detection.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,171
1711.07566
Neural 3D Mesh Renderer
For modeling the 3D world behind 2D images, which 3D representation is most appropriate? A polygon mesh is a promising candidate for its compactness and geometric properties. However, it is not straightforward to model a polygon mesh from 2D images using neural networks because the conversion from a mesh to an image, or rendering, involves a discrete operation called rasterization, which prevents back-propagation. Therefore, in this work, we propose an approximate gradient for rasterization that enables the integration of rendering into neural networks. Using this renderer, we perform single-image 3D mesh reconstruction with silhouette image supervision and our system outperforms the existing voxel-based approach. Additionally, we perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream, with 2D supervision for the first time. These applications demonstrate the potential of the integration of a mesh renderer into neural networks and the effectiveness of our proposed renderer.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
85,015
1811.00430
GA Based Q-Attack on Community Detection
Community detection plays an important role in social networks, since it can help to naturally divide the network into smaller parts so as to simplify network analysis. On the other hand, it raises the concern that individual information may be over-mined, and the concept of community deception has thus been proposed to protect individual privacy on social networks. Here, we introduce and formalize the problem of community detection attack and develop efficient strategies to attack community detection algorithms by rewiring a small number of connections, leading to individual privacy protection. In particular, we first give two heuristic attack strategies, i.e., Community Detection Attack (CDA) and Degree Based Attack (DBA), as baselines, utilizing the information of detected community structure and node degree, respectively. We then propose a Genetic Algorithm (GA) based Q-Attack, where the modularity Q is used to design the fitness function. We launch community detection attacks based on the above three strategies against three modularity-based community detection algorithms on two social networks. By comparison, our Q-Attack method achieves much better attack effects than CDA and DBA, in terms of the larger reduction of both modularity Q and Normalized Mutual Information (NMI). Besides, we find that the adversarial networks obtained by Q-Attack on a specific community detection algorithm can still be effective on others, no matter whether they are modularity based or not, indicating its strong transferability.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
112,099
2412.11506
Glimpse: Enabling White-Box Methods to Use Proprietary Models for Zero-Shot LLM-Generated Text Detection
Advanced large language models (LLMs) can generate text almost indistinguishable from human-written text, highlighting the importance of LLM-generated text detection. However, current zero-shot techniques face challenges as white-box methods are restricted to using weaker open-source LLMs, and black-box methods are limited by partial observation from stronger proprietary LLMs. It seems impossible to enable white-box methods to use proprietary models because API-level access to the models neither provides full predictive distributions nor inner embeddings. To traverse the divide, we propose **Glimpse**, a probability distribution estimation approach, predicting the full distributions from partial observations. Despite the simplicity of Glimpse, we successfully extend white-box methods like Entropy, Rank, Log-Rank, and Fast-DetectGPT to the latest proprietary models. Experiments show that Glimpse with Fast-DetectGPT and GPT-3.5 achieves an average AUROC of about 0.95 on five of the latest source models, improving the score by 51% relative to the remaining space of the open source baseline. It demonstrates that the latest LLMs can effectively detect their own outputs, suggesting that advanced LLMs may be the best shield against themselves. We release our code and data at https://github.com/baoguangsheng/glimpse.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
517,447
2210.05022
Dynamic Gap: Safe Gap-based Navigation in Dynamic Environments
This paper extends the family of gap-based local planners to unknown dynamic environments through generating provable collision-free properties for hierarchical navigation systems. Existing perception-informed local planners that operate in dynamic environments rely on emergent or empirical robustness for collision avoidance as opposed to performing formal analysis of dynamic obstacles. In addition to this, the obstacle tracking that is performed in these existing planners is often achieved with respect to a global inertial frame, subjecting such tracking estimates to transformation errors from odometry drift. The proposed local planner, dynamic gap, shifts the tracking paradigm to modeling how the free space, represented as gaps, evolves over time. Gap crossing and closing conditions are developed to aid in determining the feasibility of passage through gaps, and a breadth of simulation benchmarking is performed against other navigation planners in the literature, where the proposed dynamic gap planner achieves the highest success rate out of all planners tested in all environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
322,666
2102.03613
Linear Matrix Inequality Approaches to Koopman Operator Approximation
The regression problem associated with finding a matrix approximation of the Koopman operator from data is considered. The regression problem is formulated as a convex optimization problem subject to linear matrix inequality (LMI) constraints. Doing so allows for additional LMI constraints to be incorporated into the regression problem. In particular, asymptotic stability constraints, regularization using matrix norms, and even regularization using system norms can be easily incorporated into the regression problem.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
218,811
1703.04103
Detection of Human Rights Violations in Images: Can Convolutional Neural Networks help?
After setting the performance benchmarks for image, video, speech and audio processing, deep convolutional networks have been core to the greatest advances in image recognition tasks in recent times. This raises the question of whether there is any benefit in targeting these remarkable deep architectures with the unattempted task of recognising human rights violations through digital images. Under this perspective, we introduce a new, well-sampled human rights-centric dataset called Human Rights Understanding (HRUN). We conduct a rigorous evaluation on a common ground by combining this dataset with different state-of-the-art deep convolutional architectures in order to achieve recognition of human rights violations. Experimental results on the HRUN dataset have shown that the best performing CNN architectures can achieve up to 88.10\% mean average precision. Additionally, our experiments demonstrate that increasing the size of the training samples is crucial for achieving an improvement in mean average precision, principally when utilising very deep networks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,837
2208.00050
Generating Multiple 4D Expression Transitions by Learning Face Landmark Trajectories
In this work, we address the problem of 4D facial expressions generation. This is usually addressed by animating a neutral 3D face to reach an expression peak, and then returning to the neutral state. In the real world though, people show more complex expressions, and switch from one expression to another. We thus propose a new model that generates transitions between different expressions, and synthesizes long and composed 4D expressions. This involves three sub-problems: (i) modeling the temporal dynamics of expressions, (ii) learning transitions between them, and (iii) deforming a generic mesh. We propose to encode the temporal evolution of expressions using the motion of a set of 3D landmarks, that we learn to generate by training a manifold-valued GAN (Motion3DGAN). To allow the generation of composed expressions, this model accepts two labels encoding the starting and the ending expressions. The final sequence of meshes is generated by a Sparse2Dense mesh Decoder (S2D-Dec) that maps the landmark displacements to a dense, per-vertex displacement of a known mesh topology. By explicitly working with motion trajectories, the model is totally independent of the identity. Extensive experiments on five public datasets show that our proposed approach brings significant improvements with respect to previous solutions, while retaining good generalization to unseen data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
310,725
2009.03162
Improving colonoscopy lesion classification using semi-supervised deep learning
While data-driven approaches excel at many image analysis tasks, the performance of these approaches is often limited by a shortage of annotated data available for training. Recent work in semi-supervised learning has shown that meaningful representations of images can be obtained from training with large quantities of unlabeled data, and that these representations can improve the performance of supervised tasks. Here, we demonstrate that an unsupervised jigsaw learning task, in combination with supervised training, results in up to a 9.8% improvement in correctly classifying lesions in colonoscopy images when compared to a fully-supervised baseline. We additionally benchmark improvements in domain adaptation and out-of-distribution detection, and demonstrate that semi-supervised learning outperforms supervised learning in both cases. In colonoscopy applications, these metrics are important given the skill required for endoscopic assessment of lesions, the wide variety of endoscopy systems in use, and the homogeneity that is typical of labeled datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
194,760
2004.08947
Desmoking laparoscopy surgery images using an image-to-image translation guided by an embedded dark channel
In laparoscopic surgery, the visibility in the image can be severely degraded by the smoke caused by the $CO_2$ injection, and dissection tools, thus reducing the visibility of organs and tissues. This lack of visibility increases the surgery time and even the probability of mistakes made by the surgeon, thus producing negative consequences for the patient's health. In this paper, a novel computational approach to remove the smoke effects is introduced. The proposed method is based on an image-to-image conditional generative adversarial network in which a dark channel is used as an embedded guide mask. Obtained experimental results are evaluated and compared quantitatively with other desmoking and dehazing state-of-art methods using the metrics of the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) index. Based on these metrics, it is found that the proposed method has improved performance compared to the state-of-the-art. Moreover, our method processes 92 frames per second, and thus, it can be applied in a real-time medical system through an embedded device.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,217
2301.08146
What's happening in your neighborhood? A Weakly Supervised Approach to Detect Local News
Local news articles are a subset of news that impact users in a geographical area, such as a city, county, or state. Detecting local news (Step 1) and subsequently deciding its geographical location as well as radius of impact (Step 2) are two important steps towards accurate local news recommendation. Naive rule-based methods, such as detecting city names from the news title, tend to give erroneous results due to a lack of understanding of the news content. Empowered by the latest development in natural language processing, we develop an integrated pipeline that enables automatic local news detection and content-based local news recommendations. In this paper, we focus on Step 1 of the pipeline, which highlights: (1) a weakly supervised framework incorporated with domain knowledge and auto data processing, and (2) scalability to multi-lingual settings. Compared with the Stanford CoreNLP NER model, our pipeline has higher precision and recall evaluated on a real-world and human-labeled dataset. This pipeline has the potential to deliver more precise local news to users, help local businesses get more exposure, and give people more information about their neighborhood safety.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
341,115
2203.04430
The Impact of Heavy-Duty Vehicle Electrification on Large Power Grids: a Synthetic Texas Case Study
The electrification of heavy-duty vehicles (HDEVs) is a nascent and rapidly emerging avenue for decarbonization of the transportation sector. In this paper, we examine the impacts of increased vehicle electrification on the power grid infrastructure, with particular focus on HDEVs. We utilize a synthetic representation of the 2000-bus Texas transmission grid, and realistic representations of multiple distribution grids in Travis county, Texas, as well as transit data pertaining to HDEVs, to uncover the consequences of HDEV electrification, and expose the limitations imposed by existing electric grid infrastructure. Our analysis reveals that grid-wide voltage problems that are spatiotemporally correlated with the mobility of HDEVs may occur even at modest penetration levels. In fact, we find that as little as 11% of heavy duty vehicles in Texas charging simultaneously can lead to significant voltage violations on the transmission network that compromise grid reliability. Furthermore, we find that just a few dozen EVs charging simultaneously can lead to voltage violations at the distribution level.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
284,464
1603.08616
Submodular Variational Inference for Network Reconstruction
In real-world and online social networks, individuals receive and transmit information in real time. Cascading information transmissions (e.g. phone calls, text messages, social media posts) may be understood as a realization of a diffusion process operating on the network, and its branching path can be represented by a directed tree. The process only traverses and thus reveals a limited portion of the edges. The network reconstruction/inference problem is to infer the unrevealed connections. Most existing approaches derive a likelihood and attempt to find the network topology maximizing the likelihood, a problem that is highly intractable. In this paper, we focus on the network reconstruction problem for a broad class of real-world diffusion processes, exemplified by a network diffusion scheme called respondent-driven sampling (RDS). We prove that under realistic and general models of network diffusion, the posterior distribution of an observed RDS realization is a Bayesian log-submodular model. We then propose VINE (Variational Inference for Network rEconstruction), a novel, accurate, and computationally efficient variational inference algorithm, for the network reconstruction problem under this model. Crucially, we do not assume any particular probabilistic model for the underlying network. VINE recovers any connected graph with high accuracy as shown by our experimental results on real-life networks.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
true
53,806
1902.09884
Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
The field of few-shot learning has been laboriously explored in the supervised setting, where per-class labels are available. On the other hand, the unsupervised few-shot learning setting, where no labels of any kind are required, has seen little investigation. We propose a method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data. We randomly label a random subset of images from an unlabeled dataset to generate a support set. Then by applying data augmentation on the support set's images, and reusing the support set's labels, we obtain a target set. The resulting few-shot tasks can be used to train any standard meta-learning framework. Once trained, such a model can be directly applied on small real-labeled datasets without any changes or fine-tuning required. In our experiments, the learned models achieve good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
122,537
2405.03541
RepVGG-GELAN: Enhanced GELAN with VGG-STYLE ConvNets for Brain Tumour Detection
Object detection algorithms, particularly those based on YOLO, have demonstrated remarkable efficiency in balancing speed and accuracy. However, their application in brain tumour detection remains underexplored. This study proposes RepVGG-GELAN, a novel YOLO architecture enhanced with RepVGG, a reparameterized convolutional approach for object detection tasks, particularly focusing on brain tumour detection within medical images. RepVGG-GELAN leverages the RepVGG architecture to improve both speed and accuracy in detecting brain tumours. Integrating RepVGG into the YOLO framework aims to achieve a balance between computational efficiency and detection performance. This study includes a spatial pyramid pooling-based Generalized Efficient Layer Aggregation Network (GELAN) architecture which further enhances the capability of RepVGG. Experimental evaluation conducted on a brain tumour dataset demonstrates the effectiveness of RepVGG-GELAN, surpassing the existing RCS-YOLO in terms of precision and speed. Specifically, RepVGG-GELAN achieves an increased precision of 4.91% and an increased AP50 of 2.54% over the latest existing approach while operating at 240.7 GFLOPs. The proposed RepVGG-GELAN with GELAN architecture presents promising results, establishing itself as a state-of-the-art solution for accurate and efficient brain tumour detection in medical images. The implementation code is publicly available at https://github.com/ThensiB/RepVGG-GELAN.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
452,220
2311.12992
FollowMe: a Robust Person Following Framework Based on Re-Identification and Gestures
Human-robot interaction (HRI) has become a crucial enabler in houses and industries for facilitating operational flexibility. When it comes to mobile collaborative robots, this flexibility can be further increased due to the autonomous mobility and navigation capacity of the robotic agents, expanding their workspace and, consequently, the personalizable assistance they can provide to the human operators. This however requires that the robot is capable of detecting and identifying the human counterpart in all stages of the collaborative task, and in particular while following a human in crowded workplaces. To respond to this need, we developed a unified perception and navigation framework, which enables the robot to identify and follow a target person using a combination of visual Re-Identification (Re-ID), hand gesture detection, and collision-free navigation. The Re-ID module can autonomously learn the features of a target person and use the acquired knowledge to visually re-identify the target. The navigation stack is used to follow the target avoiding obstacles and other individuals in the environment. Experiments are conducted with a few subjects in a laboratory setting where some unknown dynamic obstacles are introduced.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
409,579
1712.04182
A Generic Model for Swarm Intelligence and Its Validations
The modeling of emergent swarm intelligence constitutes a major challenge and it has been tackled in a number of different ways. However, existing approaches fail to capture the nature of swarm intelligence and they are either too abstract for practical application or not generic enough to describe the various types of emergence phenomena. In this paper, a contradiction-centric model for swarm intelligence is proposed, in which individuals determine their behaviors based on their internal contradictions whilst they associate and interact to update their contradictions. The model hypothesizes that 1) the emergence of swarm intelligence is rooted in the development of individuals' internal contradictions and the interactions taking place between individuals and the environment, and 2) swarm intelligence is essentially a combinative reflection of the configurations of individuals' internal contradictions and the distributions of these contradictions across individuals. The model is formally described and five swarm intelligence systems are studied to illustrate its broad applicability. The studies confirm the generic character of the model and its effectiveness for describing the emergence of various kinds of swarm intelligence; and they also demonstrate that the model is straightforward to apply, without the need for complicated computations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
86,569
2103.03729
Data-Driven Short-Term Voltage Stability Assessment Based on Spatial-Temporal Graph Convolutional Network
Post-fault dynamics of short-term voltage stability (SVS) present spatial-temporal characteristics, but the existing data-driven methods for online SVS assessment fail to incorporate such characteristics into their models effectively. Confronted with this dilemma, this paper develops a novel spatial-temporal graph convolutional network (STGCN) to address this problem. The proposed STGCN utilizes graph convolution to integrate network topology information into the learning model to exploit spatial information. Then, it adopts one-dimensional convolution to exploit temporal information. In this way, it models the spatial-temporal characteristics of SVS with complete convolutional structures. After that, a node layer and a system layer are strategically designed in the STGCN for SVS assessment. The proposed STGCN incorporates the characteristics of SVS into the data-driven classification model. It can result in higher assessment accuracy, better robustness and adaptability than conventional methods. Besides, parameters in the system layer can provide valuable information about the influences of individual buses on SVS. Test results on the real-world Guangdong Power Grid in South China verify the effectiveness of the proposed network.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
223,393
2304.08235
A Platform-Agnostic Deep Reinforcement Learning Framework for Effective Sim2Real Transfer towards Autonomous Driving
Deep Reinforcement Learning (DRL) has shown remarkable success in solving complex tasks across various research fields. However, transferring DRL agents to the real world is still challenging due to the significant discrepancies between simulation and reality. To address this issue, we propose a robust DRL framework that leverages platform-dependent perception modules to extract task-relevant information and train a lane-following and overtaking agent in simulation. This framework facilitates the seamless transfer of the DRL agent to new simulated environments and the real world with minimal effort. We evaluate the performance of the agent in various driving scenarios in both simulation and the real world, and compare it to human players and the PID baseline in simulation. Our proposed framework significantly reduces the gaps between different platforms and the Sim2Real gap, enabling the trained agent to achieve similar performance in both simulation and the real world, driving the vehicle effectively.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
358,635
2102.07337
Machine Learning on Camera Images for Fast mmWave Beamforming
Perfect alignment in chosen beam sectors at both transmit- and receive-nodes is required for beamforming in mmWave bands. Current 802.11ad WiFi and emerging 5G cellular standards spend up to several milliseconds exploring different sector combinations to identify the beam pair with the highest SNR. In this paper, we propose a machine learning (ML) approach with two sequential convolutional neural networks (CNN) that uses out-of-band information, in the form of camera images, to (i) rapidly identify the locations of the transmitter and receiver nodes, and then (ii) return the optimal beam pair. We experimentally validate this intriguing concept for indoor settings using the NI 60GHz mmwave transceiver. Our results reveal that our ML approach reduces beamforming related exploration time by 93% under different ambient lighting conditions, with an error of less than 1% compared to the time-intensive deterministic method defined by the current standards.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
220,073
1604.07952
Zero-shot object prediction using semantic scene knowledge
This work focuses on the semantic relations between scenes and objects for visual object recognition. Semantic knowledge can be a powerful source of information especially in scenarios with few or no annotated training samples. These scenarios are referred to as zero-shot or few-shot recognition and often build on visual attributes. Here, instead of relying on various visual attributes, a more direct way is pursued: after recognizing the scene that is depicted in an image, semantic relations between scenes and objects are used for predicting the presence of objects in an unsupervised manner. Most importantly, relations between scenes and objects can easily be obtained from external sources such as large scale text corpora from the web and, therefore, do not require tremendous manual labeling efforts. It will be shown that in cluttered scenes, where visual recognition is difficult, scene knowledge is an important cue for predicting objects.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
55,152
2303.07295
Meet in the Middle: A New Pre-training Paradigm
Most language models (LMs) are trained and applied in an autoregressive left-to-right fashion, assuming that the next token only depends on the preceding ones. However, this assumption ignores the potential benefits of using the full sequence information during training, and the possibility of having context from both sides during inference. In this paper, we propose a new pre-training paradigm with techniques that jointly improve the training data efficiency and the capabilities of the LMs in the infilling task. The first is a training objective that aligns the predictions of a left-to-right LM with those of a right-to-left LM, trained on the same data but in reverse order. The second is a bidirectional inference procedure that enables both LMs to meet in the middle. We show the effectiveness of our pre-training paradigm with extensive experiments on both programming and natural language models, outperforming strong baselines.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
351,206
1705.01332
LiDAR-based Control of Autonomous Rotorcraft for the Inspection of Pier-like Structures: Proofs
This is a complementary document to the paper presented in [1], to provide more detailed proofs for some results. The main paper addresses the problem of trajectory tracking control of autonomous rotorcraft in operation scenarios where only relative position measurements obtained from LiDAR sensors are possible. The proposed approach defines an alternative kinematic model, directly based on LiDAR measurements, and uses a trajectory-dependent error space to express the dynamic model of the vehicle. An LPV representation with piecewise affine dependence on the parameters is adopted to describe the error dynamics over a set of predefined operating regions, and a continuous-time $H_2$ control problem is solved using LMIs and implemented within the scope of gain-scheduling control theory.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
72,828
2406.01206
On the Stability of Networked Nonlinear Negative Imaginary Systems with Applications to Electrical Power Systems
In the transition to achieving net zero emissions, it has been suggested that a substantial expansion of electric power grids will be necessary to support emerging renewable energy zones. In this paper, we propose employing battery-based feedback control and nonlinear negative imaginary (NI) systems theory to reduce the need for such expansion. By formulating a novel Lur\'e-Postnikov-like Lyapunov function, stability results are presented for the feedback interconnection of two single nonlinear NI systems, while output feedback consensus results are established for the feedback interconnection of two networked nonlinear NI systems based on a network topology. This theoretical framework underpins our design of battery-based control in power transmission systems. We demonstrate that the power grid can be gradually transitioned into the proposed NI systems, one transmission line at a time.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
460,221
2412.20682
Learning to Rank Pre-trained Vision-Language Models for Downstream Tasks
Vision language models (VLMs) like CLIP show stellar zero-shot capability on classification benchmarks. However, selecting the VLM with the highest performance on the unlabeled downstream task is non-trivial. Existing VLM selection methods focus on the class-name-only setting, relying on a supervised large-scale dataset and large language models, which may not be accessible or feasible during deployment. This paper introduces the problem of \textbf{unsupervised vision-language model selection}, where only unsupervised downstream datasets are available, with no additional information provided. To solve this problem, we propose a method termed Visual-tExtual Graph Alignment (VEGA), to select VLMs without any annotations by measuring the alignment of the VLM between the two modalities on the downstream task. VEGA is motivated by the pretraining paradigm of VLMs, which aligns features with the same semantics from the visual and textual modalities, thereby mapping both modalities into a shared representation space. Specifically, we first construct two graphs on the vision and textual features, respectively. VEGA is then defined as the overall similarity between the visual and textual graphs at both node and edge levels. Extensive experiments across three different benchmarks, covering a variety of application scenarios and downstream datasets, demonstrate that VEGA consistently provides reliable and accurate estimates of VLMs' performance on unlabeled downstream tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
521,317
1909.03582
Clickbait? Sensational Headline Generation with Auto-tuned Reinforcement Learning
Sensational headlines are headlines that capture people's attention and generate reader interest. Conventional abstractive headline generation methods, unlike human writers, do not optimize for maximal reader attention. In this paper, we propose a model that generates sensational headlines without labeled data. We first train a sensationalism scorer by classifying online headlines with many comments ("clickbait") against a baseline of headlines generated from a summarization model. The score from the sensationalism scorer is used as the reward for a reinforcement learner. However, maximizing the noisy sensationalism reward will generate unnatural phrases instead of sensational headlines. To effectively leverage this noisy reward, we propose a novel loss function, Auto-tuned Reinforcement Learning (ARL), to dynamically balance reinforcement learning (RL) with maximum likelihood estimation (MLE). Human evaluation shows that 60.8% of samples generated by our model are sensational, which is significantly better than the Pointer-Gen baseline and other RL models.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
144,535
2404.00394
Analysis of Fairness-promoting Optimization Schemes of Photovoltaic Curtailments for Voltage Regulation in Power Distribution Networks
Active power curtailment of photovoltaic (PV) generation is commonly exercised to mitigate over-voltage issues in power distribution networks. However, fairness concerns arise as certain PV plants may experience more significant curtailments than others depending on their locations within the network. Existing literature tackles this issue through fairness-promoting/aware optimization schemes. These schemes can be broadly categorized into two types. The first type maximizes an additional fairness objective along with the main objective of curtailment minimization. The second type is formulated as a feedback controller, where fairness is accounted for by assigning different weights (as feedback) in the curtailment minimization objective for each PV plant based on previous curtailment actions. In this work, we combine these two schemes and provide extensive analyses and comparisons of these two fairness schemes. We compare the performance in terms of fairness and net curtailments for several benchmark test networks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
442,900
2408.03872
Inter-Series Transformer: Attending to Products in Time Series Forecasting
Time series forecasting is an important task in many fields ranging from supply chain management to weather forecasting. Recently, Transformer neural network architectures have shown promising results in forecasting on common time series benchmark datasets. However, application to supply chain demand forecasting, which can have challenging characteristics such as sparsity and cross-series effects, has been limited. In this work, we explore the application of Transformer-based models to supply chain demand forecasting. In particular, we develop a new Transformer-based forecasting approach using a shared, multi-task per-time series network with an initial component applying attention across time series, to capture interactions and help address sparsity. We provide a case study applying our approach to successfully improve demand prediction for a medical device manufacturing company. To further validate our approach, we also apply it to public demand forecasting datasets and demonstrate competitive to superior performance compared to a variety of baseline and state-of-the-art forecast methods across the private and public datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
479,177
0901.1408
A Message-Passing Approach for Joint Channel Estimation, Interference Mitigation and Decoding
Channel uncertainty and co-channel interference are two major challenges in the design of wireless systems such as future generation cellular networks. This paper studies receiver design for a wireless channel model with both time-varying Rayleigh fading and strong co-channel interference of similar form as the desired signal. It is assumed that the channel coefficients of the desired signal can be estimated through the use of pilots, whereas no pilot for the interference signal is available, as is the case in many practical wireless systems. Because the interference process is non-Gaussian, treating it as Gaussian noise often leads to unacceptable performance. In order to exploit the statistics of the interference and correlated fading in time, an iterative message-passing architecture is proposed for joint channel estimation, interference mitigation and decoding. Each message takes the form of a mixture of Gaussian densities where the number of components is limited so that the overall complexity of the receiver is constant per symbol regardless of the frame and code lengths. Simulation of both coded and uncoded systems shows that the receiver performs significantly better than conventional receivers with linear channel estimation, and is robust with respect to mismatch in the assumed fading model.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,921
2003.10381
Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models
Ambiguity is inherently present in many machine learning tasks, but especially for sequential models seldom accounted for, as most only output a single prediction. In this work we propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data, which is of special importance, as often multiple futures are equally likely. Our approach can be applied to the most common recurrent architectures and can be used with any loss function. Additionally, we introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties and coincides with our intuitive understanding of correctness in the presence of multiple labels. We test our method on several experiments and across diverse tasks dealing with time series data, such as trajectory forecasting and maneuver prediction, achieving promising results.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
169,319
1710.07723
Generalized linear mixing model accounting for endmember variability
Endmember variability is an important factor for accurately unveiling vital information relating the pure materials and their distribution in hyperspectral images. Recently, the extended linear mixing model (ELMM) has been proposed as a modification of the linear mixing model (LMM) to consider endmember variability effects resulting mainly from illumination changes. In this paper, we further generalize the ELMM leading to a new model (GLMM) to account for more complex spectral distortions where different wavelength intervals can be affected unevenly. We also extend the existing methodology to jointly estimate the variability and the abundances for the GLMM. Simulations with real and synthetic data show that the unmixing process can benefit from the extra flexibility introduced by the GLMM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
82,970
2208.06117
Facial Expression Recognition and Image Description Generation in Vietnamese
This paper discusses a facial expression recognition model and a description generation model to build descriptive sentences for images and facial expressions of people in images. Our study shows that YOLOv5 achieves better results than a traditional CNN for all emotions on the KDEF dataset. In particular, the accuracies of the CNN and YOLOv5 models for emotion recognition are 0.853 and 0.938, respectively. A model for generating descriptions for images based on a merged architecture is proposed using VGG16 with the descriptions encoded over an LSTM model. YOLOv5 is also used to recognize dominant colors of objects in the images and correct the color words in the descriptions generated if it is necessary. If the description contains words referring to a person, we recognize the emotion of the person in the image. Finally, we combine the results of all models to create sentences that describe the visual content and the human emotions in the images. Experimental results on the Flickr8k dataset in Vietnamese achieve BLEU-1, BLEU-2, BLEU-3, BLEU-4 scores of 0.628; 0.425; 0.280; and 0.174, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
312,604
math/0603155
Vers une commande multivariable sans mod\`ele
A control strategy without any precise mathematical model is derived for linear or nonlinear systems which are assumed to be finite-dimensional. Two convincing numerical simulations are provided.
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
540,712
2305.11854
Multimodal Web Navigation with Instruction-Finetuned Foundation Models
The progress of autonomous web navigation has been hindered by the dependence on billions of exploratory interactions via online reinforcement learning, and domain-specific model designs that make it difficult to leverage generalization from rich out-of-domain data. In this work, we study data-driven offline training for web agents with vision-language foundation models. We propose an instruction-following multimodal agent, WebGUM, that observes both webpage screenshots and HTML pages and outputs web navigation actions, such as click and type. WebGUM is trained by jointly finetuning an instruction-finetuned language model and a vision encoder with temporal and local perception on a large corpus of demonstrations. We empirically demonstrate this recipe improves the agent's ability of grounded multimodal perception, HTML comprehension, and multi-step reasoning, outperforming prior works by a significant margin. On the MiniWoB, we improve over the previous best offline methods by more than 45.8%, even outperforming online-finetuned SoTA, humans, and GPT-4-based agent. On the WebShop benchmark, our 3-billion-parameter model achieves superior performance to the existing SoTA, PaLM-540B. Furthermore, WebGUM exhibits strong positive transfer to the real-world planning tasks on the Mind2Web. We also collect 347K high-quality demonstrations using our trained models, 38 times larger than prior work, and make them available to promote future research in this direction.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
365,721
1610.09995
Generating Sentiment Lexicons for German Twitter
Despite a substantial progress made in developing new sentiment lexicon generation (SLG) methods for English, the task of transferring these approaches to other languages and domains in a sound way still remains open. In this paper, we contribute to the solution of this problem by systematically comparing semi-automatic translations of common English polarity lists with the results of the original automatic SLG algorithms, which were applied directly to German data. We evaluate these lexicons on a corpus of 7,992 manually annotated tweets. In addition to that, we also collate the results of dictionary- and corpus-based SLG methods in order to find out which of these paradigms is better suited for the inherently noisy domain of social media. Our experiments show that semi-automatic translations notably outperform automatic systems (reaching a macro-averaged F1-score of 0.589), and that dictionary-based techniques produce much better polarity lists as compared to corpus-based approaches (whose best F1-scores run up to 0.479 and 0.419 respectively) even for the non-standard Twitter genre.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
63,146
2111.01590
Detect-and-Segment: a Deep Learning Approach to Automate Wound Image Segmentation
Chronic wounds significantly impact quality of life. If not properly managed, they can severely deteriorate. Image-based wound analysis could aid in objectively assessing the wound status by quantifying important features that are related to healing. However, the high heterogeneity of the wound types, image background composition, and capturing conditions challenge the robust segmentation of wound images. We present Detect-and-Segment (DS), a deep learning approach to produce wound segmentation maps with high generalization capabilities. In our approach, dedicated deep neural networks detected the wound position, isolated the wound from the uninformative background, and computed the wound segmentation map. We evaluated this approach using one data set with images of diabetic foot ulcers. For further testing, 4 supplemental independent data sets with larger variety of wound types from different body locations were used. The Matthews' correlation coefficient (MCC) improved from 0.29 when computing the segmentation on the full image to 0.85 when combining detection and segmentation in the same approach. When tested on the wound images drawn from the supplemental data sets, the DS approach increased the mean MCC from 0.17 to 0.85. Furthermore, the DS approach enabled the training of segmentation models with up to 90% less training data while maintaining the segmentation performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
264,603
2303.06468
Accurate Prediction of Global Mean Temperature through Data Transformation Techniques
It is important to predict how the Global Mean Temperature (GMT) will evolve in the next few decades. The ability to predict historical data is a necessary first step toward the actual goal of making long-range forecasts. This paper examines the advantage of statistical and simpler Machine Learning (ML) methods instead of directly using complex ML algorithms and Deep Learning Neural Networks (DNN). Often neglected data transformation methods prior to applying different algorithms have been used as a means of improving predictive accuracy. The GMT time series is treated both as a univariate time series and also cast as a regression problem. Some steps of data transformations were found to be effective. Various simple ML methods did as well or better than the more well-known ones showing merit in trying a large bouquet of algorithms as a first step. Fifty-six algorithms were subject to Box-Cox, Yeo-Johnson, and first-order differencing and compared with the absence of them. Predictions for the annual GMT testing data were better than that published so far, with the lowest RMSE value of 0.02 $^\circ$C. RMSE for five-year mean GMT values for the test data ranged from 0.00002 to 0.00036 $^\circ$C.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
350,857
1105.5545
Competing activation mechanisms in epidemics on networks
In contrast to previous common wisdom that epidemic activity in heterogeneous networks is dominated by the hubs with the largest number of connections, recent research has pointed out the role that the innermost, dense core of the network plays in sustaining epidemic processes. Here we show that the mechanism responsible of spreading depends on the nature of the process. Epidemics with a transient state are boosted by the innermost core. Contrarily, epidemics allowing a steady state present a dual scenario, where either the hub independently sustains activity and propagates it to the rest of the system, or, alternatively, the innermost network core collectively turns into the active state, maintaining it globally. In uncorrelated networks the former mechanism dominates if the degree distribution decays with an exponent larger than 5/2, and the latter otherwise. Topological correlations, rife in real networks, may perturb this picture, mixing the role of both mechanisms.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
10,547
2011.03372
FDNAS: Improving Data Privacy and Model Diversity in AutoML
To prevent the leakage of private information while enabling automated machine intelligence, there is an emerging trend to integrate federated learning and Neural Architecture Search (NAS). Although promising as it may seem, the coupling of difficulties from both two tenets makes the algorithm development quite challenging. In particular, how to efficiently search the optimal neural architecture directly from massive non-iid data of clients in a federated manner remains to be a hard nut to crack. To tackle this challenge, in this paper, by leveraging the advances in proxy-less NAS, we propose a Federated Direct Neural Architecture Search (FDNAS) framework that allows hardware-aware NAS from decentralized non-iid data of clients. To further adapt for various data distributions of clients, inspired by meta-learning, a cluster Federated Direct Neural Architecture Search (CFDNAS) framework is proposed to achieve client-aware NAS, in the sense that each client can learn a tailored deep learning model for its particular data distribution. Extensive experiments on real-world non-iid datasets show state-of-the-art accuracy-efficiency trade-offs for various hardware and data distributions of clients. Our codes will be released publicly upon paper acceptance.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
205,229
2112.13595
Depth estimation of endoscopy using sim-to-real transfer
In order to use the navigation system effectively, distance information sensors such as depth sensors are essential. Since depth sensors are difficult to use in endoscopy, many groups propose a method using convolutional neural networks. In this paper, the ground truth of the depth image and the endoscopy image is generated through endoscopy simulation using the colon model segmented by CT colonography. Photo-realistic simulation images can be created using a sim-to-real approach using cycleGAN for endoscopy images. By training the generated dataset, we propose a quantitative endoscopy depth estimation network. The proposed method represents a better-evaluated score than the existing unsupervised training-based results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
273,302
2001.03067
Domain-independent Extraction of Scientific Concepts from Research Articles
We examine the novel task of domain-independent scientific concept extraction from abstracts of scholarly articles and present two contributions. First, we suggest a set of generic scientific concepts that have been identified in a systematic annotation process. This set of concepts is utilised to annotate a corpus of scientific abstracts from 10 domains of Science, Technology and Medicine at the phrasal level in a joint effort with domain experts. The resulting dataset is used in a set of benchmark experiments to (a) provide baseline performance for this task, (b) examine the transferability of concepts between domains. Second, we present two deep learning systems as baselines. In particular, we propose active learning to deal with different domains in our task. The experimental results show that (1) a substantial agreement is achievable by non-experts after consultation with domain experts, (2) the baseline system achieves a fairly high F1 score, (3) active learning enables us to nearly halve the amount of required training data.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
159,876
2307.02106
SoK: Privacy-Preserving Data Synthesis
As the prevalence of data analysis grows, safeguarding data privacy has become a paramount concern. Consequently, there has been an upsurge in the development of mechanisms aimed at privacy-preserving data analyses. However, these approaches are task-specific; designing algorithms for new tasks is a cumbersome process. As an alternative, one can create synthetic data that is (ideally) devoid of private information. This paper focuses on privacy-preserving data synthesis (PPDS) by providing a comprehensive overview, analysis, and discussion of the field. Specifically, we put forth a master recipe that unifies two prominent strands of research in PPDS: statistical methods and deep learning (DL)-based methods. Under the master recipe, we further dissect the statistical methods into choices of modeling and representation, and investigate the DL-based methods by different generative modeling principles. To consolidate our findings, we provide comprehensive reference tables, distill key takeaways, and identify open problems in the existing literature. In doing so, we aim to answer the following questions: What are the design principles behind different PPDS methods? How can we categorize these methods, and what are the advantages and disadvantages associated with each category? Can we provide guidelines for method selection in different real-world scenarios? We proceed to benchmark several prominent DL-based methods on the task of private image synthesis and conclude that DP-MERF is an all-purpose approach. Finally, upon systematizing the work over the past decade, we identify future directions and call for actions from researchers.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
377,591
2103.12338
Consistency Analysis of the Closed-loop SRIVC Estimator
The Consistency of the Closed-Loop Simplified Refined Instrumental Variable method for Continuous-time system (CLSRIVC) is analysed based on sampled data. It is proven that the CLSRIVC estimator is not consistent when a continuous-time controller is used in the closed-loop.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
226,137
1409.6941
Individual risk in mean-field control models for decentralized control, with application to automated demand response
Flexibility of energy consumption can be harnessed for the purposes of ancillary services in a large power grid. In prior work by the authors a randomized control architecture is introduced for individual loads for this purpose. In examples it is shown that the control architecture can be designed so that control of the loads is easy at the grid level: Tracking of a balancing authority reference signal is possible, while ensuring that the quality of service (QoS) for each load is acceptable on average. The analysis was based on a mean field limit (as the number of loads approaches infinity), combined with an LTI-system approximation of the aggregate nonlinear model. This paper examines in depth the issue of individual risk in these systems. The main contributions of the paper are of two kinds: Risk is modeled and quantified: (i) The average performance is not an adequate measure of success. It is found empirically that a histogram of QoS is approximately Gaussian, and consequently each load will eventually receive poor service. (ii) The variance can be estimated from a refinement of the LTI model that includes a white-noise disturbance; variance is a function of the randomized policy, as well as the power spectral density of the reference signal. Additional local control can eliminate risk: (iii) The histogram of QoS is truncated through this local control, so that strict bounds on service quality are guaranteed. (iv) This has insignificant impact on the grid-level performance, beyond a modest reduction in capacity of ancillary service.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
36,282
1906.10973
Defending Adversarial Attacks by Correcting logits
Generating and eliminating adversarial examples has been an intriguing topic in the field of deep learning. While previous research verified that adversarial attacks are often fragile and can be defended via image-level processing, it remains unclear how high-level features are perturbed by such attacks. We investigate this issue from a new perspective, which purely relies on logits, the class scores before softmax, to detect and defend adversarial attacks. Our defender is a two-layer network trained on a mixed set of clean and perturbed logits, with the goal being recovering the original prediction. Upon a wide range of adversarial attacks, our simple approach shows promising results with relatively high accuracy in defense, and the defender can transfer across attackers with similar properties. More importantly, our defender can work in the scenarios that image data are unavailable, and enjoys high interpretability especially at the semantic level.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
136,559
1610.06067
Fairness as a Program Property
We explore the following question: Is a decision-making program fair, for some useful definition of fairness? First, we describe how several algorithmic fairness questions can be phrased as program verification problems. Second, we discuss an automated verification technique for proving or disproving fairness of decision-making programs with respect to a probabilistic model of the population.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
62,601
2310.04041
Observation-Guided Diffusion Probabilistic Models
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM), which effectively addresses the tradeoff between quality control and fast sampling. Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain in a principled way. This is achieved by introducing an additional loss term derived from the observation based on a conditional discriminator on noise level, which employs a Bernoulli distribution indicating whether its input lies on the (noisy) real manifold or not. This strategy allows us to optimize the more accurate negative log-likelihood induced in the inference stage especially when the number of function evaluations is limited. The proposed training scheme is also advantageous even when incorporated only into the fine-tuning process, and it is compatible with various fast inference strategies since our method yields better denoising networks using the exactly the same inference procedure without incurring extra computational cost. We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines. Our implementation is available at https://github.com/Junoh-Kang/OGDM_edm.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
397,516
2010.12363
Regret in Online Recommendation Systems
This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. In each round, a user, randomly picked from a population of $m$ users, requests a recommendation. The decision-maker observes the user and selects an item from a catalogue of $n$ items. Importantly, an item cannot be recommended twice to the same user. The probabilities that a user likes each item are unknown. The performance of the recommendation algorithm is captured through its regret, considering as a reference an Oracle algorithm aware of these probabilities. We investigate various structural assumptions on these probabilities: we derive for each structure regret lower bounds, and devise algorithms achieving these limits. Interestingly, our analysis reveals the relative weights of the different components of regret: the component due to the constraint of not presenting the same item twice to the same user, that due to learning the chances users like items, and finally that arising when learning the underlying structure.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
202,668
2412.17142
AI-Based Teat Shape and Skin Condition Prediction for Dairy Management
Dairy owners spend significant effort to keep their animals healthy. There is good reason to hope that technologies such as computer vision and artificial intelligence (AI) could reduce these costs, yet obstacles arise when adapting advanced tools to farming environments. In this work, we adapt AI tools to dairy cow teat localization, teat shape, and teat skin condition classifications. We also curate a data collection and analysis methodology for a Machine Learning (ML) pipeline. The resulting teat shape prediction model achieves a mean Average Precision (mAP) of 0.783, and the teat skin condition model achieves a mean average precision of 0.828. Our work leverages existing ML vision models to facilitate the individualized identification of teat health and skin conditions, applying AI to the dairy management industry.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
519,846
2210.11513
Learning Sample Reweighting for Accuracy and Adversarial Robustness
There has been great interest in enhancing the robustness of neural network classifiers to defend against adversarial perturbations through adversarial training, while balancing the trade-off between robust accuracy and standard accuracy. We propose a novel adversarial training framework that learns to reweight the loss associated with individual training samples based on a notion of class-conditioned margin, with the goal of improving robust generalization. We formulate weighted adversarial training as a bilevel optimization problem with the upper-level problem corresponding to learning a robust classifier, and the lower-level problem corresponding to learning a parametric function that maps from a sample's \textit{multi-class margin} to an importance weight. Extensive experiments demonstrate that our approach consistently improves both clean and robust accuracy compared to related methods and state-of-the-art baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
325,341
1807.04020
Improved SVD-based Initialization for Nonnegative Matrix Factorization using Low-Rank Correction
Due to the iterative nature of most nonnegative matrix factorization (\textsc{NMF}) algorithms, initialization is a key aspect as it significantly influences both the convergence and the final solution obtained. Many initialization schemes have been proposed for NMF, among which one of the most popular class of methods are based on the singular value decomposition (SVD). However, these SVD-based initializations do not satisfy a rather natural condition, namely that the error should decrease as the rank of factorization increases. In this paper, we propose a novel SVD-based \textsc{NMF} initialization to specifically address this shortcoming by taking into account the SVD factors that were discarded to obtain a nonnegative initialization. This method, referred to as nonnegative SVD with low-rank correction (NNSVD-LRC), allows us to significantly reduce the initial error at a negligible additional computational cost using the low-rank structure of the discarded SVD factors. NNSVD-LRC has two other advantages compared to previous SVD-based initializations: (1) it provably generates sparse initial factors, and (2) it is faster as it only requires to compute a truncated SVD of rank $\lceil r/2 + 1 \rceil$ where $r$ is the factorization rank of the sought NMF decomposition (as opposed to a rank-$r$ truncated SVD for other methods). We show on several standard dense and sparse data sets that our new method competes favorably with state-of-the-art SVD-based initializations for NMF.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
102,652
2310.11569
When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting, where the goal is to model and forecast multivariate time-series that have underlying hierarchical relations. Most methods focus on point predictions and do not provide well-calibrated probabilistic forecasts distributions. Recent state-of-art probabilistic forecasting methods also impose hierarchical relations on point predictions and samples of distribution which does not account for coherency of forecast distributions. Previous works also silently assume that datasets are always consistent with given hierarchical relations and do not adapt to real-world datasets that show deviation from this assumption. We close both these gap and propose PROFHiT, which is a fully probabilistic hierarchical forecasting model that jointly models forecast distribution of entire hierarchy. PROFHiT uses a flexible probabilistic Bayesian approach and introduces a novel Distributional Coherency regularization to learn from hierarchical relations for entire forecast distribution that enables robust and calibrated forecasts as well as adapt to datasets of varying hierarchical consistency. On evaluating PROFHiT over wide range of datasets, we observed 41-88% better performance in accuracy and significantly better calibration. Due to modeling the coherency over full distribution, we observed that PROFHiT can robustly provide reliable forecasts even if up to 10% of input time-series data is missing where other methods' performance severely degrade by over 70%.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
400,686
2310.02568
Stand for Something or Fall for Everything: Predict Misinformation Spread with Stance-Aware Graph Neural Networks
Although pervasive spread of misinformation on social media platforms has become a pressing challenge, existing platform interventions have shown limited success in curbing its dissemination. In this study, we propose a stance-aware graph neural network (stance-aware GNN) that leverages users' stances to proactively predict misinformation spread. As different user stances can form unique echo chambers, we customize four information passing paths in stance-aware GNN, while the trainable attention weights provide explainability by highlighting each structure's importance. Evaluated on a real-world dataset, stance-aware GNN outperforms benchmarks by 32.65% and exceeds advanced GNNs without user stance by over 4.69%. Furthermore, the attention weights indicate that users' opposition stances have a higher impact on their neighbors' behaviors than supportive ones, which function as social correction to halt misinformation propagation. Overall, our study provides an effective predictive model for platforms to combat misinformation, and highlights the impact of user stances in the misinformation propagation.
false
false
false
true
true
false
true
false
false
false
false
false
false
true
false
false
false
false
396,907
2210.16099
An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization
Molecule optimization is an important problem in chemical discovery and has been approached using many techniques, including generative modeling, reinforcement learning, genetic algorithms, and much more. Recent work has also applied zeroth-order (ZO) optimization, a subset of gradient-free optimization that solves problems similarly to gradient-based methods, for optimizing latent vector representations from an autoencoder. In this paper, we study the effectiveness of various ZO optimization methods for optimizing molecular objectives, which are characterized by variable smoothness, infrequent optima, and other challenges. We provide insights on the robustness of various ZO optimizers in this setting, show the advantages of ZO sign-based gradient descent (ZO-signGD), discuss how ZO optimization can be used practically in realistic discovery tasks, and demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite. Code is available at: https://github.com/IBM/QMO-bench.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
327,222
1801.01179
Inferring propagation paths for sparsely observed perturbations on complex networks
In a complex system, perturbations propagate by following paths on the network of interactions among the system's units. In contrast to what happens with the spreading of epidemics, observations of general perturbations are often very sparse in time (there is a single observation of the perturbed system) and in "space" (only a few perturbed and unperturbed units are observed). A major challenge in many areas, from biology to the social sciences, is to infer the propagation paths from observations of the effects of perturbation under these sparsity conditions. We address this problem and show that it is possible to go beyond the usual approach of using the shortest paths connecting the known perturbed nodes. Specifically, we show that a simple and general probabilistic model, which we solved using belief propagation, provides fast and accurate estimates of the probabilities of nodes being perturbed.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
87,687
2311.10359
FIKIT: Priority-Based Real-time GPU Multi-tasking Scheduling with Kernel Identification
Highly parallelized workloads like machine learning training, inference, and general HPC tasks are greatly accelerated using GPU devices. In a cloud computing cluster, serving a GPU's computation power through multi-task sharing is in high demand, since there are always more task requests than GPUs available. Existing GPU sharing solutions focus on reducing task-level waiting time or task-level switching costs when multiple jobs compete for a single GPU. Non-stop computation requests come with different priorities, which have a non-symmetric impact on the QoS of sharing a GPU device. Existing work missed the kernel-level optimization opportunity brought by this setting. To address this problem, we present a novel kernel-level scheduling strategy called FIKIT: Filling Inter-kernel Idle Time. FIKIT incorporates task-level priority information, fine-grained kernel identification, and kernel measurement, allowing low-priority tasks to execute during high-priority tasks' inter-kernel idle time. This fills the GPU's device runtime fully and reduces the overall impact of GPU sharing on cloud services. Across a set of ML models, the FIKIT-based inference system accelerated high-priority tasks by 1.32 to 16.41 times compared to the JCT in GPU sharing mode, and more than half of the cases are accelerated by more than 3.4 times. Alternatively, under preemptive sharing, the low-priority tasks have a JCT comparable to the default GPU sharing mode, with a ratio of 0.86 to 1. We further limit the kernel measurement and fine-grained kernel scheduling overhead at runtime to less than 5%.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
408,506
1906.06931
Of Cores: A Partial-Exploration Framework for Markov Decision Processes
We introduce a framework for approximate analysis of Markov decision processes (MDP) with bounded-, unbounded-, and infinite-horizon properties. The main idea is to identify a "core" of an MDP, i.e., a subsystem where we provably remain with high probability, and to avoid computation on the less relevant rest of the state space. Although we identify the core using simulations and statistical techniques, it allows for rigorous error bounds in the analysis. Consequently, we obtain efficient analysis algorithms based on partial exploration for various settings, including the challenging case of strongly connected systems.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
true
135,467
1808.01102
Hallucinating Agnostic Images to Generalize Across Domains
The ability to generalize across visual domains is crucial for the robustness of artificial recognition systems. Although many training sources may be available in real contexts, access to even unlabeled target samples cannot be taken for granted, which makes standard unsupervised domain adaptation methods inapplicable in the wild. In this work we investigate how to exploit multiple sources by hallucinating a deep visual domain composed of images, possibly unrealistic, able to maintain categorical knowledge while discarding specific source styles. The produced agnostic images are the result of a deep architecture that applies pixel adaptation on the original source data guided by two adversarial domain classifier branches at the image and feature level. Our approach is conceived to learn only from source data, but it seamlessly extends to the use of unlabeled target samples. Remarkable results for both multi-source domain adaptation and domain generalization support the power of hallucinating agnostic images in this framework.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
104,509
1602.03814
Enabling Basic Normative HRI in a Cognitive Robotic Architecture
Collaborative human activities are grounded in social and moral norms, which humans consciously and subconsciously use to guide and constrain their decision-making and behavior, thereby strengthening their interactions and preventing emotional and physical harm. This type of norm-based processing is also critical for robots in many human-robot interaction scenarios (e.g., when helping elderly and disabled persons in assisted living facilities, or assisting humans in assembly tasks in factories or even the space station). In this position paper, we will briefly describe how several components in an integrated cognitive architecture can be used to implement processes that are required for normative human-robot interactions, especially in collaborative tasks where actions and situations could potentially be perceived as threatening and thus need a change in course of action to mitigate the perceived threats.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
52,055
2205.07872
ScAN: Suicide Attempt and Ideation Events Dataset
Suicide is an important public health concern and one of the leading causes of death worldwide. Suicidal behaviors, including suicide attempts (SA) and suicide ideations (SI), are leading risk factors for death by suicide. Information related to patients' previous and current SA and SI is frequently documented in electronic health record (EHR) notes. Accurate detection of such documentation may help improve surveillance and prediction of patients' suicidal behaviors and alert medical professionals for suicide prevention efforts. In this study, we first built the Suicide Attempt and Ideation Events (ScAN) dataset, a subset of the publicly available MIMIC III dataset, spanning 12k+ EHR notes with 19k+ annotated SA and SI events. The annotations also contain attributes such as the method of suicide attempt. We also provide a strong baseline model, ScANER (Suicide Attempt and Ideation Events Retriever), a multi-task RoBERTa-based model with a retrieval module to extract all the relevant suicidal behavioral evidence from EHR notes of a hospital stay, and a prediction module to identify the type of suicidal behavior (SA or SI) concluded during the patient's stay at the hospital. ScANER achieved a macro-weighted F1-score of 0.83 for identifying suicidal behavioral evidence and macro F1-scores of 0.78 and 0.60 for the classification of SA and SI for the patient's hospital stay, respectively. ScAN and ScANER are publicly available.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
296,751